January 20, 2020

The Meson Manual is now available for purchase

Some of you might remember that last year I ran a crowdfunding campaign to create a full written user manual for Meson. That failed fairly spectacularly, mostly due to the difficulty of getting any sort of visibility for these kinds of projects (i.e. on the Internet, everything drowns).

Not taking the hint, I chose to write and publish it on my own anyway. It is now available on this web page for the price of 29.95€ plus a tax that depends on the country of purchase. Some countries with unreasonable requirements for foreign online sellers, such as India, Russia and South Korea, have been geoblocked. Sorry about that. However, you can still buy the book if you are traveling outside the country in question, but in that case all tax responsibilities for importing fall upon you.

What if you don't care about books?

I don't have a Patreon or any other crowdfunding thing ongoing, because of the considerable legal uncertainties of running a donation based service for the public good in Finland. Selling digital goods for money is fine, so this is a convenient way for people to support my work on Meson financially.

Will the book be made available under a free license?

No. We already have one set of free documentation on the project web site. Everyone is free to use and contribute to that documentation. This book contains no text from the existing documentation; it is all new and written from scratch.

Is it available as a hard copy?

No, the only available format is PDF. This is partly to save trees and partly because international shipping of physical items is both time-consuming and expensive.

Getting review copies

If you are a journalist and wish to write a review of the book for a publication, send me an email and I'll provide you with a free review copy.

When was the book first made public?

It was announced at the very beginning of my LCA2020 talk. See it for yourself in the embedded video below.

Can you post about this on your favourite social media site / news aggregator / etc?

Yes, by all means. It is hard to get visibility otherwise, so I appreciate all the help I can get.

What was that site's URL again?


Verifying your system state in a secure and private way

Most modern PCs have a Trusted Platform Module (TPM) and firmware that, together, support something called Trusted Boot. In Trusted Boot, each component in the boot chain generates a series of measurements of the next component of the boot process and of relevant configuration. These measurements are pushed to the TPM where they're combined with the existing values stored in a series of Platform Configuration Registers (PCRs) in such a way that the final PCR value depends on both the value and the order of the measurements it's given. If any measurements change, the final PCR value changes.
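To make the extend operation concrete, here is a minimal Python sketch of how a PCR accumulates measurements, assuming SHA-256 PCRs; the measured components are hypothetical:

import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # A PCR is never written directly: the TPM hashes the current value
    # concatenated with the new measurement, so the result depends on every
    # measurement and on the order in which they arrived.
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # SHA-256 PCRs start out as all zeroes
for component in [b"firmware", b"bootloader", b"kernel"]:  # hypothetical boot chain
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())
print(pcr.hex())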

Windows takes advantage of this with its Bitlocker disk encryption technology. The disk encryption key is stored in the TPM along with a policy that tells it to release it only if a specific set of PCR values is correct. By default, the TPM will release the encryption key automatically if the PCR values match and the system will just transparently boot. If someone tampers with the boot process or configuration, the PCR values will no longer match and boot will halt to allow the user to provide the disk key in some other way.

Unfortunately the TPM keeps no record of how it got to a specific state. If the PCR values don't match, that's all we know - the TPM is unable to tell us what changed to result in this breakage. Fortunately, the system firmware maintains an event log as we go along. Each measurement that's pushed to the TPM is accompanied by a new entry in the event log, containing not only the hash that was pushed to the TPM but also metadata that tells us what was measured and why. Since the algorithm the TPM uses to calculate the hash values is known, we can replay the same values from the event log and verify that we end up with the same final value that's in the TPM. We can then examine the event log to see what changed.
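As a rough illustration of the replay step, here is a hedged Python sketch: given a parsed event log (the digests plus metadata, in the order they were extended), we recompute the expected PCR value and compare it with whatever the TPM reports. The log entries here are made up; real logs follow the TCG event log format:

import hashlib

def replay_event_log(events):
    # Recompute a PCR value from the digests recorded in the event log.
    pcr = bytes(32)  # a SHA-256 PCR starts out as all zeroes
    for event in events:
        # Each entry carries the digest that was pushed to the TPM plus
        # metadata describing what was measured and why.
        pcr = hashlib.sha256(pcr + event["digest"]).digest()
    return pcr

# Hypothetical parsed entries; a real log also records event types and data.
events = [
    {"digest": hashlib.sha256(b"firmware").digest(), "what": "platform firmware"},
    {"digest": hashlib.sha256(b"bootloader").digest(), "what": "boot loader"},
]

replayed = replay_event_log(events)
pcr_from_tpm = replayed  # in reality this value is read (or quoted) from the TPM
print("log matches TPM" if replayed == pcr_from_tpm else "log was tampered with")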

Unfortunately, the event log is stored in unprotected system RAM. In order to be able to trust it we need to compare the values in the event log (which can be tampered with) with the values in the TPM (which are much harder to tamper with). Unfortunately if someone has tampered with the event log then they could also have tampered with the bits of the OS that are doing that comparison. Put simply, if the machine is in a potentially untrustworthy state, we can't trust that machine to tell us anything about itself.

This is solved using a procedure called Remote Attestation. The TPM can be asked to provide a digital signature of the PCR values, and this can be passed to a remote system along with the event log. That remote system can then examine the event log, make sure it corresponds to the signed PCR values and make a security decision based on the contents of the event log rather than just on the final PCR values. This makes the system significantly more flexible and aids diagnostics. Unfortunately, it also means you need a remote server, an internet connection, some way for that remote server to tell you whether it thinks your system is trustworthy, and some way to believe that the remote server itself is trustworthy. All of which is, well, not ideal if you're not an enterprise.

Last week I gave a talk at linux.conf.au on one way around this. Basically, remote attestation places no constraints on the network protocol in use - while the implementations that exist all do this over IP, there's no requirement for them to do so. So I wrote an implementation that runs over Bluetooth, in theory allowing you to use your phone to serve as the remote agent. If you trust your phone, you can use it as a tool for determining if you should trust your laptop.

I've pushed some code that demos this. The current implementation does nothing other than tell you whether UEFI Secure Boot was enabled or not, and it's also not currently running on a phone. The phone bit of this is pretty straightforward to fix, but the rest is somewhat harder.

The big issue we face is that we frequently don't know what event log values we should be seeing. The first few values are produced by the system firmware and there's no standardised way to publish the expected values. The Linux Vendor Firmware Service has support for publishing these values, so for some systems we can get hold of this. But then you get to measurements of your bootloader and kernel, and those change every time you do an update. Ideally we'd have tooling for Linux distributions to publish known good values for each package version and for that to be common across distributions. This would allow tools to download metadata and verify that measurements correspond to legitimate builds from the distribution in question.
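To sketch what such tooling might do with that metadata, here is a hedged Python fragment that checks individual event log entries against a table of known-good digests published by a distribution. Both the metadata format and the digests are invented for illustration:

import hashlib

# Hypothetical distribution metadata: the digest each component should produce.
known_good = {
    "shim": hashlib.sha256(b"shim build 15.4-5").hexdigest(),
    "grub2": hashlib.sha256(b"grub2 build 2.04-31").hexdigest(),
    "kernel": hashlib.sha256(b"kernel build 5.4.13-201").hexdigest(),
}

# Hypothetical parsed event log: (component, digest) pairs in boot order.
event_log = [
    ("shim", hashlib.sha256(b"shim build 15.4-5").hexdigest()),
    ("grub2", hashlib.sha256(b"grub2 build 2.04-31").hexdigest()),
    ("kernel", hashlib.sha256(b"something unexpected").hexdigest()),
]

for component, digest in event_log:
    if known_good.get(component) != digest:
        print(component, "does not match any known-good build")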

This does still leave the problem of the initramfs. Since initramfs files are usually generated locally, and depend on the locally installed versions of tools at the point they're built, we end up with no good way to precalculate those values. I proposed a possible solution to this a while back, but have done absolutely nothing to help make that happen. I suck. The right way to do this may actually just be to turn initramfs images into pre-built artifacts and figure out the config at runtime (dracut actually supports a bunch of this already), so I'm going to spend a while playing with that.

If we can pull these pieces together then we can get to a place where you can boot your laptop and then, before typing any authentication details, have your phone compare each component in the boot process to expected values. Assistance in all of this extremely gratefully received.


January 18, 2020

Digitizing an analog water meter

For a university project I spent some time working on digitally tracking the water consumption in my shared flat. Since nowadays everything is about data collection, I wanted to give this idea a shot. In my flat we have a simple analog water meter.

Sadly, my meter is really dirty under the glass and I couldn't manage to clean it. This will cause problems down the road.

The initial idea was easy: add a webcam on top of the meter and read the number on its upper half. But I soon realized that the project wouldn't be that simple. The number only counts full 1 m^3 (1000 liter) units, which means it would change only every couple of days, which is useless and boring. So I had to read the analog gauges, which show the fractions of 0.0001, 0.001, 0.01 and 0.1 m^3. This discovery blocked me, and I was like "this is way too complicated".

I have no idea how I found or what reminded me of OpenCV, but that was the solution. OpenCV is an awesome tool for computer vision; it has many features like facial recognition, gesture recognition … and also shape recognition. What's an analog gauge? It's just a circle with a triangular arrow indicating the value.

Let's jump into the project

I'm using a Raspberry Pi 1, a Logitech webcam, a juice bottle and some LEDs out of a bicycle light.

You need to find a juice bottle which fits nicely over the water meter. Cut off the top and bottom of the bottle and replace one side with cardboard or wood with a hole in the middle. Attach the webcam centered over the hole and place an LED on each side of the webcam to illuminate the water meter (you may need to cover them with paper to reduce reflections on the plastic of the meter).

The first step is to set up the Raspberry Pi 1 (it doesn't have to be an RPi, any computer running Linux should work fine). You have to install a Linux distro on the device; I used Arch Linux. You can find a guide to install it on a Raspberry Pi 1 here.

After the initial setup you need to install git, python3 and opencv:
sudo pacman -S python3 git opencv
Clone the needed code to a known location:
git clone https://github.com/jsparber/water-meter-code.git
You need to create a new git repository to store the data and clone it to /home/alarm/water-meter-data/. If you want to use a different name or location, you need to modify the name in measure.sh.

On the RPi I have a cron job which runs a script every minute. The script turns on the LEDs, takes a picture and then turns the LEDs off again, to save some energy.
With crontab -e you can modify the cron jobs; add * * * * * /home/alarm/code/take_photo.sh to run take_photo.sh every minute. You may need to adjust the path depending on where you cloned the git repo.

After the picture is taken, it calls a second script which uses OpenCV to read the gauges and appends the found values to a file, which is then pushed to the git repo. I had an issue with the webcam: after some time my script couldn't access the webcam anymore. I solved it by rebooting the RPi when it wasn't possible to take a picture. (I did a quick search on the internet; most people solved this issue by changing the cam.)

A nice optional feature is the homemade switch connected to the RPi in the above picture. The schematic is really simple: it's only a 1 kOhm resistor, a transistor and a USB extension cable. The transistor is switched on via GPIO pin 18 of the Raspberry Pi and gives power to the connected USB device. In this case I used it to connect the LEDs.

Inside the USB extension cable there should be 4 differently colored wires. We need to cut only the red one and connect it the same way as the schematic above shows, where red_in goes to the male connector and red_out to the female side of the cable. GND needs to be connected to a ground pin of the Raspberry Pi. If you need to power something which requires more than 500 mA, you should connect the ground directly to the power source, the same way as you did with the +5V red wire. You need to use the same power source for the switch and the RPi or it may not work.
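To give a rough idea of how the switch can be driven from a script, here is a small Python sketch using the RPi.GPIO library to power the LEDs around taking a picture. Pin 18 matches the schematic above; the sleep time is arbitrary:

import time
import RPi.GPIO as GPIO

LED_PIN = 18  # BCM numbering, the pin driving the transistor

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

GPIO.output(LED_PIN, GPIO.HIGH)  # power the USB LEDs
time.sleep(1)                    # give the webcam some light before capturing
# ... take the picture here ...
GPIO.output(LED_PIN, GPIO.LOW)   # turn the LEDs off again to save energy

GPIO.cleanup()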

And now the OpenCV part

First my code finds the circles of the right size in the image, and uses the two leftmost ones as the gauges for 0.1 m^3 and 0.01 m^3 (sadly, since my meter is so dirty, I can't reliably read the other two values).

The input image.
The found circles of the right size
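A minimal sketch of that circle detection with OpenCV's Hough transform could look like the following; the radius limits and Hough parameters are guesses that would need tuning for a real image, and the actual project code is in the repository linked at the end of the post:

import cv2
import numpy as np

img = cv2.imread("meter.jpg")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# Look for circles roughly the size of the gauges; all parameters are guesses.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=40, minRadius=20, maxRadius=60)

if circles is not None:
    circles = np.round(circles[0]).astype(int)
    # Sort by x coordinate and keep the two leftmost circles,
    # the 0.1 m^3 and 0.01 m^3 gauges.
    gauges = sorted(circles.tolist(), key=lambda c: c[0])[:2]
    for x, y, r in gauges:
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)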

As the second step I create a mask which filters out everything that's not red (remember, the arrows are red). I take the contour of the mask which encloses the center of the circle I want to read. Then the code finds the farthest point from the center of the circle, which is the tip of the arrow. The software then creates a virtual line between the center and the tip, which is used to calculate the angle, which is basically the value shown on the gauge. The same thing is repeated for the other gauges.

The mask with only red areas showing.
The arrows found on the source image. These lines are used to calculate the angle.
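A hedged sketch of that second step for a single gauge: mask the red arrow in HSV space, take the contour that encloses the gauge center, find its farthest point from the center and convert the direction into a 0..10 reading. The HSV thresholds are assumptions and the contour API follows OpenCV 4:

import math
import cv2

def read_gauge(img_bgr, center):
    # Return the gauge reading (0..10) for one circle found in the previous step.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges (thresholds are guesses).
    mask = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 70, 50), (180, 255, 255))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cx, cy = center
    for cnt in contours:
        # Pick the contour that encloses the gauge center: that is the arrow.
        if cv2.pointPolygonTest(cnt, (float(cx), float(cy)), False) < 0:
            continue
        # The farthest contour point from the center is the tip of the arrow.
        tip = max(cnt[:, 0, :], key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        # Angle measured clockwise from "12 o'clock", mapped onto a 0..10 scale.
        angle = math.degrees(math.atan2(tip[0] - cx, -(tip[1] - cy))) % 360
        return angle / 36.0
    return None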

This system sounds extremely simple, but making everything work well together isn't that easy. OpenCV requires a lot of tuning, e.g. selecting the right red color range so that the arrows are detected reliably but detection keeps working even when the lighting changes.

Conclusions

I learned a lot during this project, especially about OpenCV, which I had never used before. Sadly my water meter was really dirty, so I couldn't read all the values and I also got some wrong readings. So far I haven't decided what I want to use the collected data for, therefore I didn't spend much time on finding a solution for read errors and for the problems I have when the gauges make a full turn. An easy solution would be to just keep an internal count of the water and, when we are unsure about a value, fall back to the memorized value.

The final plot can be found here. All values are saved directly without filtering; this causes the plot to have quite some noise, but it allows the filter function to be changed later and adapted to future needs.

My code is published on github:

Some sources which helped me a lot, many thanks to them:

January 17, 2020

Doing Things That Scale

There was a point in my life when I ran Arch, had an elaborate personalized terminal prompt, and my own custom icon theme. I stopped doing all these things at various points for different reasons, but underlying them all is a general feeling that it’s taken me some time to figure out how to articulate: I no longer want to invest time in things that don’t scale.

What I mean by that in particular is things that

  1. Only fix a problem for myself (and maybe a small group of others)
  2. Have to be maintained in perpetuity (by me)

Not only is it highly wasteful for me to come up with a custom solution to every problem, but in most cases those solutions would be worse than ones developed in collaboration with others. It also means nobody will help maintain these solutions in the long run, so I’ll be stuck with extra work, forever.

Conversely, things that scale

  1. Fix the problem in a way that will just work™ for most people, most of the time
  2. Are developed, used, and maintained by a wider community

A few examples:

I used to have an Arch GNU/Linux setup with tons of tweaks and customizations. These days I just run vanilla Fedora. It’s not perfect, but for actually getting things done it’s way better than what I had before. I’m also much happier knowing that if something goes seriously wrong I can reinstall and get to a usable system in half an hour, as opposed to several hours of tedious work for setting up Arch. Plus, this is a setup I can actually install for friends and relatives, because it does a decent job at getting people to update when I’m not around.

Until relatively recently I always set a custom monospace font in my editor and terminal when setting up a new machine. At some point I realized that I wouldn’t have to do that if the default was nicer, so I just opened an issue. A discussion ensued, a better default was agreed upon, and voilà — my problem was solved. One less thing to do after every install. And of course, everyone else now gets a nicer default font too!

I also used to use ZSH with a configuration framework and various plugins to get autocompletion, git status, a fancy prompt etc. A few years ago I switched to fish. It gives me most of what I used to get from my custom ZSH thing, but it does so out of the box, no configuration needed. Of course ideally we’d have all of these things in the default shell so everyone gets these features for free, but that’s hard to do unfortunately (if you’re interested in making it happen I’d love to talk!).

Years ago I used to maintain my own extension set to the Faenza icon theme, because Faenza didn’t cover every app I was using. Eventually I realized that trying to draw a consistent icon for every single third party app was impossible. The more icons I added, the more those few apps that didn’t have custom icons stuck out. Nowadays when I see an app with a poor icon I file an issue asking if the developer would like help with a nicer one. This has worked out great in most cases, and now I probably have more consistent app icons on my system than back when I used a custom theme. And of course, everyone gets to enjoy the nicer icons, not only me.

Some other things that don’t scale (in no particular order):

  • Separate home partition
  • Dotfiles
  • Non-trivial downstream patches
  • Manual tracker/cookie/Javascript blocking (I use uMatrix, which is already a lot nicer than NoScript, but still a pretty terrible experience)
  • Multiple Firefox profiles
  • User styles on websites
  • Running your blog on a static site generator
  • Manual backups
  • Encrypted email
  • Hosting your own email (and self-hosting more generally)
  • Google-free Android (I use Lineage on a Pixel 1, it’s a pretty miserable existence)
  • Buying a Windows computer and installing GNU/Linux
  • Auto-starting apps
  • Custom keyboard shortcuts, e.g. for launching apps (I still have a few of these, mostly because of muscle memory)

The free software community tends to celebrate custom, hacky solutions to problems as something positive (“It’s so flexible!”), even when these hacks are only necessary because things are broken by default. It’s nice that people with a lot of time and technical skills can fix their own problems, but the benefits from that don’t automatically trickle down to everybody else.

If we want ethical technology to become accessible to more people, we need to invest our (very limited) time and energy in solutions that scale. This means good defaults instead of endless customization, apps instead of scripts, “it just works” instead of “read the fucking manual”. The extra effort to make proper solutions that work for everyone, rather than hacks just for ourselves, can seem daunting, but it is always worth it in the long run. Just as with accessibility and commenting your code, the person most likely to benefit from it is you, in the future.

Plug and play support for (Gaming) keyboards with a builtin LCD panel

A while ago as a spin-off of my project to improve support for Logitech wireless keyboards and mice I have also done some work on improving support for (Gaming) keyboards with a builtin LCD panel.

Specifically, if you have a Logitech MX5000, G15, G15 v2 or G510 and you want the LCD panel to show something somewhat useful, then on Fedora 31 you can now install the lcdproc package and it will automatically recognize the keyboard and show "top"-like information on it. No need to manually write an LCDd.conf or anything; this works fully plug and play:

sudo dnf install lcdproc
sudo udevadm trigger


If you have an MX5000 and you do not want the LCD panel to show "top"-like info, you may still want to install the mx5000tools package; this will automatically send the system time to the keyboard, after which it will display the time.

Once the 5.5 kernel becomes available as an update for Fedora, you will also be able to use the keys surrounding the LCD panel to control the lcdproc menus on the LCD panel. The 5.5 kernel will also export keyboard backlight brightness control through the standardized /sys/class/leds API, so that you can control it from e.g. the GNOME control-center's power settings, and you get a nice OSD when toggling the brightness level using the key on the keyboard.

The 5.5 kernel will also make the "G" keys send standard input events (evdev events). Once userspace support for the new keycodes these send has landed, this will allow e.g. binding them to actions in GNOME control-center's keyboard settings, but only under Wayland, as the new keycodes are > 255 and X11 does not support this.

January 16, 2020

GTK: OSX a11y support

Everybody knows that I have always been a firm believer in Gtk+’s potential to be a great cross-platform toolkit beyond Linux. GIMP and Inkscape, for example, are loyal users that ship builds for those platforms. The main challenge is the small number of maintainers running, testing and improving those platforms.

Gtk+ has a few shortcomings; one of the biggest ones is the lack of a11y support outside of Linux. Since I have regular access to a modern OSX machine, I decided to give this a go (and maybe learn some Obj-C in the process).

So I started by having a look at how ATK works and how it relates to the GTK DOM. My main goal was to have a GTK3 module that would walk through the toplevels and build an OSX accessibility tree.

So my initial/naive attempt is in this git repo, which you can build by installing gtk from brew.

Some of the shortcomings that I have found to actually test this and move forward:

  • Running gtk3-demo creates an NSApp that has no accessibility enabled; you can tell because the a11y inspector that comes with Xcode won’t show metadata even for the window decorator controls. I have no idea how to enable that manually; it looks like starting an actual NSApp, like Inkscape and GIMP do, would give you part of that.
  • Inkscape and GIMP have custom code to register themselves as an actual NSApp as well as to override XDG env vars at runtime to set the right paths. I suspect this is something we could move to GApplication and GtkApplication.
  • The .dylib I generate with this repo will not actually load on Inkscape for some reason, so as of now I am stuck with having to somehow build a replacement gtk dylib for Inkscape with my code instead of through an actual module.

So this is my progress thus far. I think once I get to a point where I can iterate on the concept, it will be easier to start sketching the mapping between ATK and NSAccessibility. I would love feedback or help, so if you are interested please reach out by filing an issue on the gitlab project!

January 15, 2020

Exposing C and Rust APIs: some thoughts from librsvg

Librsvg exports two public APIs: the C API that is in turn available to other languages through GObject Introspection, and the Rust API.

You could call this a use of the facade pattern on top of the rsvg_internals crate. That crate is the actual implementation of librsvg, and exports an interface with many knobs that are not exposed from the public APIs. The knobs are to allow for the variations in each of those APIs.

This post is about some interesting things that have come up during the creation/separation of those public APIs, and the implications of having an internals library that implements both.

Initial code organization

When librsvg was being ported to Rust, it just had an rsvg_internals crate that compiled as a staticlib to a .a library, which was later linked into the final librsvg.so.

Eventually the code got to the point where it was feasible to port the toplevel C API to Rust. This was relatively easy to do, since everything else underneath was already in Rust. At that point I became interested in also having a Rust API for librsvg — first to port the test suite to Rust and be able to run tests in parallel, and then to actually have a public API in Rust with more modern idioms than the historical, GObject-based API in C.

Version 2.45.5, from February 2019, is the last release that only had a C API.

Most of the C API of librsvg is in the RsvgHandle class. An RsvgHandle gets loaded with SVG data from a file or a stream, and then gets rendered to a Cairo context. The naming of Rust source files more or less matched the C source files, so where there was rsvg-handle.c initially, later we had handle.rs with the Rustified part of that code.

So, handle.rs had the Rust internals of the RsvgHandle class, and a bunch of extern "C" functions callable from C. For example, for this function in the public C API:

void rsvg_handle_set_base_gfile (RsvgHandle *handle,
                                 GFile      *base_file);

The corresponding Rust implementation was this:

#[no_mangle]
pub unsafe extern "C" fn rsvg_handle_rust_set_base_gfile(
    raw_handle: *mut RsvgHandle,
    raw_gfile: *mut gio_sys::GFile,
) {
    let rhandle = get_rust_handle(raw_handle);        // 1

    assert!(!raw_gfile.is_null());                    // 2
    let file: gio::File = from_glib_none(raw_gfile);  // 3

    rhandle.set_base_gfile(&file);                    // 4
}
  1. Get the Rust struct corresponding to the C GObject.
  2. Check the arguments.
  3. Convert from C GObject reference to Rust reference.
  4. Call the actual implementation of set_base_gfile in the Rust struct.

You can see that this function takes in arguments with C types, and converts them to Rust types. It's basically just glue between the C code and the actual implementation.

Then, the actual implementation of set_base_gfile looked like this:

impl Handle {
    fn set_base_gfile(&self, file: &gio::File) {
        if let Some(uri) = file.get_uri() {
            self.set_base_url(&uri);
        } else {
            rsvg_g_warning("file has no URI; will not set the base URI");
        }
    }
}

This is an actual method for a Rust Handle struct, and takes Rust types as arguments — no conversions are necessary here. However, there is a pesky call to rsvg_g_warning, about which I'll talk later.

I found it cleanest, although not the shortest code, to structure things like this:

  • C code: bunch of stub functions where rsvg_blah just calls a corresponding rsvg_rust_blah.

  • Toplevel Rust code: bunch of #[no_mangle] unsafe extern "C" fn rust_blah() that convert from C argument types to Rust types, and call safe Rust functions — for librsvg, these happened to be methods for a struct. Before returning, the toplevel functions convert Rust return values to C return values, and do things like converting the Err(E) of a Result<> into a GError or a boolean or whatever the traditional C API required.

In the very first versions of the code where the public API was implemented in Rust, the extern "C" functions actually contained their implementation. However, after some refactoring, it turned out to be cleaner to leave those functions just with the task of converting C to Rust types and vice-versa, and put the actual implementation in very Rust-y code. This made it easier to keep the unsafe conversion code (unsafe because it deals with raw pointers coming from C) only in the toplevel functions.

Growing out a Rust API

This commit is where the new, public Rust API started. That commit just created a Cargo workspace with two crates; the rsvg_internals crate that we already had, and a librsvg_crate with the public Rust API.

The commits over the subsequent couple of months are of intense refactoring:

  • This commit moves the unsafe extern "C" functions to a separate c_api.rs source file. This leaves handle.rs with only the safe Rust implementation of the toplevel API, and c_api.rs with the unsafe entry points that mostly just convert argument types, return values, and errors.

  • The API primitives get expanded to allow for a public Rust API that is "hard to misuse" unlike the C API, which needs to be called in a certain order.

Needing to call a C macro

However, there was a little problem. The Rust code cannot call g_warning, a C macro in glib that prints a message to stderr or uses structured logging. Librsvg used that to signal conditions where something went (recoverably) wrong, but there was no way to return a proper error code to the caller — it's mainly used as a debugging aid.

This is how the rsvg_internals crate used to be able to call that C macro:

First, the C code exports a function that just calls the macro:

/* This function exists just so that we can effectively call g_warning() from Rust,
 * since glib-rs doesn't bind the g_log functions yet.
 */
void
rsvg_g_warning_from_c(const char *msg)
{
    g_warning ("%s", msg);
}

Second, the Rust code binds that function to be callable from Rust:

pub fn rsvg_g_warning(msg: &str) {
    extern "C" {
        fn rsvg_g_warning_from_c(msg: *const libc::c_char);
    }

    unsafe {
        rsvg_g_warning_from_c(msg.to_glib_none().0);
    }
}

However! Since the standalone librsvg_crate does not link to the C code from the public librsvg.so, the helper rsvg_g_warning_from_c is not available!

A configuration feature for the internals library

And yet! Those warnings are only meaningful for the C API, which is not able to return error codes from all situations. However, the Rust API is able to do that, and so doesn't need the warnings printed to stderr. My first solution was to add a build-time option for whether the rsvg_internals library is being built for the C library, or for the Rust one.

In case we are building for the C library, the code calls rsvg_g_warning_from_c as usual.

But in case we are building for the Rust library, that code is a no-op.

This is the bit in rsvg_internals/Cargo.toml to declare the feature:

[features]
# Enables calling g_warning() when built as part of librsvg.so
c-library = []

And this is the corresponding code:

#[cfg(feature = "c-library")]
pub fn rsvg_g_warning(msg: &str) {
    unsafe {
        extern "C" {
            fn rsvg_g_warning_from_c(msg: *const libc::c_char);
        }

        rsvg_g_warning_from_c(msg.to_glib_none().0);
    }
}

#[cfg(not(feature = "c-library"))]
pub fn rsvg_g_warning(_msg: &str) {
    // The only callers of this are in handle.rs. When those functions
    // are called from the Rust API, they are able to return a
    // meaningful error code, but the C API isn't - so they issue a
    // g_warning() instead.
}

The first function is the one that is compiled when the c-library feature is enabled; this happens when building rsvg_internals to link into librsvg.so.

The second function does nothing; it is what is compiled when rsvg_internals is being used just from the librsvg_crate crate with the Rust API.

While this worked well, it meant that the internals library was built twice on each compilation run of the whole librsvg module: once for librsvg.so, and once for librsvg_crate.

Making programming errors a g_critical

While g_warning() means "something went wrong, but the program will continue", g_critical() means "there is a programming error". For historical reasons GLib does not abort when g_critical() is called, unless you set G_DEBUG=fatal-criticals or run a development version of GLib.

This commit turned warnings into critical errors when the C API was called out of order, by using a similar rsvg_g_critical_from_c() wrapper for a C macro.

Separating the C-callable code into yet another crate

To recapitulate, at that point we had this:

librsvg/
|  Cargo.toml - declares the Cargo workspace
|
+- rsvg_internals/
|  |  Cargo.toml
|  +- src/
|       c_api.rs - convert types and return values, call into implementation
|       handle.rs - actual implementation
|       *.rs - all the other internals
|
+- librsvg/
|    *.c - stub functions that call into Rust
|    rsvg-base.c - contains rsvg_g_warning_from_c() among others
|
+- librsvg_crate/
   |  Cargo.toml
   +- src/
   |    lib.rs - public Rust API
   +- tests/ - tests for the public Rust API
        *.rs

At this point c_api.rs with all the unsafe functions looked out of place. That code is only relevant to librsvg.so — the public C API —, not to the Rust API in librsvg_crate.

I started moving the C API glue to a separate librsvg_c_api crate that lives along with the C stubs:

+- librsvg/
|    *.c - stub functions that call into Rust
|    rsvg-base.c - contains rsvg_g_warning_from_c() among others
|    Cargo.toml
|    c_api.rs - what we had before

This made the dependencies look like the following:

      rsvg_internals
       ^           ^
       |             \
       |               \
librsvg_crate     librsvg_c_api
  (Rust API)             ^
                         |
                    librsvg.so
                      (C API)

And also, this made it possible to remove the configuration feature for rsvg_internals, since the code that calls rsvg_g_warning_from_c now lives in librsvg_c_api.

With that, rsvg_internals is compiled only once, as it should be.

This also helped clean up some code in the internals library. Deprecated functions that render SVGs directly to GdkPixbuf are now in librsvg_c_api and don't clutter the rsvg_internals library. All the GObject boilerplate is there as well now; rsvg_internals is mostly safe code except for the glue to libxml2.

Summary

It was useful to move all the code that dealt with incoming C types, outgoing C return values and errors into the same place, and to separate it from the "pure Rust" code.

This took gradual refactoring and was not done in a single step, but it left the resulting Rust code rather nice and clean.

When we added a new public Rust API, we had to shuffle some code around that could only be linked in the context of a C library.

Compile-time configuration features are useful (like #ifdef in the C world), but they do cause double compilation if you need a C-internals and a Rust-internals library from the same code.

Having proper error reporting throughout the Rust code is a lot of work, but pretty much invaluable. The glue code to C can then convert and expose those errors as needed.

If you need both C and Rust APIs into the same code base, you may end up naturally using a facade pattern for each. It helps to gradually refactor the internals to be as "pure idiomatic Rust" as possible, while letting API idiosyncrasies bubble up to each individual facade.

New essay: A DAG of components – for an internal architecture too

I’ve written a new essay: A DAG of components – for an internal architecture too

I’ve also set up a public git repository with the sources (and backup of the PDFs).

List of all my essays (two so far).

January 14, 2020

Introducing GVariant schemas

GLib supports a binary data format called GVariant, which is commonly used to store various forms of application data. For example, it is used to store the dconf database and as the on-disk data in OSTree repositories.

The GVariant serialization format is very interesting. It has a recursive type system (based on the DBus types) and is very compact. At the same time it includes padding to correctly align types for direct CPU reads and has constant-time element lookup for arrays and tuples. This makes GVariant a very good format for efficient in-memory read-only access.

Unfortunately the APIs that GLib has for accessing variants are not always great. They are based on using type strings and accessing children via integer indexes. While this is very dynamic and flexible (especially when creating variants) it isn’t a great fit for the case where you have serialized data in a format that is known ahead of time.

Some negative aspects are:

  • Each g_variant_get_child() call allocates a new object.
  • There is a lot of unavoidable (atomic) refcounting.
  • It always uses generic codepaths even if the format is known.

If you look at some other binary formats, like Google protobuf or Cap’n Proto, they work by describing the types your program uses in a schema, which is compiled into code that you use to work with the data.

For many use-cases this kind of setup makes a lot of sense, so why not do the same with the GVariant format?

With the new GVariant Schema Compiler you can!

It uses an interface definition language where you define the types, including extra information like field names and other attributes, from which it generates C code.

For example, given the following schema:

type Gadget {
  name: string;
  size: {
    width: int32;
    height: int32;
  };
  array: []int32;
  dict: [string]int32;
};

It generates (among other things) these accessors:

const char *    gadget_ref_get_name   (GadgetRef v);
GadgetSizeRef   gadget_ref_get_size   (GadgetRef v);
Arrayofint32Ref gadget_ref_get_array  (GadgetRef v);
const gint32 *  gadget_ref_peek_array (GadgetRef v,
                                       gsize    *len);
GadgetDictRef   gadget_ref_get_dict   (GadgetRef v);

gint32 gadget_size_ref_get_width  (GadgetSizeRef v);
gint32 gadget_size_ref_get_height (GadgetSizeRef v);

gsize  arrayofint32_ref_get_length (Arrayofint32Ref v);
gint32 arrayofint32_ref_get_at     (Arrayofint32Ref v,
                                    gsize           index);

gboolean gadget_dict_ref_lookup (GadgetDictRef v,
                                 const char   *key,
                                 gint32       *out);

Not only are these accessors easier to use and understand due to using C types and field names instead of type strings and integer indexes, they are also a lot faster.

I wrote a simple performance test that just decodes a structure over and over. It's clearly a very artificial test, but the generated code is over 600 times faster than the code using g_variant_get(), which I think still says something.

Additionally, the compiler has a lot of other useful features:

  • You can add a custom prefix to all generated symbols.
  • All fixed size types generate C struct types that match the binary format, which can be used directly instead of the accessor functions.
  • Dictionary keys can be declared sorted: [sorted string] { ... } which causes the generated lookup function to use binary search.
  • Fields can declare endianness: foo: bigendian int32 which will be automatically decoded when using the generated getters.
  • Typenames can be declared ahead of time and used like foo: []Foo, or declared inline: foo: [] 'Foo { ... }. If you don’t name the type it will be named based on the fieldname.
  • All types get generated format functions that are (mostly) compatible with g_variant_print().

GtkSourceView on GTK 4

I spent some time this cycle porting GtkSourceView to GTK 4. It was a good opportunity to help me catch up on how GTK 4’s internals have changed into something modern. It gave me a chance to fix a few pot-holes along the way too.

One of the pot-holes was one I left in GtkTextView years ago. When I plumbed the pixelcache into GTK 3’s TextView I had only cached the primary text content. It seemed fine at the time because the gutters (used for line numbers) are just not that many pixels. So if we have to re-generate that every frame, so be it.

However, in a HiDPI world and 4k monitors on our laps things start to get… warm. So while changing the drawing model in GtkTextView we decided to make the GtkTextView gutters real widgets. Doing so means that GtkSourceGutterRenderer will be a real GtkWidget going forward and can do all sorts of neat stuff widgets can do.

But to address the speed of rendering we needed a better way to avoid walking the text btree linearly so many times while rendering the gutter. I’ve added a new class GtkSourceGutterLines to allow collecting information about the text buffer in one-pass. The renderers can then use that information when creating render nodes to avoid further tree scans.

I have some other plans for what I’d like to see before a 5.0 of GtkSourceView. I’ve already written a more memory-compact undo/redo engine for GTK’s GtkTextView, GtkEntry, GtkText, and friends which allowed me to delete that code from the GtkSourceView port. Better yet, you get undo/redo in all the places you would, well, expect it.

In particular I would like to see the async+GListModel based API for completion from Builder land upstream. Builder also has a robust snippet engine which could be reusable from GtkSourceView as that is a fairly useful thing across editors. Perhaps we could extract Builder’s indenter APIs and movements engine too. These are used by Builder’s Vim emulation quite heavily, for example.

If you like following development of stuff I’m doing you can always get that fix here on Twitter given my blogging infrequency.

January 09, 2020

GNOME 3.34.3 in Fedora 31 updates-testing

Just a quick heads up that GNOME 3.34.3 just hit Fedora 31 updates-testing repo. It’s a fairly small update; mostly just gnome-shell/mutter fixes and translation updates to leaf applications.

If you are a GNOME user, please install the update from updates-testing and give it a quick spin and leave karma in the feedback section at https://bodhi.fedoraproject.org/updates/FEDORA-2020-194da76ba0

Thanks!

January 08, 2020

Last month in Tracker

Here’s an incomplete report of some work done on Tracker during the last month!

Bugs

Jean Felder fixed a thorny issue that was causing wrong track durations for MP3s.

Rasmus Thomsen has been testing on Alpine Linux, fixing one issue and finding several more. Alpine Linux uses musl libc instead of the more common GNU libc, which triggers bugs that we don’t usually see. Finding and fixing these issues could be a great learning experience for someone who wants to dig deep into the platform!

There’s an ongoing issue reported by many Ubuntu users which seems to be due to SQLite database corruption. SQLite is rather a black box to me, so I don’t know how or when we might get to the bottom of why this corruption is happening.

Ubuntu CI

We now test each commit on Ubuntu as well as Fedora. This is a nice step forward. It’s also triggering more intermittent failures in the CI — we’ve made huge progress in the last few years on bringing the CI up from zero, but there are some latent issues like these which we need to get rid of.

Tracker 3.0

Carlos has done more architectural work in the ‘master’ branch, working towards having a generic SPARQL store in tracker.git, and all GNOME/desktop/filesystem related code in tracker-miners.git.

As part of this, the tracker CLI tool is now split between tracker.git and tracker-miners.git (MR1, MR2).

We also moved the libtracker-control and libtracker-miner libraries into tracker-miners.git, and made the libtracker-control API private. As far as I know, the libtracker-control library is only being used by GNOME Photos to manage indexing of removable devices. We want to keep track of which apps need porting to 3.0, so please let me know if this is going to affect anything else.

New website

Tracker is famous enough that it merits a real website, not just an outdated set of wiki pages. So I made a real Tracker website, aiming to collect links to relevant user and developer documentation and to have a minimal overview and FAQ section. We can build and deploy this straight from the tracker.git repo, so whereas the wiki is easily forgotten, the new website lives in the same repo as the source code. The next step will be to merge this and then tidy up most of the old wiki pages.

 

January 07, 2020

Introducing gtherm

Continuous temperature monitoring from the kernel's /sys/class/thermal/ in an application can be cumbersome. gtherm aims to make that simpler by providing a daemon (gthd) that exports thermal zones and cooling devices over DBus, and a small library, libgtherm (with GObject introspection bindings). gthcli is a simple command line client that displays the currently found values:
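For comparison, this is roughly the kind of sysfs boilerplate an application has to write today, and which gtherm's DBus interface is meant to hide; a minimal Python sketch (the zone layout varies between devices):

from pathlib import Path

# Each thermal zone exposes a type and a temperature in millidegrees Celsius.
for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
    zone_type = (zone / "type").read_text().strip()
    millideg = int((zone / "temp").read_text().strip())
    print(f"{zone_type}: {millideg / 1000:.2f} °C")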

Thermal Zones
-------------
      dbus path: /org/sigxcpu/Thermal/ThermalZone/0
           type: cpu-thermal
    temperature: 53,00°C
cooling devices: /org/sigxcpu/Thermal/CoolingDevice/0

      dbus path: /org/sigxcpu/Thermal/ThermalZone/3
           type: max170xx_battery
    temperature: 36,60°C

      dbus path: /org/sigxcpu/Thermal/ThermalZone/2
           type: vpu-thermal
    temperature: 54,00°C

      dbus path: /org/sigxcpu/Thermal/ThermalZone/1
           type: gpu-thermal
    temperature: 54,00°C
cooling devices: /org/sigxcpu/Thermal/CoolingDevice/1

Cooling Devices
---------------
    dbus path: /org/sigxcpu/Thermal/CoolingDevice/0
         type: thermal-idle-0
    max state: 100
current state: 0

    dbus path: /org/sigxcpu/Thermal/CoolingDevice/1
         type: 38000000.gpu
    max state: 6
current state: 0

There's support for gnome-usage in the works:

gnome-usage thermal view

Next up is support for trip points (and maybe tuning cooling behaviour from userspace later on).

January 05, 2020

Introducing geewallet

Version 0.4.2.187 of geewallet has just been published to the snap store! You can install it by looking for its name in the store or by installing it from the command line with `snap install geewallet`. It features a very simplistic and minimalistic UI/UX. Nothing very fancy, especially because it has a single codebase that targets many (potential) platforms, e.g. you can also find it in the Android App Store.

What was my motivation to create geewallet in the first place, around 2 years ago? Well, I was very excited about the “global computing platform” that Ethereum was promising. At the time, I thought it would be the best replacement for Namecoin: a decentralised naming system, but not focused on that aspect alone, instead bringing Turing-completeness so that you can build whatever you want on top of it, not just a key-value store. So then, I got ahold of some ethers to play with the platform. But back then, I didn’t find any wallet that I liked, especially when considering security. Most people were copy+pasting their private keys into a website (!) called MyEtherWallet. Not only was this idea terrifying (since you had to trust not just the security skills of the sysadmin in charge of the domain&server, but also that the developers of the software wouldn’t turn rogue…), it was even worse than that: it was worse than using a normal hot wallet. And what I wanted was actually a cold wallet, a wallet that could run on an offline device, to make sure hacking it would be impossible (not faraday-cage-impossible, but reasonably impossible).

So there I did it, I created my own wallet.

After some weeks, I added bitcoin support to it thanks to the NBitcoin library (good work Nicholas!). After some months, I added a cross-platform UI alongside the first archaic command-line frontend. These days it looks like this:



What was my motivation to make geewallet a brain wallet? Well, at the time (and maybe nowadays too, at least before I unveil this project), the only decent brain wallet out there that seemed sufficiently secure against brute force attacks was WarpWallet, from the Keybase company. If you don’t believe in their approach, they have even placed a bounty on a decently small passphrase (so if you ever think that this kind of wallet could be hacked, you can be fairly certain that any cracker would target this bounty first, before thinking of you). The worst of it, again, was that to be able to use it you had to use a web interface, so you had the double-trust problem all over again. Now geewallet brings the same WarpWallet seed generation algorithm (backed by unit tests, of course) but in a desktop/mobile approach, so that you can own the hardware where the seed is generated. No need to write down long seeds of random words on pieces of paper anymore: your mind is the limit! (And of course geewallet will warn the user in case the passphrase is too short and simple: it even detects if all the words belong to the dictionary, to deter low entropy from the human perspective.)

Why did I add support for Litecoin and Ethereum Classic to the wallet? First, let me tell you that bitcoin and ethereum, as technological innovations and network effects, are very difficult to beat. And in fact, I’m not a fan of the proliferation of dubiously portrayed awesome new coins/tokens that claim to be as efficient and scalable as these first two. They would need to beat the network effect not only when it comes to users, but also developers (all the best cryptographers are working on Bitcoin and Ethereum technologies). However, Litecoin and Ethereum Classic are so similar to Bitcoin and Ethereum, respectively, that adding support for them was less than a day’s work. And they are not completely irrelevant: Litecoin may bring zero-knowledge proofs in an upcoming update soon (plus, its fees are lower today, so it’s a cheaper alternative testnet with real value); and Ethereum Classic has some inherent characteristics that may make it more decentralised than Ethereum in the long run (governance not following any cult of personality, plus it will remain a Turing-complete platform on top of Proof of Work instead of switching to Proof of Stake; to understand why this is important, I recommend you watch this video).

Another good reason why I started something like this from scratch is that I wanted to use F# in a real open source project. I had been playing with it in a personal (private) project 2 years before starting this one, so I wanted to show the world that you can build a decent desktop app with simple and not too opinionated/academic functional programming. It reuses all the power of the .NET platform: you get debuggers, you can target mobile devices, you get immutability by default; all three in one, in this decade, at last. (BTW, everything is written in F#, even the build scripts.)

What’s the roadmap of geewallet? The most important topics I want to cover shortly are three:
  • Make it even more user friendly: blockchain addresses are akin to the numeric IP addresses of the early 80s when DNS still didn’t exist. We plan to use either ENS or IPNS or BNS or OpenCAP so that people can identify recipients much more easily.
  • Implement Layer2 technologies: we’re already past the proof of concept phase. We have branches that can open channels. The promise of these technologies is instantaneous transactions (no waits!) and ridiculous (if not free) fees.
  • Switch the GTK Xamarin.Forms driver to work with the new “GtkSharp” binding under the hood, which doesn’t require glue libraries. (I’ve had quite a few nightmares with native dependencies/libs when building the sandboxed snap package!)
With less priority:
  • Integrate with some Rust projects: MimbleWimble(Grin) lib, the distributed COMIT project for trustless atomic swaps, or other Layer2-related ones such as rust-lightning.
  • Cryptography work: threshold keys or deniable encryption (think "duress" passwords).
  • NFC support (find recipients without QR codes!).
  • Tizen support (watches!).
  • Acceptance testing via UI Selenium tests (look up the Uno Platform).

Areas where I would love contributions from the community:
  • Flatpak support: unfortunately I haven’t had time to look at this sandboxing technology, but it shouldn’t be too hard to do, especially considering that there’s already a Mono-based project that supports it: SparkleShare.
  • Ubuntu packaging: there’s a patch blocked on some Ubuntu bug that makes the wallet (or any .NET app these days, as it affects the .NET package manager: nuget) not build on Ubuntu 19.10. If this patch is not merged soon, the next LTS of Ubuntu will have this bug :( As far as I understand, what needs to be solved is this issue so that the latest hotfixes are bundled. (BTW I have to thank Timotheus Pokorra, the person in charge of packaging Mono in Fedora, for his help on this matter so far.)
  • GNOME community: I’m in search of a home for this project. I don’t like that it lives under my GitLab username, because it’s not easy to find. One of the reasons I’ve used GitLab is that I love the fact that, being open source, many communities are adopting this infrastructure, like Debian and GNOME. That’s why I’ve used it as a bug tracker, for merge requests and to run CI jobs. This means that it should be easy to migrate to GNOME’s GitLab, shouldn’t it? There are unmaintained projects (e.g. banshee, which I couldn’t continue maintaining due to changes in life priorities...) already hosted there, so maybe it’s not much to ask if I could host a maintained one? It's probably the first Gtk-based wallet out there.

And just in case I wasn't clear:
  • Please don’t ask me to add support for your favourite %coin% or <token>.
  • If you want to contribute, don’t ask me what to work on; just think of a personal itch you want to scratch and discuss it with me by filing a GitLab issue. If you’re a C# developer, I wrote a quick F# tutorial for you.
  • Thanks for reading up until here! It’s my pleasure to write about this project.

I'm excited about the world of private-key management. I think we can do much better than what we have today: most people think of hardware wallets as unhackable or as cold storage, but most of them are used via USB or Bluetooth! Which means they are not actually cold storage, so software wallets with offline support (also called air-gapped) are more secure! I think that eventually these tools will even merge with other ubiquitous tools with which we’re more familiar today: password managers!

You can follow the project on twitter (yes I promise I will start using this platform to publish updates).

PS: If you're still not convinced about these technologies, or if you didn't understand that PoW video I posted earlier, I recommend you go back to basics by watching this other video, produced by a mathematician and educator, which explains it really well.

January 03, 2020

New essay: Trying to convince application developers to write API documentation

I’ve written a new short essay: Trying to convince application developers to write API documentation

I’ve created the Short essays page on my website. I plan to write more essays in the future, as short articles that can be read independently. Around the theme of programming best-practices. I’ll inform you on my blog when I publish a new essay.

Note, it’s unfortunately not written in ConTeXt (see this previous blog post), as I haven’t found a text editor for ConTeXt that just works and is easy to install, with all the features I’m accustomed to when I write a LaTeX document. So I fell back to using LaTeX.

January 02, 2020

Celebrating GNOME Newcomers’ contributions

A few weeks ago, I sat down to solve some issues related to the GNOME Engagement team. While going through the list, I found this issue created by Umang Jain, which proposes celebrating the contributions made by GNOME Newcomers. It was opened in late 2017 and a lot of discussion happened during this period. So, I decided to take on this issue and solve it programmatically.

Problem

There is no doubt that newcomers work hard to make their first contribution to a project they do not know about. So, it’s really important to recognize and celebrate their contributions when they make one.

With GNOME being a large project, there is a need for an automated system which recognizes the contributions made by newcomers and helps the GNOME Engagement team to seamlessly identify them.

The following points, which we need to solve, were listed on the issue, but I will only consider solving the relevant ones.

  • Come up with an easy way for maintainers to indicate when a newcomer has made their first contribution
  • Create/decide twitter account and a person responsible for handling that
  • Create a way for the Engagement team to broadcast these achievements regularly on social media (e.g. monthly shout-out?)
  • Announce the new plan to key stakeholders (maintainers), and the larger GNOME community

Approach

Many GNOME people proposed their views and workarounds to tackle this problem. Taking the best cues from each suggestion, I decided to use the GitLab API. The GitLab API has all the features which can help us take on this problem effectively.

Using the GitLab API, a list of all the users (with their first ten contributions) present on the GNOME GitLab instance is fetched. Along with this, a list of projects is also fetched. The list of users is traversed and the users are divided into newcomers and regular contributors. This is achieved by checking when each user first contributed to a GNOME project. If the contribution was made in the last 15 days, the contributor is categorized as a newcomer. After the newcomers are identified, they are filtered based on the type of contribution made. Currently, the notable contributions are those related to merge requests and issues.
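A stripped-down sketch of that traversal, using the standard GitLab REST API endpoints /users and /users/:id/events with plain Python requests; the instance URL is assumed, and the pagination, authentication and error handling of the real tool are omitted:

from datetime import datetime, timedelta
import requests

GITLAB = "https://gitlab.gnome.org/api/v4"  # assumed instance URL
CUTOFF = datetime.utcnow() - timedelta(days=15)

newcomers = []
for user in requests.get(f"{GITLAB}/users", params={"per_page": 100}).json():
    # The oldest events come first when sorted ascending; the first event
    # tells us when this user started contributing.
    events = requests.get(f"{GITLAB}/users/{user['id']}/events",
                          params={"sort": "asc", "per_page": 10}).json()
    if not events:
        continue
    first = datetime.strptime(events[0]["created_at"][:19], "%Y-%m-%dT%H:%M:%S")
    notable = [e for e in events
               if e.get("target_type") in ("MergeRequest", "Issue")]
    if first > CUTOFF and notable:
        newcomers.append({"user": user["username"], "events": notable})

print(f"found {len(newcomers)} newcomers")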

After going through the above procedure, a detailed report is created as a JSON file. This JSON file can be found here.

Scheduling scan

The above process is scheduled to run once a day using GitLab CI. It takes about 5 hours to complete. Once the scan is completed, the result of the whole process is pushed back to the project repository for future use.

Resources

You can find the project here. You can also open issues and merge requests to make the project better.

Lemme know if you have any doubt, appreciation or anything else that you would like to communicate to me. You can tweet me @ravgeetdhillon. I reply to all the questions as quickly as possible. 😄 And if you liked this post, please share it with your twitter community as well.

January 01, 2020

Introducing Bonsai

TL;DR: Pair your Linux devices, developer APIs to share files, create object graphs with partial sync between devices, transactions, secondary indexes, rebasing, and more built upon GVariant and LMDB. Tooling to build cloudless multi-device services.

A picture of a Bonsai tree and a gnome

I’ve been spending a great deal of time thinking about what types of products I’d like to see in GNOME and what is missing to make that happen.

One observation is that I want access to my files and application data on all my computing devices, but I don’t want to store that data on other people’s computers. I have computers, they have internet access; I shouldn’t have to use a multi-tenancy cloud when I’m running as much Free Software as I do. But if this is going to be competitive, it needs to be easier than the alternatives.

But to build this I need a few fundamental layers to build applications atop. I’ll need access to files using all the GIO file APIs we love (GFile, GFileEnumerator, GIOStream, etc). I’ll also need the ability to read and write application data in a way that can be shared between devices which may not always be connected to my home Wi-Fi. In particular, we need to give developers great tools to make applications that natively support device synchronization.

What I’ve built to experiment with all of this is Bonsai. It is very much an experiment at this stage, but it is getting interesting enough to collaborate with others who would like to join me.

Bonsai consists of a daemon that you run on your “mostly connected” computer, although that could easily be a Raspberry Pi-class computer in your home. That computer hosts the “upstream” storage space for files and application content.

Other devices like laptops, phones, or IoT devices can be paired with that primary device. They communicate over TLS connections with pinned self-signed certificates, with point-to-point D-Bus serialization on top. The D-Bus serialization makes it convenient to use gdbus-codegen to generate proxies and services.

One service available to devices is the storage service. It can be consumed from libbonsai-storage to allow applications to browse, create, move, modify and stream file content.

Applications are much better when they can communicate between devices. So a Data-Access-Object library, aptly named libbonsai-dao, provides serializable object storage built upon GVariant and LMDB. It supports primary and secondary indexes, queries, cursors, transactions, and incremental sync between devices. It has the ability to rebase local changes atop changes pulled from the primary Bonsai device.

That last bit is neat because it means that if an application is running on two devices which both create new content, they don’t clobber each other’s history.
The primary issue here is dealing with merge conflicts, but libbonsai-dao provides some design guidance for data objects to do the right thing.
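To make the idea of primary and secondary indexes updated inside a single transaction more concrete, here is a small Python sketch using LMDB directly. It is emphatically not Bonsai's API (libbonsai-dao is built on GVariant and LMDB in C, and this toy uses JSON instead of GVariant); it only illustrates the underlying pattern.

```python
# Toy illustration of transactional primary + secondary indexes on LMDB.
# Not Bonsai's API; names and layout are made up for this sketch.
import json
import lmdb

env = lmdb.open("objects.lmdb", max_dbs=2, map_size=64 * 1024 * 1024)
primary = env.open_db(b"objects")                     # id -> serialized object
by_title = env.open_db(b"idx-title", dupsort=True)    # title -> id (secondary index)


def put_object(obj_id: bytes, obj: dict):
    data = json.dumps(obj).encode()        # the real design stores GVariants
    with env.begin(write=True) as txn:     # both writes commit atomically
        txn.put(obj_id, data, db=primary)
        txn.put(obj["title"].encode(), obj_id, db=by_title)


def find_by_title(title: str):
    with env.begin() as txn:
        cur = txn.cursor(db=by_title)
        if cur.set_key(title.encode()):
            for obj_id in cur.iternext_dup():          # all ids under this title
                yield json.loads(txn.get(obj_id, db=primary))


put_object(b"note-1", {"title": "Groceries", "body": "milk, bread"})
print(list(find_by_title("Groceries")))
```

The interesting part for a system like Bonsai is that the object table and its indexes can never get out of sync, because they are only ever touched inside the same write transaction.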

Bonsai could also serve as a base to build interesting services like backup, VPN, media sharing and casting, news, notes, calendars, contacts, and more. But honestly, it can only do that if people are actually interested in something like this. If so, let me know and see if you can lend your time or ideas for what you’d want this to become.

December 31, 2019

Wrapping Up 2019

It’s the last night of the year and the decade, and here is the mandatory End of Year’s post.

Family

This year was without a doubt the most difficult in my (still young) life. Things were shaping up to be a great year at the beginning: there were big plans for the Hack project I was working on at Endless with my colleagues, and my wife Helena was going to start an illustration course after our son finally started at the kindergarten (in Germany it’s common for kids to enter it when they’re already 2 years old…), besides other personal projects we were preparing.
However, during a visit to the dentist to check something that was bothering her, my wife ended up being diagnosed with mouth cancer.
As would happen to anyone, the news really shook us and made us go through all the common wonderings of why such a thing would happen to someone who has no family history of such illnesses, doesn’t drink, doesn’t smoke, etc.

Still, this is a positive post! Everything moved very quickly and neatly on the doctors’ side after the diagnosis. The tests and surgery happened as fast as they could possibly be done, and since it was apparently diagnosed at a very early stage, Helena “only” needed two surgeries and no aggressive treatments.
In the end, we are very thankful to all the doctors, nurses, and staff. It couldn’t have been better, from the great quality of the services, to the friendliness of the people involved. A big and honest thank you again to the great people who dealt with us at Berlin’s Unfallkrankenhaus.

We are extremely lucky to have universal healthcare coverage. Besides the normal (and public) insurance we have, we only had to pay some small extra costs that are negligible. I cannot imagine having to worry about the illness and also about the costs of treatment.

Being away from our family when this happened also made it more difficult, as we had to juggle the hospital trips with taking care of our son (who was not yet in the kindergarten when this started) and my work. On the work side, I need to thank Cosimo and Endless, who made it clear I’d have all the time I needed to organize things on my side; that was extremely important. And we also need to thank our neighbor Ilka, who took care of our son a few times while we were both away. Of course, many more people offered their support, and we had Helena’s mother over for a couple of weeks around the second surgery. All the support and nice words were important, and we’re grateful to have such great people in our lives.

One last thing to end this subject. I really need to emphasize Helena’s attitude towards her situation. We have been together for a long time, and I knew she was a positive person, but her positive attitude in the face of such a serious case was mind-blowing even for the doctors (one even said “Do you know what this means? …. Yes? Okay, this is weird, I had never had anyone behave like this after the news…”). I feel like the drama was all mine and she had to comfort me, even though she was the one who had to endure the initial uncertainty, the surgeries, the recovery…
After so much time together and so many shared experiences, this problem made me admire even more the person I love. I wish our kids inherit her attitude to life and not my traditional and very Portuguese fatalism 🙂

Work

On the work side things also took a turn. At about the same time Helena was having her second surgery, my work at Endless was about to change too, and I joined Kinvolk for a temporary position, as explained in this post, since I wasn’t sure about mixing friendship and work.
Well, it turns out that I liked the work, the people, and the possibilities at Kinvolk so much that (in November) I accepted the proposal to make it permanent!

Technically, coming from the Linux desktop world, it felt “foreign” to take over a Go + React project like Nebraska, but I already feel very comfortable with this “ecosystem”.

I am genuinely excited about what is coming from Kinvolk, and I will keep working on the company’s existing and new products. We are also looking for great people to help deliver great & 100% Open Source solutions, so check out our open positions.

Community

As for GNOME/community work: it’s difficult to find the time and energy to do anything tech-related outside of work, so I cannot realistically expect to be an active contributor in my spare time.
Still, I keep my eye on and my interest in the GNOME and Flatpak communities. Last year (2018) I “flatpaked” two old games (noiz2sa and rRootage) and added them to Flathub, and now I am in the process of getting Robocode into Flathub (more on that soon).

That’s it!

And that’s all for this year’s wrap-up! Despite a very difficult situation, we end the decade feeling very happy and fortunate. I wish everybody a great new decade! Love.

Why we need a free desktop

This post was written by Neil McGovern, Executive Director of the GNOME Foundation.

A photo of Neil McGovern, Executive Director of the GNOME Foundation in August 2019. He is wearing a suit. Behind him is a sign that says "GUADEC" and "Private Internet Access."
Photo courtesy of Richard Brown. Licensed CC-BY-NC.

I am frequently asked if there’s any point in the desktop anymore. With the rise of cloud services, it’s easy to wonder whether there is a need. I believe that a free software desktop system is more important than ever.

GNOME creates an entire desktop environment that is beautifully designed and simple to use. We do this to ensure user freedoms. It is this empowerment of end users – acknowledging their right to control their own computing – that drives me forward.

The intention behind making free software is important, but irrelevant if the reality is that users cannot make use of those freedoms. When fewer than 0.5% of the world’s population can code, the chance of someone being able to modify their own desktop, or pay someone to do so, is vanishingly small. It is our responsibility, as technologists, a community, and a foundation, to put the user first. Software must be built for everyone, and that’s what we are doing.

It is not enough for software to be free of charge, or even available under an open source license, if your data is being sent to third parties in attempts at monetization. It’s not enough if it is still necessary to have a fast, expensive internet connection to get the latest upgrades or access to files. It’s not enough if the accessibility features you need are underdeveloped or unavailable. We see these situations as unacceptable and are working to change them.

Over the last year, we’ve grown from two full-time and one part-time employees to seven. Two more will be joining us shortly. This is to provide the support to enable the GNOME desktop to be what we need it to be. We will be launching a renewed focus on accessibility. We’re introducing our Coding Education Challenge, to make it easier for people to contribute to GNOME and free and open source software, regardless of background. We will do all of this while driving innovation and continuing to update our software based on solid user testing.

To do this, we need your help. We rely on individual donors to help support us. Help us bring user freedom to millions more people by joining Friends of GNOME today.

We recommend a donation of $25/month ($5/month for students). These donations support our staff, programs, and the ongoing development of the GNOME desktop environment and other software in the GNOME ecosystem.

With your help and support we’ll continue to develop world class free software and bring user freedom into the hands of every user.

Flathub 2019 roundup

One could say that the Flathub team works silently behind the scenes most of the time, and it wouldn't be far from the truth. Unless changes are substantial, they are rarely announced anywhere other than under a pull request or issue on GitHub. Let's change that a bit and try to summarize what has been going on with Flathub over the last year.

Beta branch and test builds

2019 started off strong. In February, several improvements landed, both to the general workflow and to how things work under the hood. Maintainers gained the ability to sign in to Buildbot to manage builds and start new ones without having to push new commits. A delay was introduced between finishing a build and publishing it to the stable repository, giving maintainers the possibility to test the new build locally and either publish it earlier or scrap it altogether. The initial delay was 24 hours but, as that proved too confusing, it was shortened to 3 hours.

Perhaps most importantly, the changes made it possible to publish test builds of pull requests and completely new applications. Additionally, Flathub gained support for publishing applications to a separate beta remote.

Alex wrote more about the changes on his blog.

New reviewer

The review team so far consisted of me, Patrick Griffis and Nick Richards. As each of us has different time commitments, every now and then we were falling behind on handling new application submissions.

We have invited Bilal Elmoussaoui to join us. As he was already reviewing new pull requests and improving GNOME application manifests, letting him merge changes was only a formality. Belated congratulations!

external-data-checker

In November, we officially integrated external-data-checker, a tool that detects missing sources (not limited to extra-data!) and submits pull requests with fixes. It was described in detail by one of its authors, Will Thompson, here. You can also read my announcement.

Miscellaneous

Buildbot started validating appdata files with appstream-glib.

As Flatpak and flat-manager gained support for end-of-life-rebase, the setting was also exposed to maintainers via flathub.json. It's especially useful for applications that changed their ID, as Flatpak can “seamlessly” switch users to the new app. Currently it's not supported by GNOME Software or KDE Discover, and is only visible in the terminal. This is what it looks like:

asciicast

We also automated merging new applications, which until now had been a completely manual process. Similarly to external-data-checker, it uses GitHub Actions under the hood and can be inspected in Flathub's actions repository.

In more boring infrastructure news, flat-manager-client was switched to aiohttp, which improved upload reliability for bigger applications. I also deployed basic infrastructure monitoring so we are less surprised when a particular builder runs out of disk space or goes offline.

Numbers!

Excluding SDK extensions and themes, 273 applications were submitted and added to Flathub. This brings the total number to 744!

While we don't expose download statistics in a fancy way (yet!), anyone who likes to play with JSON files can find them here. The 10 most downloaded apps are:

  1. Telegram Desktop (751557 downloads)
  2. GIMP (667460 downloads)
  3. LibreOffice (643486 downloads)
  4. VLC (459069 downloads)
  5. Spotify (414344 downloads)
  6. Skype (393855 downloads)
  7. Steam (341706 downloads)
  8. Visual Studio Code (309148 downloads)
  9. M.A.R.S. (257233 downloads)
  10. Klickety (240809 downloads)

Note that the numbers above include updates from existing installations so they may look overly enthusiastic. These are not statistics of new installs.

Honorable mentions: other KDE apps (KGeography, KNavalBattle, KDiamond), Audacity, Inkscape, Ri-li and The Battle for Wesnoth.

Some plans for 2020

As GitHub turned out to be a poor place for non-technical discussions, we want to adopt Discourse for community forums. It will also be used for official announcements.

We are in the process of migrating the main repository server to new hardware, generously donated by Mythic Beasts. Apart from improved performance thanks to SSDs for buffering writes and caching reads, we will also have enough space to keep us going for many years.

Happy New Year!

December 30, 2019

Designing an Icon for Your App

You’ve designed your app’s interface, and found the perfect name for it. But of course a great app also needs a great icon before you can release it to the world.

After the name, the app icon is the most important part of an app’s brand. The icon can help explain at a glance what the app does, and serves as an entry point to the rest of the experience. A high quality icon can make people want to use an app more, because it’s a stand-in for the quality of the entire app.

Think of the app icon like an album cover for your app. Yes, technically the music is the same even if you have a terrible cover, but a great cover can capture the spirit of the album and elevate the quality of the thing as a whole.

Metaphors

The first thing you need is a metaphor, i.e. some kind of physical object, symbol, or other visual artifact that symbolizes your application.

Finding a good metaphor is a fuzzy and sometimes difficult process, as it’s often hard to find a physical object many people will recognize as related to the domain of your app. There are no hard and fast rules for this, but ideally your icon metaphor should fall into one of these categories:

  • Physical objects directly related to what the app does (e.g. a speaker for Music)
  • Physical objects vaguely related to the app’s domain or an older analog version of it (e.g. a cassette tape for Podcasts)
  • Symbols related to the domain (e.g. the “play” triangle for Videos)
  • A simplified/stylized version of the app’s user interface (e.g. Peek)

The Music, Podcasts, Videos, and Peek icons

There are also anti-patterns for metaphors which should be avoided, if possible:

  • Heavily stylized symbols or logos (e.g. Fondo)
  • Completely random objects or symbols (e.g. Dino)
  • Mascots (e.g. GIMP)

These kinds of metaphors can work, but they make it harder to see at a glance what the app does, and don’t fit in as well with the rest of the system.

Let’s try an example: Remember the Reading List app we designed in a previous tutorial? Let’s make an icon for that!

My process for brainstorming metaphors is quite similar to the one I use to brainstorm names: I come up with a few ideas for physical objects, put them in a thesaurus to find more related ones, and repeat that until I have a list with at least a few viable candidates.

Let’s start with related physical objects:

  • Reading List
  • List
  • Book
  • Bookshelf
  • Article
  • Newspaper
  • Bookmark

How about related non-objects? Maybe we can find some more interesting objects that way:

  • Reading, the activity: Couch, reading light, tea/coffee, glasses
  • Later (as in, “read later”): Clock, timer
  • Collecting things: Folder, clipboard

Now that we have a few options, let’s see which ones are viable. Ideally, the metaphor you choose should have these attributes:

  • Somewhat specific to the app’s domain (e.g. a book is probably too generic in our case)
  • Recognizable at small sizes
  • Can be drawn in a simple, geometric style (this can save you a lot of work later on)

In this case, the most viable options are probably

  • Stack of books
  • Bookshelf
  • Bookmark
  • One of the above + a clock

Sketches

Now that we have some metaphors, let’s try to sketch them to see if they make for good icons. I usually use pencil and paper for this, but you can also use a whiteboard, digital drawing tablet, or whatever else works for you to quickly visualize some concepts.

There are also official sketching templates with the base icon shapes available for download.

While sketching, it’s good to think about the overall shape your icon will have. If it makes sense for your metaphor, try to make the icon not just a simple square or circle, but something more unique and interesting. If it doesn’t make sense in your case, don’t force it, though; there are other ways to make the icon visually unique and interesting, such as color and structure.

In this case, it looks like there are a number of viable concepts among our sketches, though nothing jumps out as the obvious best option. I kind of like the bookshelf, so let’s try going forward with that one.

Start from a Template

We now have a concept we like, so we can move to vector. This is where we can start using the shiny new icon design tools!

The first step is to install App Icon Preview from Flathub. We’ll also need a vector editor that works well with SVG, such as Inkscape.

Open App Icon Preview, and hit the “New App Icon” button on the welcome screen. We’re asked for the Reverse Domain Name Notation name of the app (e.g. org.mozilla.Firefox), and where to store the icon project file.

In most cases you’ll want to keep this file in your app’s git repository. Think of it as your icon’s source file, which the final icon assets are later exported from.

After that, the icon will open in preview mode in App Icon Preview. Now we open the same file in a vector drawing app, and edit it from there. Every time we save the source file, the preview will automatically update.

Now we have our icon source file open in both App Icon Preview and Inkscape. Icon Preview shows just the icon grid:

In Inkscape, open the Layers panel (Ctrl + Shift + L) and check out the layer structure. The icons layer is where the actual icon goes. The grid and baseplate layers contain the icon grid and the canvas respectively.

Behind everything else is the template layer, which doesn’t contain anything visible and is only needed so App Icon Preview can get the canvas size for preview and export. Don’t change, hide, rename, or delete this layer, because the icon might not show up in App Icon Preview anymore.

When previewing the icon in App Icon Preview you’ll want to hide the grid and baseplate layers (using the little eye icon next to the layers).

Make sure you have the GNOME HIG Colors palette in Inkscape. Inkscape 1.0 Beta has it by default, otherwise you can download it from the HIG App Icons repository and put it in ~/.var/app/org.inkscape.Inkscape/config/inkscape/palettes for Flatpak Inkscape or ~/.config/inkscape/palettes if it’s on the host. There’s also a color palette app, which you can get on Flathub.

Inkscape Tips

Once you’re familiar with the template, you can start drawing your icon idea as vector. If you’re using Inkscape and aren’t very familiar with the app yet, here’s a quick overview of the things you’ll likely need.

Toolbox (the toolbar on the left edge)

  • Selection/movement/scaling tool (S)
  • Rectangle tool (R)
  • Ellipse tool (E)

And if you’re doing something a little more advanced:

  • Bezier path drawing tool (B)
  • Path & node editor (N)
  • Gradient editor

Dialogs Sidebar (configuration dialogs docked to the right side)

  • Fill & Stroke (Ctrl + Shift + F)
  • Align & Distribute (Ctrl + Shift + A)
  • Layers (Ctrl + Shift + L)

Snap Controls (the toolbar on the right edge)

Inkscape has very fine-grained snapping controls, where you can configure what should be snapped to when you move items on the canvas (e.g. path nodes, object center, path intersections). It’s a bit fiddly, but very useful for making sure things are aligned to the grid. The icon tooltips are your friends :)

Of course, teaching Inkscape is a bit out of scope for this guide. If you’re just getting started with it, I recommend doing a few beginner tutorials first to familiarize yourself with the basic workflows (especially around the tools listed here).

The GNOME Icon Style

Traditionally, GNOME app icons were very complex, with lots of photorealistic detail and many different sizes which had to be drawn separately. This changed when we revamped the style in 2018, with the explicit goal of making it easier to produce, and more approachable for third party icon designers.

The new style is very geometric, so in many cases you can draw an entire icon with just basic shapes.

These icons consist of rectangles (some with rounded corners) and circles exclusively

Perspective

One important attribute of the style is the abstract perspective. Even though the style is simple and geometric, it’s not “flat”: It makes use of material, depth, and perspective, but in a way that is optimized for easy production as vector.

The perspective works by “folding” horizontal and vertical layers into one dimension, so you can see the object orthogonally from both the top and the front.

This results in a kind of “chin” at the bottom of the object, which is shaded darker than the top surface, since light comes evenly from the top/back.

The perspective is achieved by folding the top and front views together

In practice, this usually doesn’t have a huge impact, since it’s also suggested to make objects not too tall, when possible. A lot of icons are just a simple 2D shape with a small chin at the bottom.

That said, it can look very weird when you get the perspective wrong, e.g. by folding the layers from the top/back instead of the front, so it’s important to keep this in mind.

Material & Lighting

Icons can make use of skeuomorphic materials (e.g. wood, metal, or glass) if it’s needed for the physical metaphor, but outside of those special cases it’s recommended to keep things simple.

Examples of icons with realistic materials

Straight surfaces have flat colors (instead of e.g. slight vertical gradients), but curved surfaces can/should have gradients. The corners on the chin on rounded base shapes should have a highlight gradient.

The highlight on the corners of the chin is done with a horizontal gradient.

Shadows inside the icon should be avoided if possible, but can be used if necessary (e.g. for contrast reasons). Do not use drop shadows that affect the app outline though, because GTK renders such a shadow automatically.

Icon Grid & Standard Shapes

In order to make sure icons are somewhat similar in size, alignment, etc. we have a grid system.

The canvas is 128x128px (for legacy reasons), but you’re designing for 64×64, while also taking 32×32 into account where possible. In general, it’s good to make sure you’re putting as many lines as possible on grid lines, so they’re sharp even at 32. Testing in App Icon Preview helps a lot with this.

Each of the grid squares is 8×8 pixels. In order to be pixel-perfect at 64 and 32, orthogonal lines/edges should be on these grid lines (or fractions of them).

The icon grid also has some standard shapes for wide, tall, square, and circular icons, which can be used as a basis for the structure of the icon if it makes sense for the metaphor (e.g. if the object is more or less square, use the square standard shape).

Protip: Great Artists Steal Reuse

There are lots of apps with icons in the GNOME style out there, and they’re all free software. If there’s something you like about another app’s icon you can get the source from GNOME Gitlab or Github, look at how a certain object is drawn, or just take (parts of) other icons and adapt them to your needs.

This is especially useful for common objects needed in many icons, e.g. pencils, books, or screens. The icon template in App Icon Preview comes with a few of these common objects on the canvas, which can be a good starting point for new icons.

Draw, Preview, Repeat!

Armed with this knowledge about the style and tooling, we can finally jump in and start drawing! In this case I re-did the sketch at a slightly larger size to get a better feel for it:

Book shelf sketch

Now let’s try vectorizing it. Since the overall shape is a tall rectangle, we can start with the tall rectangle standard shape. If we change the color to brown, and make the chin at the bottom thicker (by resizing the top layer vertically), we have the basic frame for the shelf.

After that we can add the actual shelves, by simply adding two slightly darker brown rectangles (the back of the shelf), and two wide rectangles at the top of these (the bottom of the horizontal shelf).

Changing the color of the chin is a bit tricky, because it has a horizontal gradient. It requires selecting the bottom rectangle with the gradient tool, clicking each gradient stop manually and changing it to brown by clicking one of the colors in the color palette at the bottom edge of the window.

If you use e.g. Brown 3 from the palette for the top surface, you can use the Brown 4 or 5 for the chin, and Brown 2 or 3 for the highlights in the corners.

Let’s see what this looks like in Icon Preview now:

Getting there, right? Now let’s add some books. Lucky for us, book spines can also be drawn as rectangles, so this shouldn’t be too hard. We don’t want too much detail, because we’re designing for 64px first and foremost. Something like 10 books per row should work.

100% rectangles :)

If we want to get fancy we can also round the top of the spine on some of the books by adding an ellipse of the same color, but it’s not really needed at this size.

Finished full-color icon in App Icon Preview

Looking good! I think we’re done with the full-color icon.

If at this point in the process you feel like the concept or metaphor isn’t working out (for example because it doesn’t look interesting enough, or because it’s too complicated to work at small sizes), you can always go back a few steps and try vectorizing a different one of your sketches. The nice thing about the simplicity of this style is that you can do this without losing weeks of work, which makes iterating on concepts much more feasible.

Symbolic

Now that the full-color icon is done, we can start thinking about the symbolic icon for our app. Ideally this is a simplified, one-color version of the app icon, designed for a 16×16 px canvas. It’s used in notifications and some other places in the Shell where a colorful icon would not be appropriate.

I won’t go into too much detail on this here since drawing good symbolics is a big topic, and this post is too long already. I might expand on this in a future post, but for now here are a few quick tips:

  • Alignment to the pixel grid is very important here if you don’t want the icon to end up a blurry mess
  • Stick to the original metaphor if at all possible, go for something else if not
  • Test in App Icon Preview to make sure the icon is actually recognizable at 16px
  • If possible leave the outermost 1px empty on all sides
  • Most strokes should be 2px, but they can be 1px in some cases
  • Don’t overthink it for the first version. This icon is a secondary thing, and it’s relatively little effort to fix/redo it later :)

Our bookshelf example looks tricky at first glance, because we have all these tiny books, and only 16 pixels to work with. However, if we simplify it enough it’s not too hard to get something decent. We can just use two tall and two wide rectangles to draw the shelf, and three smaller rectangles as books on each shelf:

This one is literally just rectangles :)

And that’s it! We have a real app icon now, with everything that entails. If you want to have a look at the source for the icon we made in this tutorial, you can download the SVG here. It includes the final icon and some of the intermediate steps.

Color and Symbolic in App Icon Preview

Export

Now that we’re happy with the icon, we can press the “Export” button in App Icon Preview and save the final icon assets. The app will automatically optimize the SVGs for size, and if you have nightly builds of your app, you also get an automatically generated nightly icon without any extra work!

Export Popover in App Icon Preview

Congratulations for making it all the way to the end! I hope you found this tutorial useful, and will go on to make great icons for your apps. If there’s anything you found unclear while following along, please let me know in the comments.

If you’re looking for more resources on the topic, check out the Icons and Artwork HIG page, the official guide on making GNOME App Icons, and the Icon Design Workflow wiki page.

Happy hacking :)

How about not stabbing ourselves in the leg with a rusty fork?

Corporations are funny things. Many things no reasonable person would do on their own are done every day in thousands of business conglomerates around the world. With pride even. Let us consider as an arbitrary example a corporation where every day is started by employees stabbing themselves in the leg with a rusty fork. This is (I hope) not actually done for real, but there could be a company out there where this is the daily routine.

If you think that such a thing could possibly never happen, congratulations on having never worked in a big corporation. Stick with that if you can!

When faced with this kind of pointless and harmful routine, one might suggest not doing it any more or replacing it with some other, more useful procedure. This does not succeed, of course, but that is not the point. The reasons you get back are the interesting thing, because they will tell you what kind of manager and coworkers you are dealing with. Here are some possible options, can you think of more?

The survivor fallacist

This is a multi-billion dollar company. If stabbing oneself in the leg was bad, as you seem to claim, we could not have succeeded.

The minimum energy spender

It would take too much work to get this changed. Just bite the bullet and do it every morning. You're better off this way.

The blame shifter

This is mandated by our head office, we can't do anything about this even if we wanted to.

The metric optimizer

Our next year's bonus metric will measure the number of leg stabbings reduced that year. We must get as many of them in this year as we possibly can.

The traditionalist

We have always done this. We must always do it.

The cornered animal

How dare you! Do you have any idea how much work it is to get pre-rusted forks? They are all made of stainless steel nowadays. Your derogatory insinuations are a slap in the face of all the people working to keep this system running!

The folklorist

This is a commonly accepted best practice in software companies, thus we should do it also.

The brainwashee

This is actually a great invention. Getting a nice jolt of adrenaline first thing in the morning really wakes you up and gives you focus for the entire day. Try it for a month or two! You'll see.

The control freak messiah

This procedure was put in place by the founder/CEO. You do not challenge his choices if you know what is good for you.

The team spiritist

If you don't stab yourself in the leg, you are setting a very bad example that demoralizes everybody else who does their part diligently.

And finally the (sadly) most common one

Our product is special.

December 29, 2019

Linux Application Summit 2019 – retrospective

I wanted to pen something before the year is gone about the recent Linux Application Summit 2019. This is the 3rd iteration of the conference and each iteration has moved the needle forward.

The thing that excites me going forward is what we can do when we work together across our various free and open source communities. LAS represents forming a partnership and building a new community around applications. By itself, the ‘desktop’ doesn’t mean much to the larger open source ecosystems, not because it isn’t important, but because open source has expanded at such a frenetic pace that these newer communities lack the organizational history of the foundational technologies our communities have built over the years, even though they use and maintain them every day.

Educating them would be too large a task; instead, we need to capitalize on their hunger for the technology, toolchains, and experience that we have built and possess. We can do that by presenting ourselves as the apps community, which presents no prejudice to outside communities. We own apps because we own the mindshare, through maturity, experience, and the communities that spring up around them.

From here, we can start representing apps not just through the main Linux App Summit, but through other venues. Create the Apps tracks at FOSDEM, Linux Foundation events, Plumbers etc.

In the coming weeks, I will be working with other conference organizers around the globe to see how we can create these tracks and have ourselves represented there.

LAS represented the successful creation of a meta-community, and from there we can build the influence we need to establish the norms we want for the desktop.

Looking forward to 2020!

Big thanks to the GNOME Foundation for their support of Linux Application Summit.

First milestone, GStreamer pipelines and range requests

This is the second blog post about my Outreachy internship at Fractal. The project I’m working on is the integration of a video player in Fractal.

The progress I’ve made

Like any communication app based on the Matrix protocol, Fractal is structured into rooms. When a user enters a room, they can see the messages that have been sent in that room. I’ll refer to those messages as the room history. During the first weeks of my internship, I’ve integrated a simple video player in the room history: when receiving a video file, the user can play, pause and stop the video right there.

The controls you can see in the picture, with the play/pause button, the time slider, the download button, etc., were already implemented for audio playback in Fractal. So basically, my task so far has been to get the video rendered above that box. It might seem simple, but it has been fun. I’ll share just a couple of things that I’ve learned in the process.

GStreamer Pipelines

A pipeline in GStreamer seems to be one of those concepts whose basic idea is pretty easy to grasp, but that can get as complicated as you want. As its name suggests, a pipeline is a system of connected pieces that manipulate the media in one way or another. Those connected pieces are called elements. The element where the media comes from is called the source element, and the one(s) where it’s rendered is/are called sink element(s). An example is shown in the drawing at https://bit.ly/2twW6Ht. As you can see there, every element itself again has a source and/or one or more sinks that connect the elements to each other. This phenomenon of finding the same concept at the level of elements and at the level of the pipeline is not uncommon. I’ll give two more examples.

The first example is about buffering. On the one hand, when data is pushed through the pipeline, an element gets access to the media step by step by receiving a pointer to a small buffer in memory from the preceding element (buffers at the level of elements). Before receiving that, the element cannot start working on that piece of media. On the other hand, one can add a buffer element to the pipeline. That element is responsible for letting bigger chunks of data get stored (buffering at the level of the pipeline). Until that’s done, the pipeline cannot start playback.

The second example concerns external and internal communication. The way a pipeline communicates internally is by sending events from one element to another. There are different kinds of events. Some of them are responsible for informing all pieces of the pipeline about an instruction that might come from outside the pipeline. An example is wanting to jump to a certain point of the video and play it from there, called a seek event. For that to happen, the application can send a seek event to the pipeline (an event at the level of the pipeline). When that happens, the seek event is put on all sink elements of the pipeline and from there sent upstream, element by element (events at the level of elements), until it reaches the source element, which then pulls the requested data and sends it through the pipeline. But events are just one example of communication. Of course, there are other means. To mention some more: messages the pipeline leaves on the pipeline bus for the application to listen to, state changes, and queries on elements or pads.

So I find the concept of pipelines quite interesting. But to practically get media processed the way I want, I’d have to set up a whole pipeline accordingly. Creating an adequate pipeline and communicating with it and/or its elements can get complicated. Luckily for me, the audio player in Fractal is implemented using GstPlayer, so that’s what I’ve also used for video. It’s an abstraction that sets up a simple pipeline for you when you create it. It also has a simple API to manipulate certain functionalities of the pipeline once created. And to go beyond those functionalities, you can still extract the underlying pipeline from a GstPlayer and manipulate it manually.
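As a rough illustration (in Python with PyGObject rather than Fractal's Rust, and with a made-up file path), this is roughly what using GstPlayer looks like: you get a ready-made pipeline, a simple play/seek API, and an escape hatch to the underlying pipeline.

```python
# Minimal GstPlayer sketch; the media URI is hypothetical.
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstPlayer", "1.0")
from gi.repository import Gst, GstPlayer, GLib

Gst.init(None)

player = GstPlayer.Player.new(None, None)          # default video renderer and dispatcher
player.set_uri("file:///tmp/example-video.webm")   # hypothetical local file

loop = GLib.MainLoop()
player.connect("end-of-stream", lambda p: loop.quit())

player.play()
player.seek(10 * Gst.SECOND)   # jump 10 s in; over HTTP this ends up as a range request

loop.run()

# If GstPlayer's API is not enough, the underlying pipeline is still reachable:
pipeline = player.get_pipeline()
print(pipeline.get_name())
```

That seek call is exactly where the next section picks up.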

Range requests

In the last section, I briefly mentioned seek events, i.e. events that request to play the video from a certain point. When a source element receives such an event, it tries to pull the requested piece of media. If it is communicating via HTTP, it does so by sending a range request, i.e. a request with a header field called Range that specifies which part of the media is requested, in bytes (see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Range). To make sure that range requests are supported, the responses are checked for a header entry called “accept-ranges”. Only if that entry exists and its value is “bytes” (the other option being “none”) is support for range requests guaranteed. Synapse, the standard Matrix server, does not include the accept-ranges entry in the headers of its responses. Therefore seek requests to media files on that server fail.
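Here is a quick way to see this behaviour for yourself (a generic sketch with a hypothetical URL, not Fractal code): check whether the server advertises Accept-Ranges: bytes, then request a byte range and look at the status code.

```python
# Probe a server for HTTP range support, then fetch a partial response.
import requests

url = "https://example.org/media/video.mp4"  # hypothetical media URL

head = requests.head(url, allow_redirects=True, timeout=30)
if head.headers.get("Accept-Ranges") == "bytes":
    # Ask for the first kilobyte only; a server that honours the Range
    # header answers with "206 Partial Content" and a Content-Range header.
    part = requests.get(url, headers={"Range": "bytes=0-1023"}, timeout=30)
    print(part.status_code)                   # 206 if ranges are supported
    print(part.headers.get("Content-Range"))
else:
    print("No 'Accept-Ranges: bytes' header; seeking over HTTP will fail here.")
```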

At some point, I thought I could solve that problem by activating progressive buffering in the pipeline and seeking only within the buffered data. But progressive buffering itself uses seeking, so when progressive buffering is activated, even playback fails. There might be other kinds of buffering that would do. But for now, our way around the problem is to download the video files and play them locally.

Chapter #Next: Kindle

Since the beginning of December I have been working for Amazon in Madrid, running the team responsible for the Kindle experience on PC and the web. This is a bit of a career shift for me. For a while I’ve been wondering what it would be like to deliver the user experience of a successful consumer product; I have always worked in the seat of the OS/platform provider, and more recently I learned the hardware end by working with OEMs as part of my last role at Red Hat. However, I have never been in the shoes of an ISV delivering an app for a vertical market. On top of that, the position was advertised for Madrid for an onsite team; while working from home has many advantages, running an entire team onsite is also a refreshing change for me. All in all it seemed like a cool opportunity where I could learn lots and try something different, so I made the jump.

https://cdn.vox-cdn.com/thumbor/z96qIpPgQsSx9mHfH0-D1IW20DU=/0x0:1920x1080/920x613/filters:focal(807x387:1113x693):format(webp)/cdn.vox-cdn.com/uploads/chorus_image/image/57280131/kindle_app_logo.0.jpg

By the way, my team is hiring software engineers in Madrid, Spain, so if this is an area you are interested in please DM me on twitter.

 

December 27, 2019

Wifi deauthentication attacks and home security

I live in a large apartment complex (it's literally a city block big), so I spend a disproportionate amount of time walking down corridors. Recently one of my neighbours installed a Ring wireless doorbell. By default these are motion activated (and the process for disabling motion detection is far from obvious), and if the owner subscribes to an appropriate plan these recordings are stored in the cloud. I'm not super enthusiastic about the idea of having my conversations recorded while I'm walking past someone's door, so I decided to look into the security of these devices.

One visit to Amazon later and I had a refurbished Ring Video Doorbell 2™ sitting on my desk. Tearing it down revealed it uses a TI SoC that's optimised for this sort of application, linked to a DSP that presumably does stuff like motion detection. The device spends most of its time in a sleep state where it generates no network activity, so on any wakeup it has to reassociate with the wireless network and start streaming data.

So we have a device that's silent and undetectable until it starts recording you, which isn't a great place to start from. But fortunately wifi has a few, uh, interesting design choices that mean we can still do something. The first is that even on an encrypted network, the packet headers are unencrypted and contain the address of the access point and whichever device is communicating. This means that it's possible to just dump whatever traffic is floating past and build up a collection of device addresses. Address ranges are allocated by the IEEE, so it's possible to map the addresses you see to manufacturers and get some idea of what's actually on the network[1] even if you can't see what they're actually transmitting. The second is that various management frames aren't encrypted, and so can be faked even if you don't have the network credentials.
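The passive half of this is easy to reproduce; a rough scapy sketch (assuming root privileges and a wireless interface already in monitor mode, here called wlan0mon) that just counts transmitter address prefixes looks something like this. It only covers the first design choice, collecting addresses from unencrypted headers.

```python
# Passive 802.11 monitoring sketch: tally transmitter OUIs seen on the air.
from collections import Counter

from scapy.all import Dot11, sniff  # requires scapy and a monitor-mode interface

seen = Counter()

def handle(pkt):
    if pkt.haslayer(Dot11) and pkt[Dot11].addr2:
        oui = pkt[Dot11].addr2[:8]   # first three octets, e.g. "xx:yy:zz"
        seen[oui] += 1

# Sniff for 60 seconds, then print which vendor prefixes were transmitting.
sniff(iface="wlan0mon", prn=handle, timeout=60)
for oui, count in seen.most_common():
    print(oui, count)   # look the OUI up in the IEEE registry to map it to a vendor
```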

The most interesting one here is the deauthentication frame that access points can use to tell clients that they're no longer welcome. These can be sent for a variety of reasons, including resource exhaustion or authentication failure. And, by default, they're entirely unprotected. Anyone can inject such a frame into your network and cause clients to believe they're no longer authorised to use the network, at which point they'll have to go through a new authentication cycle - and while they're doing that, they're not able to send any other packets.

So, the attack is to simply monitor the network for any devices that fall into the address range you want to target, and then immediately start shooting deauthentication frames at them once you see one. I hacked airodump-ng to ignore all clients that didn't look like a Ring, and then pasted in code from aireplay-ng to send deauthentication packets once it saw one. The problem here is that wifi cards can only be tuned to one frequency at a time, so unless you know the channel your potential target is on, you need to keep jumping between frequencies while looking for a target - and that means a target can potentially shoot off a notification while you're looking at other frequencies.

But even with that proviso, this seems to work reasonably reliably. I can hit the button on my Ring, see it show up in my hacked up code and see my phone receive no push notification. Even if it does get a notification, the doorbell is no longer accessible by the time I respond.

There's a couple of ways to avoid this attack. The first is to use 802.11w which protects management frames. A lot of hardware supports this, but it's generally disabled by default. The second is to just ignore deauthentication frames in the first place, which is a spec violation but also you're already building a device that exists to record strangers engaging in a range of legal activities so paying attention to social norms is clearly not a priority in any case.

Finally, none of this is even slightly new. A presentation from Def Con in 2016 covered this, demonstrating that Nest cameras could be blocked in the same way. The industry doesn't seem to have learned from this.

[1] The Ring Video Doorbell 2 just uses addresses from TI's range rather than anything Ring specific, unfortunately


December 24, 2019

2019-12-24 Tuesday.

  • Mail chew; more staff calls. Out for some shopping into town, back for lunch.
  • Posted our annual: Thank-you & summary of 2019 - to try to thank so many who have worked with us this year.
  • Out for a walk, back, babes watched Frozen with J.

December 23, 2019

End of the year Update: 2019 edition

It’s the end of December and it seems that yet another year has gone by, so I figured that I’d write an EOY update to summarize my main work at Igalia as part of our Chromium team, as my humble attempt to make up for the lack of posts in this blog during this year.

I did quite a few things this year, but for the purpose of this blog post I’ll focus on what I consider the most relevant ones: work on the Servicification and the Blink Onion Soup projects, the migration to the new Mojo APIs and the BrowserInterfaceBroker, as well as a summary of the conferences I attended, both as a regular attendee and as a speaker.

But enough of an introduction, let’s dive now into the gory details…

Servicification: migration to the Identity service

As explained in my previous post from January, I’ve started this year working on the Chromium Servicification (s13n) Project. More specifically, I joined my team mates in helping with the migration to the Identity service by updating consumers of several classes from the sign-in component to ensure they now use the new IdentityManager API instead of directly accessing those other lower level APIs.

This was important because at some point the Identity Service will run in a separate process, and a precondition for that to happen is that all access to sign-in related functionality goes through the IdentityManager, so that other processes can communicate with it directly via Mojo interfaces exposed by the Identity service.

I’ve already talked long enough in my previous post, so please take a look in there if you want to know more details on what that work was exactly about.

The Blink Onion Soup project

Interestingly enough, a bit after finishing up working on the Identity service, our team dived deep into helping with another Chromium project that shared at least one of the goals of the s13n project: to improve the health of Chromium’s massive codebase. The project is code-named Blink Onion Soup and its main goal is, as described in the original design document from 2015, to “simplify the codebase, empower developers to implement features that run faster, and remove hurdles for developers interfacing with the rest of the Chromium”. There’s also a nice slide deck from 2016’s BlinkOn 6 that explains the idea in a more visual way, if you’re interested.


“Layers”, by Robert Couse-Baker (CC BY 2.0)

In a nutshell, the main idea is to simplify the codebase by removing/reducing the several layers of indirection located between Chromium and Blink that were necessary back in the day, before Blink was forked out of WebKit, to support different embedders with their particular needs (e.g. Epiphany, Chromium, Safari…). Those layers made sense back then, but these days Blink’s only embedder is Chromium’s content module, which is the module that Chrome and other Chromium-based browsers embed to leverage Chromium’s implementation of the Web Platform, and also where the multi-process and sandboxing architecture is implemented.

And in order to implement the multi-process model, the content module is split in two main parts running in separate processes, which communicate among each other over IPC mechanisms: //content/browser, which represents the “browser process” that you embed in your application via the Content API, and //content/renderer, which represents the “renderer process” that internally runs the web engine’s logic, that is, Blink.

With this in mind, the initial version of the Blink Onion Soup project (aka “Onion Soup 1.0”) was born about 4 years ago, and the folks spearheading this proposal started working on a 3-way plan to implement their vision, which can be summarized as follows:

  1. Migrate usage of Chromium’s legacy IPC to the new IPC mechanism called Mojo.
  2. Move as much functionality as possible from //content/renderer down into Blink itself.
  3. Slim down Blink’s public APIs by removing classes/enums unused outside of Blink.

Three clear steps, but definitely not easy ones as you can imagine. First of all, if we were to remove levels of indirection between //content/renderer and Blink as well as to slim down Blink’s public APIs as much as possible, a precondition for that would be to allow direct communication between the browser process and Blink itself, right?

In other words, if you need your browser process to communicate with Blink for some specific purpose (e.g. reacting in a visual way to a Push Notification), it would certainly be sub-optimal to have something like this:

…and yet that is what would happen if we kept using Chromium’s legacy IPC which, unlike Mojo, doesn’t allow us to communicate with Blink directly from //content/browser, meaning that we’d need to go first through //content/renderer and then navigate through different layers to move between there and Blink itself.

In contrast, using Mojo would allow us to have Blink implement those remote services internally and then publicly declare the relevant Mojo interfaces so that other processes can interact with them without going through extra layers. Thus, doing that kind of migration would ultimately allow us to end up with something like this:

…which looks nicer indeed, since now it is possible to communicate directly with Blink, where the remote service would be implemented (either in its core or in a module). Besides, it would no longer be necessary to consume Blink’s public API from //content/renderer, nor the other way around, enabling us to remove some code.

However, we can’t simply ignore some stuff that lives in //content/renderer implementing part of the original logic so, before we can get to the lovely simplification shown above, we would likely need to move some logic from //content/renderer right into Blink, which is what the second bullet point of the list above is about. Unfortunately, this is not always possible but, whenever it is an option, the job here would be to figure out what of that logic in //content/renderer is really needed and then figure out how to move it into Blink, likely removing some code along the way.

This particular step is what we commonly call “Onion Soup’ing” //content/renderer/<feature> (not entirely sure “Onion Soup” is a verb in English, though…) and this is, for instance, how things looked before (left) and after (right) Onion Soup’ing a feature I worked on myself: Chromium’s implementation of the Push API:


Onion Soup’ing //content/renderer/push_messaging

Note how the whole design got quite simplified moving from the left to the right side? Well, that’s because some abstract classes declared in Blink’s public API and implemented in //content/renderer (e.g. WebPushProvider, WebPushMessagingClient) are no longer needed now that those implementations got moved into Blink (i.e. PushProvider and PushMessagingClient), meaning that we can now finally remove them.

Of course, there were also cases where we found some public APIs in Blink that were not used anywhere, as well as cases where they were only being used inside of Blink itself, perhaps because nobody noticed when that happened at some point in the past due to some other refactoring. In those cases the task was easier, as we would just remove them from the public API, if completely unused, or move them into Blink if still needed there, so that they are no longer exposed to a content module that no longer cares about that.

Now, trying to provide a high-level overview of what our team “Onion Soup’ed” this year, I think I can say with confidence that we migrated (or helped migrate) more than 10 different modules like the one I mentioned above, such as android/, appcache/, media/stream/, media/webrtc, push_messaging/ and webdatabase/, among others. You can see the full list with all the modules migrated during the lifetime of this project in the spreadsheet tracking the Onion Soup efforts.

In my particular case, I “Onion Soup’ed” the PushMessaging, WebDatabase and SurroundingText features, which was a fairly complete exercise as it involved working on all 3 bullet points: migrating to Mojo, moving logic from //content/renderer to Blink and removing unused classes from Blink’s public API.

And as for slimming down Blink’s public API, I can tell you that we helped get to a point where more than 125 classes/enums were removed from Blink’s public APIs, simplifying and reducing the Chromium codebase along the way, as you can check in this other spreadsheet that tracked that particular piece of work.

But we’re not done yet! While overall progress for the Onion Soup 1.0 project is around 90% right now, there are still a few more modules that require “Onion Soup’ing”, among which we’ll be tackling media/ (already WIP) and accessibility/ (starting in 2020), so there’s quite some more work to be done on that regard.

Also, there is a newer design document for the so-called Onion Soup 2.0 project that contains some tasks that we have already been working on for a while, such as “Finish Onion Soup 1.0”, “Slim down Blink public APIs”, “Switch Mojo to new syntax” and “Convert legacy IPC in //content to Mojo”, so we are definitely not done yet. Good news here, though: some of those tasks are already quite advanced, and in the particular case of the migration to the new Mojo syntax it’s nearly done by now, which is precisely what I’m talking about next…

Migration to the new Mojo APIs and the BrowserInterfaceBroker

Along with “Onion Soup’ing” some features, a big chunk of my time this year also went into this other task from the Onion Soup 2.0 project, where I was again lucky enough not to be alone, but accompanied by several of my teammates from Igalia’s Chromium team.

This was a massive task where we worked hard to migrate all of Chromium’s codebase to the new Mojo APIs that were introduced a few months back, with the idea of getting Blink updated first and then having everything else migrated by the end of the year.


Progress of migrations to the new Mojo syntax: June 1st – Dec 23rd, 2019

But first things first: you might be wondering what was wrong with the “old” Mojo APIs since, after all, Mojo is the new thing we were migrating to from Chromium’s legacy API, right?

Well, as it turns out, the previous APIs had a few problems. They caused some confusion by not providing the most intuitive type names (e.g. what is an InterfacePtrInfo anyway?), and they were quite error-prone since the old types were not as strict as the new ones in enforcing conditions that should never happen (e.g. trying to bind an already-bound endpoint shouldn’t be allowed). In the Mojo Bindings Conversion Cheatsheet you can find an exhaustive list of cases that needed to be considered, in case you want to know more details about this type of migration.

Now, as a consequence of this additional strictness, the task wouldn’t be as simple as a “search & replace” operation because, while moving from old to new code, it would often be necessary to fix situations where the old code only worked because it relied on some constraints not being checked. Add to that the fact that there were, literally, thousands of lines in the Chromium codebase using the old types, and you’ll see why this was a massive task to take on.

Fortunately, after a few months of hard work done by our Chromium team, we can proudly say that we have nearly finished this task, which involved more than 1100 patches landed upstream after combining the patches that migrated the types inside Blink (see bug 978694) with those that tackled the rest of the Chromium repository (see bug 955171).

And by “nearly finished” I mean an overall progress of 99.21% according to the Migration to new mojo types spreadsheet where we track this effort: Blink and //content have been fully migrated, and all the other directories, aggregated together, are at 98.64%. Not bad!

In this regard, I’ve also been sending a bi-weekly status report mail to the chromium-mojo and platform-architecture-dev mailing lists for a while (see the latest report here), so make sure to subscribe there if you’re interested, even though those reports might not last much longer!

Now, back with our feet on the ground, the main roadblock currently preventing us from reaching 100% is //components/arc, whose migration needs to be agreed upon with the folks maintaining a copy of Chromium’s ARC mojo files for Android and ChromeOS. This is currently under discussion (see the chromium-mojo ML and bug 1035484), so I’m hopeful it’s something we’ll be able to wrap up early next year.

Finally, and still related to these Mojo migrations, my colleague Shin and I took a “little detour” while working on them and focused for a while on the more specific task of migrating uses of Chromium’s InterfaceProvider to the new BrowserInterfaceBroker class. While this task was not as massive as the other migration, it was also very important because, besides fixing some problems inherent to the old InterfaceProvider API, it removed a blocker for the migration to the new Mojo types, since InterfaceProvider usually relied on the old types!


Architecture of the BrowserInterfaceBroker

Good news here as well, though: after the two of us worked on this task for a few weeks, we can proudly say that, today, we have finished all 132 migrations that were needed and are now in the process of doing some after-the-job cleanup that will remove even more code from the repository! \o/

Attendance to conferences

This year was particularly busy for me in terms of conferences, as I did travel to a few events both as an attendee and a speaker. So, here’s a summary about that as well:

As usual, I started the year by going to one of my favourite conferences, FOSDEM 2019 in Brussels. And even though I didn’t have any talk to present there, I did enjoy my visit like every year I go. Being able to meet so many people and attend such an impressive number of interesting talks over the weekend, while having some beers and chocolate, is always great!

Next stop was Toronto, Canada, where I attended BlinkOn 10 on April 9th & 10th. I was honoured to have a chance to present a summary of the contributions that Igalia made to the Chromium Open Source project in the 12 months before the event, which was a rewarding experience but also quite an intense one, because it was a lightning talk and I had to go through all the ~10 slides in a bit under 3 minutes! Slides are here and there is also a video of the talk, in case you want to check how crazy that was.

Took a bit of a rest from conferences over the summer and then attended, also as usual, the Web Engines Hackfest that we at Igalia have been organising every single year since 2009. Didn’t have a presentation this time, but still it was a blast to attend it once again as an Igalian and celebrate the hackfest’s 10th anniversary sharing knowledge and experiences with the people who attended this year’s edition.

Finally, I attended two conferences in the Bay Area in mid November: the first one was the Chrome Dev Summit 2019 in San Francisco on Nov 11-12, and the second one was BlinkOn 11 in Sunnyvale on Nov 14-15. It was my first time at the Chrome Dev Summit and I have to say I was fairly impressed by the event, how it was organised and the quality of the talks. It was also great for me, as a browser developer, to see first-hand what web developers are more (and less) excited about, what’s coming next… and to get to meet people I would never have had a chance to meet at other events.

As for BlinkOn 11, I presented a 30 min talk about our work on the Onion Soup project, the Mojo migrations and improving Chromium’s code health in general, along with my colleague Antonio Gomes. It was basically an “extended” version of this post where we went not only through the tasks I was personally involved with, but also through tasks that other members of our team worked on during this year, which include many other things! Feel free to check out the slides here, as well as the video of the talk.

Wrapping Up

As you might have guessed, 2019 has been a pretty exciting and busy year for me work-wise, but the most interesting bit in my opinion is that what I mentioned here was just the tip of the iceberg… many other things happened on the personal side too, starting with the fact that this was the year we consolidated our return to Spain after 6 years living abroad.

Also, getting back to work-related stuff, this year I was accepted back into Igalia’s Assembly after having re-joined this amazing company in September 2018, following a 6-year “gap” living and working in the UK. Besides being something I was very excited and happy about, it also brought some more responsibilities onto my plate, as is natural.

Last, I can’t finish this post without explicitly thanking all the people I got to interact with during this year, both at work and outside, who made my life easier and nicer on so many different levels. To all of you, cheers!

And to everyone else reading this… happy holidays and happy new year in advance!

2019-12-23 Monday.

  • Calls with staff, annual review write-ups, a few calls & more work in the car on the way to B&A's; arrived, worked through the afternoon, lots of great work done by the team through the year.

Starting from open (and FOSS)

As our society becomes increasingly dependent on computing, the importance of security has only risen. From cities hit by ransomware attacks, to companies doing cutting edge research that are the targets of industrial espionage, to individuals attacked because they have a desirable social media handle or are famous – security is vital to all of us.

When I first got into Linux and FOSS, I was struck by the variety of things its flexibility made possible, and I have strong memories of that. For example, in my first year of college I shared a dorm room with 3 other people, and all we had was a shared phone line we could use with a modem (yes, I’m old). A friend of a friend ended up setting up a PC Linux box as a NAT system, and the connection was certainly slow, but it worked. I think it ran Slackware. That left an impression on me. (Though the next year the school deployed Ethernet anyway.)

Fast forward 20+ years, and we have the rise of the cloud (and cheap routers and WiFi, of course). But something also changed about Linux (and operating systems in general) in that time, and that’s the topic of this post: “locked down” operating systems, of which the most notable here are iOS, Android and ChromeOS.

iOS in particular requires code signing – the operating system refuses to execute code not signed by Apple. And iOS devices can only run iOS of course.

ChromeOS is also a locked-down system by default: while it uses the Linux kernel, it comes out of the box set up such that the base operating system only runs the binary ChromeOS builds, which come entirely from Google. This is implemented with dm-verity. Android also uses the Linux kernel and has a similar setup, although the story of who owns what is more complicated; a bit more on this below.

Now, ChromeOS has a documented developer mode – and in fact they’ve made this process easier than it used to be (previously it could require toggling a hardware switch, which also reset the device if I remember correctly). Android has documented bootloader unlocking, although (again as I understand it) many popular phones come locked.

In contrast to these types of systems we have the “traditional” Linux distributions, the BSDs, etc. Most Linux distributions are strongly associated with a “package manager”, which makes it fast and easy to add software to your root filesystem.

The flip side of course, is it’s also fast and easy for malicious code to end up in your root filesystem (or home directory) if you’re running a vulnerable web browser or service, or you pull from untrusted sources, etc. Particularly if you aren’t diligent with upgrades.

Another way to look at this: the ChromeOS docs talk about “installing Linux”. On the face of it, this sounds silly because ChromeOS is built on a Linux kernel… but it’s not the flexible Linux that I first encountered in college. It’s not the flexible Linux that people use to create custom devices.

I think we need to incrementally move the “mainstream” distributions closer to this model, while preserving the fundamental open nature of the system. This will not be easy; in practice it is a difficult balance to strike, but we can do it.

Partition (containerize/virtualize)

The mainstream default needs to be containers and virtual machines. This is obviously well understood, but doing it in practice is really an enormous shift from how “traditional” default Debian/RHEL/Slackware/Arch installs work.

In most of the Fedora documentation, it’s extremely common to reference sudo yum install.

Getting out of the mindset of routinely mutating your root filesystem is hard. For people used to a “traditional” Linux system, partitioning into containers and VMs is hard. Changing systems management tools to work in this model is extremely hard. But we need to do it.

On the server side the rise of Kubernetes increasingly does mean that containerization is the default. For OpenShift 4 we created a derivative of Fedora CoreOS in Red Hat Enterprise Linux CoreOS – I like to describe it as a “Kubernetes-native OS” in concert with the machine-config-operator.

For other use cases, we’re doing our best to push the ecosystem in this direction with Fedora CoreOS (container oriented server but not Kubernetes native; e.g. can be used standalone) and other projects like the desktop-focused Fedora Silverblue. (On the topic of partitioning the desktop, QubesOS is also doing interesting, mostly complementary work)

One of the biggest shifts to make particularly for desktop systems like Silverblue is to live inside a “pet container” system like toolbox.

When I see documentation that says yum install foo – I now default to doing that inside my toolbox container – or sometimes on a remote Kubernetes pod. This works well for CLI applications.
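
As a minimal sketch of that flow, assuming the toolbox tool that ships with Silverblue and Fedora CoreOS, and with “foo” standing in for whatever package the documentation mentions:

# One-time: create a "pet" container based on the Fedora toolbox image
toolbox create

# Get a shell inside the container
toolbox enter

# Mutate the container, not the host root filesystem
sudo dnf install foo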

But remain open

What we’re not changing with Fedora CoreOS (or other projects) is a “default to open” model. We will not (by default), for example, require code executing on your device to be signed by us. Our source code and build systems are Free Software and will remain that way. We will continue to discuss and write patches in the open, and ensure that we’re continuing to build an operating system in open collaboration with our users.

Today for example, rpm-ostree supports easily replacing the kernel; you just rpm-ostree override replace /path/to/kernel.rpm. Also, the fact that it’s the same kernel package as “traditional” Fedora installs cannot be emphasized enough – it helps us sustain two different ways to consume the same OS content. We can’t just break non-containerized use cases overnight.

Further, while we continue to debate the role of package layering (rpm-ostree install) in Fedora CoreOS, one way to look at this is recasting RPMs as “operating system extensions”, much like Firefox extensions. If you want to rpm-ostree install fish (or e.g. PAM modules), you can do so.
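
To make that concrete, here is a rough sketch of both kinds of operation on an rpm-ostree based system (the kernel path is a placeholder, and this is an illustration rather than a recommendation):

# Layer a package on top of the base OS ("operating system extension")
rpm-ostree install fish

# Replace a base package, e.g. a test kernel build (path is a placeholder)
rpm-ostree override replace /path/to/kernel.rpm

# Changes land in a new deployment; inspect it, reboot into it, or roll back
rpm-ostree status
systemctl reboot
rpm-ostree rollback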

Extending the OS (and replacing parts for testing/development) are first class operations and will remain so; doing so works in a similar way to traditional package systems. We aren’t requiring other shells or PAM modules to containerize somehow, as that would be at odds with keeping the experience first class and avoiding “two ways to do it”.

Finally, the coreos-assembler project makes it easy to do fully custom builds. Our focus of course is on providing a pre-built system that’s useful to users, but our build process is pretty easy to replicate and will remain so.

Not tied in with proprietary cloud infrastructure

Another thing that needs to be stated here is we will continue to make an operating system that is not tied into proprietary cloud infrastructure. Currently in this area besides update rollout infrastructure we ship a counting service – the backing service is fully open, and it’s easy to turn off. In contrast of course, ChromeOS for example comes set up such that the operating system accounts are the same as Google cloud accounts.

Adding opt-in security

All of the above said, there are a lot of powerful benefits to the “locked down” operating system model. I’ve been thinking recently about how we can enable this type of thing while “staying true to our roots”.

One thing that’s probably an ingredient of this is the fs-verity work, which is also being driven by the ChromeOS/Android use case. They are hitting issues with the inflexibility of dm-verity; per these slides: “Intractable complexity when dealing with the Android partner ecosystem”. We can see a manifestation of this by looking at the new Android APEX files – basically, there’s a need for 3rd parties to distribute privileged code. Currently APEX packages are loopback-mounted ext4 images with dm-verity, which is quite ugly.

fs-verity would mesh much more nicely with OSTree (which has always operated purely at the filesystem level) and other tools. (Update: Since I drafted this blog post a while ago I did get around to experimenting with fs-verity).
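
For the curious, the userspace side is fairly small; a minimal sketch using the fsverity tool from fsverity-utils might look like the following, assuming a kernel with fs-verity support and an ext4 filesystem with the verity feature enabled (device and file paths are placeholders):

# One-time: enable the verity feature on the filesystem (also possible at mkfs time)
tune2fs -O verity /dev/sdXN

# Enable verification for a file; it becomes read-only and reads are checked against its Merkle tree
fsverity enable /path/to/file

# Print the file's verity digest, which a policy or signature could be checked against
fsverity measure /path/to/file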

I haven’t yet gotten around to writing a fedora-coreos-tracker issue for this, but I think a proposal would be something like built-in functionality that allows you to opt in to a model where, after the OS has booted and Ignition runs, no further privileged code could execute unless it is signed by a keychain including the OS vendor’s key or your keys. We’d ensure that the configuration in /etc was also part of a verified chain, since even if /usr is signed and verity-protected, malware could otherwise persist in a systemd unit in /etc. Some people would probably want an “emergency ssh” shell that bypassed this; others would not (perhaps the default would be that anyone who didn’t want “emergency ssh” could simply disable the sshd.service unit). And note that we’d have to either have e.g. .bashrc included in the signature chain, or more likely ignored by default.

For Silverblue, one thing I’ve been thinking about is ensuring that the user flow works well without sudo by default. If you want to become root, you need to type Ctrl-Alt-Del (like Windows NT) and that switches you to a separate VT. The reason is that compromise of a user account with sudo privileges is really the same as a root compromise: you can’t trust your terminal emulators or display (aside: QubesOS approaches this by running everything in VMs with labeled borders and avoiding doing much on the host at all by default). We need a default “safe key”, exactly like the single button on an iPhone that always takes you to the home screen: it allows you to make changes, and applications can’t intercept or control that key.

To reiterate, we need to more strongly separate the privileged OS content from your applications (containers/Flatpaks) and development tools by default. But at the same time we should continue allowing the operating system to truly be owned by you should you so choose. It’s your hardware.

Most important: apply security updates by default

As alluded to above: I think one of the most important things we can do for security is simply getting to a world where security updates (especially for the operating system/root filesystem) are applied automatically by default. That is of course the bold move that Container Linux made, and we will be preserving it with Fedora CoreOS.

This blog post is focused on the base OS, but when applications are containerized it is usually much easier to keep them updated too.

Doing automatic updates like that is much more tenable if it’s decoupled from core applications, and also if it’s fully transactional/safe as rpm-ostree enables.

We’ve already released OpenShift 4 which is strongly container oriented and contains an opinionated and streamlined way to update the OS together with the cluster, and includes transactional updates for the OS. There’s also an enhancement for automatic updates in progress. Fedora CoreOS work is progressing too – I’m excited to see where we take all of this in 2020!

Christmas Maps

To stick to the tradition, I thought I should write a little post about what's been going on since the stable 3.34 release in September. The main thing that's landed since then for the upcoming 3.36 release is support for public transit route/itinerary planning using third-party providers. The basic support for public transit routing, based on OpenTripPlanner, has been in place since 2017, with the original plan being to find funding/hosting to set up a GNOME-specific instance of OTP fed with a curated set of GTFS feeds. Since this plan didn't come to fruition, I repurposed the existing support so that it can fetch a list of known providers with defined geographical regions: first by utilising the existing OpenTripPlanner implementation (but rewritten to be instantiated per third-party provider), and later by adding plugins for the Swedish Resrobot and Swiss opendata.ch online APIs. These have not yet been activated in the service file (it's using the same service file as for tile and search providers), but this will be there soon, so stay tuned.

And since a screenshot is kind of mandatory, here's one showing a case from Paris:


Happy holidays!

December 21, 2019

scikit-survival 0.11 featuring Random Survival Forests released

Today, I released a new version of scikit-survival which includes an implementation of Random Survival Forests. Like its popular counterparts for classification and regression, a Random Survival Forest is an ensemble of tree-based learners. A Random Survival Forest ensures that individual trees are de-correlated by 1) building each tree on a different bootstrap sample of the original training data, and 2) at each node, only evaluating the split criterion for a randomly selected subset of features and thresholds. Predictions are formed by aggregating the predictions of individual trees in the ensemble.

For a full list of changes in scikit-survival 0.11, please see the release notes.

The latest version can be downloaded via conda or pip. Pre-built conda packages are available for Linux, OSX and Windows via

 conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

 pip install -U scikit-survival

Using Random Survival Forests

To demonstrate Random Survival Forest, I’m going to use data from the German Breast Cancer Study Group (GBSG-2) on the treatment of node-positive breast cancer patients. It contains data on 686 women and 8 prognostic factors:

  1. age,
  2. estrogen receptor (estrec),
  3. whether or not a hormonal therapy was administered (horTh),
  4. menopausal status (menostat),
  5. number of positive lymph nodes (pnodes),
  6. progesterone receptor (progrec),
  7. tumor size (tsize),
  8. tumor grade (tgrade).

The goal is to predict recurrence-free survival time.

The code to reproduce the results below is available in this notebook.

First, we need to load the data and transform it into numeric values.

import numpy as np
from sklearn.preprocessing import OrdinalEncoder
from sksurv.datasets import load_gbsg2
from sksurv.preprocessing import OneHotEncoder

X, y = load_gbsg2()
# Encode tumor grade as an ordinal value, one-hot encode the remaining categorical columns
grade_str = X.loc[:, "tgrade"].astype(object).values[:, np.newaxis]
grade_num = OrdinalEncoder(categories=[["I", "II", "III"]]).fit_transform(grade_str)
X_no_grade = X.drop("tgrade", axis=1)
Xt = OneHotEncoder().fit_transform(X_no_grade)
Xt = np.column_stack((Xt.values, grade_num))
feature_names = X_no_grade.columns.tolist() + ["tgrade"]

Next, the data is split into 75% for training and 25% for testing so we can determine how well our model generalizes.

from sklearn.model_selection import train_test_split
random_state = 20  # assumed seed; any fixed value works
X_train, X_test, y_train, y_test = train_test_split(
    Xt, y, test_size=0.25, random_state=random_state)

Training

Several split criteria have been proposed in the past, but the most widespread one is based on the log-rank test, which you probably know from comparing survival curves among two or more groups. Using the training data, we fit a Random Survival Forest comprising 1000 trees.

from sksurv.ensemble import RandomSurvivalForest

rsf = RandomSurvivalForest(n_estimators=1000,
                           min_samples_split=10,
                           min_samples_leaf=15,
                           max_features="sqrt",
                           n_jobs=-1,
                           random_state=random_state)
rsf.fit(X_train, y_train)

We can check how well the model performs by evaluating it on the test data.

rsf.score(X_test, y_test)

This gives a concordance index of 0.68, which is a good value and matches the results reported in the Random Survival Forests paper.

Predicting

For prediction, a sample is dropped down each tree in the forest until it reaches a terminal node. Data in each terminal node is used to non-parametrically estimate the survival and cumulative hazard functions using the Kaplan-Meier and Nelson-Aalen estimators, respectively. In addition, a risk score can be computed that represents the expected number of events for one particular terminal node. The ensemble prediction is simply the average across all trees in the forest.

Let’s first select a couple of patients from the test data according to the number of positive lymph nodes and age.

import pandas as pd

a = np.empty(X_test.shape[0], dtype=[("age", float), ("pnodes", float)])
a["age"] = X_test[:, 0]
a["pnodes"] = X_test[:, 4]
sort_idx = np.argsort(a, order=["pnodes", "age"])
X_test_sel = pd.DataFrame(
    X_test[np.concatenate((sort_idx[:3], sort_idx[-3:]))],
    columns=feature_names)
    age  estrec  horTh  menostat  pnodes  progrec  tsize  tgrade
0  33.0     0.0    0.0       0.0     1.0     26.0   35.0     2.0
1  34.0    37.0    0.0       0.0     1.0      0.0   40.0     2.0
2  36.0    14.0    0.0       0.0     1.0     76.0   36.0     1.0
3  65.0    64.0    0.0       1.0    26.0      2.0   70.0     2.0
4  80.0    59.0    0.0       1.0    30.0      0.0   39.0     1.0
5  72.0  1091.0    1.0       1.0    36.0      2.0   34.0     2.0

The predicted risk scores indicate that risk for the last three patients is quite a bit higher than that of the first three patients.

pd.Series(rsf.predict(X_test_sel))
0 91.477609
1 102.897552
2 75.883786
3 170.502092
4 171.210066
5 148.691835
dtype: float64

We can have a more detailed insight by considering the predicted survival function. It shows that the biggest difference occurs roughly within the first 750 days.

import matplotlib.pyplot as plt

surv = rsf.predict_survival_function(X_test_sel)
for i, s in enumerate(surv):
    plt.step(rsf.event_times_, s, where="post", label=str(i))
plt.ylabel("Survival probability")
plt.xlabel("Time in days")
plt.grid(True)
plt.legend()

Alternatively, we can also plot the predicted cumulative hazard function.

surv = rsf.predict_cumulative_hazard_function(X_test_sel)
for i, s in enumerate(surv):
    plt.step(rsf.event_times_, s, where="post", label=str(i))
plt.ylabel("Cumulative hazard")
plt.xlabel("Time in days")
plt.grid(True)
plt.legend()

Permutation-based Feature Importance

The implementation is based on scikit-learn’s Random Forest implementation and inherits many features, such as building trees in parallel. What’s currently missing is feature importances via the feature_importances_ attribute. This is due to the way scikit-learn’s implementation computes importances: it relies on a measure of impurity for each child node, and defines importance as the amount of decrease in impurity due to a split. For traditional regression, impurity would be measured by the variance, but for survival analysis there is no per-node impurity measure due to censoring. Instead, one could use the magnitude of the log-rank test statistic as an importance measure, but scikit-learn’s implementation doesn’t seem to allow this.

Fortunately, this is not a big concern though, as scikit-learn’s definition of feature importance is non-standard and differs from what Leo Breiman proposed in the original Random Forest paper. Instead, we can use permutation to estimate feature importance, which is preferred over scikit-learn’s definition. This is implemented in the ELI5 library, which is fully compatible with scikit-survival.

import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(rsf, n_iter=15, random_state=random_state)
perm.fit(X_test, y_test)
eli5.show_weights(perm, feature_names=feature_names)
Weight Feature
0.0676 ± 0.0229 pnodes
0.0206 ± 0.0139 age
0.0177 ± 0.0468 progrec
0.0086 ± 0.0098 horTh
0.0032 ± 0.0198 tsize
0.0032 ± 0.0060 tgrade
-0.0007 ± 0.0018 menostat
-0.0063 ± 0.0207 estrec

The result shows that the number of positive lymph nodes (pnodes) is by far the most important feature. If its relationship to survival time is removed (by random shuffling), the concordance index on the test data drops on average by 0.0676 points. Again, this agrees with the results from the original Random Survival Forests paper.

December 20, 2019

Testing D-Bus clients with libglib-testing

I’ve always found it a bit of a pain to write unit tests for D-Bus client libraries, where you’re testing that your code calls methods on a D-Bus service appropriately and, in particular, correctly handles a variety of return values and errors. Writing unit tests like this traditionally involves writing a mock D-Bus service for them to talk to, which validates the input it receives and provides appropriate responses. That often goes most of the way towards reimplementing the entirety of the real D-Bus service.

Part of the difficulty of testing D-Bus clients like this is synchronising the state of the mock D-Bus service with the test code, and part of the difficulty is the fact that you have to write mock service code for each D-Bus method before you can test it — which is a lot of investment in writing code before you can even start writing your unit tests themselves.

As an experiment in finding a better way of doing this kind of testing, I’ve written GtDBusQueue in libglib-testing, and I think it might be ready for some wider use. Thanks a lot to Endless for allowing me to work on such projects! I’ve used it in a couple of projects now, particularly in libmalcontent (which handles implementing parental controls policy on the desktop, and needs to talk to the accountsservice D-Bus service).

GtDBusQueue basically implements a queue for D-Bus messages received from your D-Bus client code. Each D-Bus message is typically a method call: your unit test can inspect the queue, and will typically pop messages off the front of the queue to assert they match a certain method call, and then send a reply to that call.

A key feature of GtDBusQueue is that it operates as a queue of D-Bus messages, rather than as a collection of D-Bus object proxies (typically GDBusObjectProxy), which means that it can be used to handle method calls to arbitrary D-Bus object paths without having to implement a new proxy class for each of them.

Message matching is typically implemented using gt_dbus_queue_assert_pop_message() (though other methods are available which give you finer-grained control over message matching and removal from the queue). It blocks until the queue is not empty, pops the first message off the front, asserts that its D-Bus object path, interface name and method name are as expected, and then returns the method call parameters to your unit test code using the same syntax as g_variant_get(). Your unit test code can then check the values of those parameters how it pleases.

If your D-Bus client code is asynchronous, GtDBusQueue can be used inline in your unit test. Your client code will start a method call asynchronously, then the test code will pop the method call off the GtDBusQueue, check it and reply, and then your client code will asynchronously finish its method call and handle the results. You can see an example of this in the test_app_filter_bus_get_error_disabled() test in libmalcontent which, in a single function, tests that the mct_manager_get_app_filter_async() API can correctly handle a D-Bus InvalidArgs error returned by the second D-Bus call it makes.

If your D-Bus client code is synchronous, GtDBusQueue needs to run in a thread using gt_dbus_queue_set_server_func(), since otherwise it would block your D-Bus client code. The unit test and the server thread take turns at blocking on pushing messages onto the queue or popping them off. You can see an example of this (which also works for asynchronous client code, testing both the synchronous and asynchronous code paths in a single test) in the GtDBusQueue usage example in its documentation.

That’s a brief introduction to GtDBusQueue; hopefully it’s given you a bit of an idea about where it’s appropriate and how it can be used. There’s documentation in the source code (including some usage examples), and a load of usage examples in libmalcontent. Feedback, questions and improvements are always welcome!

More on Flatpak updates

The last time I talked about flatpak updates, I explained how flatpak apps can detect that a newer version has been installed, and restart themselves. That is great, and may almost be good enough when you have automatic updates. But that is not always the case.

Thankfully, we can do better. Since 1.5, Flatpak has a portal API that lets applications monitor for updates, and request updating themselves.

Here is how this looks when it is all put together:

In the terminal, I’m building a new version of the portal test app and updating my (local) repository. The flatpak portal notices that the update appeared (I’m running it with a short poll timeout here, instead of the usual 30 minutes) and sends out a D-Bus signal to the application, which requests to be updated and then restarts itself.

Using the portal API directly is not very convenient, since you have to listen to D-Bus signals and whatnot. Therefore, we now have a library called libportal, which provides simple async wrappers for most portals. That is what the portal test app in the demo is using, and you should be using it too in your applications.

The first stable release of libportal will appear very soon, with Flatpak 1.6, and then it will find its way into runtimes.

Update: Since this is a portal, users are in control of what apps are allowed to do. If you don’t want an application to update itself, you can put an end to it with

flatpak permission-set flatpak updates $APPID no

Use ‘ask’ instead of ‘no’ to get a confirmation dialog. The permission-set command is new in flatpak 1.6.

December 18, 2019

ATK, GTK, and plans for 2020

The GNOME Project is built by a vibrant community and supported by the GNOME Foundation, a 501(c)(3) non-profit charity registered in California (USA). The GNOME community has spent more than 20 years creating a desktop environment designed for the user. We’re asking you to become a Friend of GNOME, with a recommended donation of $25/month ($5/month for students). We’re working to have 100 new Friends of GNOME join by January 6, 2020.

GNOME is about so much more than a desktop environment. In addition to the eponymous GNOME desktop, we work on projects like GStreamer, GTK, and Flatpak. We have a mostly complete list of technologies you can read on our web site. While the Foundation largely works on support, we also do development and outreach for GTK and the GNOME core application development platform.

A group of people around a conference room table, covered in things. Everyone is smiling.
West Coast Hackfest 2019

In addition to routine, and some not so routine, fixes, Emmanuele Bassi, GTK Core Developer, led development initiatives across the GNOME ecosystem. A far from complete list of work includes:

  • reliability and usability of continuous integration (CI) for GLib and GTK;
  • completed constraints layout work for GTK4;
  • progress on the animation framework API for GTK, a necessary step for the GTK 4 release; and
  • reviewing contributions and closing of numerous bugs.

Emmanuele mentored Ravgeet Dhillon in Google Summer of Code working on updates to the GTK web site. Additionally, Xiang Fan worked on GTK 4 Rust bindings.

Additionally, Emmanuele worked on the migration of the various GTK mailing lists to the new Discourse support forum.

We are already working on projects for 2020. Notably, there will be a hackfest in Brussels before FOSDEM, focused on GTK 4 (serving as a checkpoint for the 2020 release) and on accessibility (a11y).

A11y work is very important to us at the GNOME Foundation. We believe software needs to be for everyone, which means it needs to work for people who have physical disabilities, including those who are blind. In general, we plan to do a major a11y overhaul in 2020, focusing on developing our Accessibility Toolkit (ATK). We are auditing what exists right now, and are currently seeking expert help with this. We hope to partner with other projects to come together and create a11y support that rivals that of proprietary options.

In order to push these projects forward, we need your help. Please consider becoming a Friend of GNOME in order to support our work on new accessibility development, community building around a11y, and getting GTK 4 out the door.

December 17, 2019

GMemoryMonitor (low-memory-monitor, 2nd phase)

TL;DR

Use GMemoryMonitor in glib 2.63.3 and newer in your applications to lower overall memory usage, and detect low memory conditions.

low-memory-monitor

To start with, let's come back to low-memory-monitor, announced at the end of August.

It's not really a “low memory monitor”. I know, the name is deceiving, but it actually monitors memory pressure stalls, and how hard it is for the kernel to allocate memory when applications need it. The harder it is to allocate memory, the longer the kernel takes to allocate it, usually because it needs to move memory around to make room for a big allocation, for example when an application starts up or prepares an in-memory buffer for saving.

It is not a daemon that will kill programs on low memory. It's not a user-space out-of-memory killer, and does not make those policy decisions. It can however be configured to ask the kernel to do that. The kernel doesn't really know what it's doing though, and user-space isn't helping either, so best disable that for now...

As listed in low-memory-monitor's README (and in the announcement post), there were a number of similar projects around, but none that offered everything we needed, e.g.:
  • Has a D-Bus interface to propagate low memory conditions
  • Requires Linux 5.2's kernel memory pressure stalls information (Android's lowmemorykiller daemon has loads of code to get the same information from the kernel for older versions, and it really is quite a lot of code)
  • Written in a compiled language to save on startup/memory usage costs (around 500 lines of C code, as counted by sloccount)
  • Built-in policy, based upon values used in Android and Endless OS

GMemoryMonitor

Next up, in our effort to limit memory usage, we'll need some help from applications. That's where GMemoryMonitor comes in. It's simple enough: listen to the low-memory-warning signal and, when you receive it, free some image thumbnails or index caches, or dump some data to disk.

The signal also gives you a “warning level”, with 255 being when low-memory-monitor would trigger the kernel's OOM killer, and lower values different levels of “try to be a good citizen”.

The more astute amongst you will have noticed that low-memory-monitor runs as root, on the system bus, and will wonder how those new-fangled (5 years old today!) sandboxed applications would receive those signals. Fear not! Support for a portal version of GMemoryMonitor landed in xdg-desktop-portal on the same day as in glib. Everything is tied together with installed tests that use the real xdg-desktop-portal to test the portal and unsandboxed versions.

How about an OOM killer?

By using memory pressure stall information, we receive information about the state of the kernel before getting into swapping that'd cause the machine to become unusable. This also means that, as our threshold for keeping everything ticking is low, if we were to kill high memory consumers, we'd get a butter smooth desktop, but, based on my personal experience, your browser and your mail client would take it in turns disappearing from your desktop in a way that you wouldn't even notice.

We'll definitely need to think about our next step in application state management, and changing our running applications paradigm.

Distributions should definitely disable the OOM killer for now, and possibly try their hand at upstreaming some systemd OOMPolicy and OOMScoreAdjust options for system daemons.

Conclusion

Creating low-memory-monitor was easy enough; getting everything else in place was decidedly more complicated. In addition to requiring changes to glib, xdg-desktop-portal and python-dbusmock, it also required a lot of work on the glib CI to save me from having to write integration tests in C that would have required a lot of scaffolding. So thanks to all involved, in particular Philip Withnall for his patience reviewing my changes.

December 16, 2019

Shell aliases for Flatpak applications

Although I gave up on tuning AwesomeWM configuration years ago and switched completely to GNOME, I still spend most of my time in the terminal. Instead of navigating to photos directory in Nautilus, I instinctively spawn a new terminal window with Super + Enter and type eog ~/pics/filename.jpg. This has become harder as I replaced almost all desktop applications provided by my distribution of choice with packages from Flathub.

Modern-day Flatpak generates simple shell wrappers at /var/lib/flatpak/exports/bin. To make it more secure, it uses the application ID as the command name. It would not be fun to have sudo shadowed by a rogue application if that bin directory had a higher priority in $PATH.
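
For example (assuming the Flathub build of Eye of GNOME is installed under the ID org.gnome.eog, and that the exports directory is on $PATH as it usually is), the wrapper can be invoked directly by its ID:

# The wrapper name is the application ID, not the usual command name
org.gnome.eog ~/pics/filename.jpg

# Which is equivalent to invoking flatpak explicitly:
flatpak run org.gnome.eog ~/pics/filename.jpg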

While it is easy to guess the reverse-DNS IDs used for GNOME and KDE applications, guessing the IDs of other applications can be burdensome. I am lazy, so I wrote a script to generate shell aliases based on the command defined by each application.

#!/bin/bash
# Map each installed Flatpak application ID to the command name it defines.
declare -A aliases
for bin in /var/lib/flatpak/exports/bin/* ~/.local/share/flatpak/exports/bin/*; do
    appid="$(basename $bin)"   # the exported wrapper is named after the application ID
    # Extract the command= line from the application's metadata.
    cmd="$(flatpak info -m $appid | awk -F= '/^command=/ {print $2}')"
    [[ -z $cmd ]] && continue  # skip entries without a command
    aliases[$appid]="$(basename ${cmd##*/})"
done
# Write one alias per application, mapping the command name to the application ID.
(
    for appid in "${!aliases[@]}"; do
        echo "alias ${aliases[$appid]}=$appid"
    done
) > ~/.cache/flatpak-aliases

This approach is not ideal, as the command can be set arbitrarily. Some applications use helper scripts as the entry point; for example, org.keepassxc.KeePassXC ends up aliased as command-wrapper.sh. I consider it good enough though.
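
To actually pick the aliases up in new shells, the generated file can be sourced from a shell startup file, for example:

# In ~/.bashrc (or equivalent): load the aliases generated by the script above
[ -f ~/.cache/flatpak-aliases ] && source ~/.cache/flatpak-aliases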

December 15, 2019

GNOME Outreachy 2019

The Outreachy Program

The Outreachy program provides internships to work in Free and Open Source Software. This year I've proposed two projects as part of GNOME, and we have two interns working for three months, so we'll see a lot of improvements in the following months!

I'll be mentoring these interns, so I will need to spend some time helping them work on the existing codebase, but it's worth it if it gets more people to collaborate in free software development and helps us improve some useful apps.

These two projects are Fractal and the GNOME translation editor. You can take a look at the list of Outreachy interns.

Fractal

Fractal is a Matrix.org GTK client, and my proposal for this year's program was to implement a video player in the message list. We have a preview for images and for audio files, but nothing for video files.

Sonja Heinze is the one who will be working on this during the next three months. She has been working during the past month on some small issues in Fractal, so I'm really sure she will be able to make great contributions to the project.

Jordan Petridis (alatiera) will be helping with this project as a co-mentor. I don't know a lot about GStreamer, so he'll be really helpful here with the GStreamer and Rust parts.

GNOME translation editor (Gtranslator)

GNOME translation editor (gtranslator) is a simple .po editor. I've proposed to rework the search and replace dialog. Right now we have a simple find/replace modal dialog, and I want to modernize the interface to integrate better into the window as a popover.

Priyanka Saggu is the one who will be working on this during the next three months. She has been working on gtranslator during the past month and has made great contributions and improvements during this time.

Daniel Mustieles is the other co-mentor for this project. He's an experienced GNOME translator so he will help us a lot with the app user experience design and testing.

openSUSE Asia Summit 2019, Bali

When you travel internationally for the very first time, there are a lot of things going on in your head. Especially for someone like me, who is a vegetarian and was travelling all alone with no experience of flying. I was very nervous, thinking about the culture of the place I was going to, and nervous about the flight itself; I watched a lot of “how to save yourself” videos about flying. 🤫

So, yeah it was pretty messed up.

When I was boarding the plane, my mom called me in the last 10 minutes and said the news was reporting some “terrorists” entering Delhi (where my flight was). She was really, really nervous, and at one point I said, “Do you want me to leave everything and abort?” Well, no 😅

So, a journey filled with nervousness began. I boarded Singapore Airlines and, damn, it was an A380: my first international flight, and on a jumbo at that. I forgot all of that as soon as I entered the plane.

I gotta tell you, from the very start I began to realise how good humanity can be. The cabin crew were taking such good care of the passengers that I felt welcome, and I guess so indebted that my words just aren’t enough.

I mean, I can joke about this and say “At one point it felt like this much care I don’t even get from my parents”😉 

I slept for a while on the flight (it was a midnight flight), and then it hit me: the crew was up the whole time making sure that we slept well. I was so touched by this that I reached out to the crew and talked to them about it. They were very welcoming, told me about their job, and I had a nice chat with them. The whole experience was just so nice.

In the end, they reached out to me and shared a token of gratitude: Singapore Airlines playing cards and a ballpoint pen, with a letter saying they enjoyed having me as a passenger. I wasn't aiming for any gifts; I just went to them, asked about their job and genuinely appreciated their hard work.

I still manage to store this precious memory 🙂
Hope to see you again Mr. Westwood

I had a layover at Changi Airport, Singapore. And yes, it's absolutely worth the hype on the Internet. It's really high tech, with lots of recreational stuff, amazing gardens and “The Jewel” to see; operations-wise they also do their best!

You will have a good time layover-ing in Singapore :p

Now, by this point you should have an idea of how comfortable I had become so far. Yep, that's correct 🙂

I reached Bali after a 1-1.5 hour flight from Singapore. The comfort was gone… :/ because I had never dealt with foreign exchange before, and I had to reach the hotel somehow. So I was thinking to myself: can I pay online? Do I use my ATM card? Do I buy a SIM card here? Do I exchange the few dollars I brought with me?… and so on. After a lot of nervousness and searching inside and outside the airport, I exchanged a few dollars, just slightly more than my taxi fare, and reached my hotel.

Pheww 😪 , First test over!

Now, I checked in and rested for that day; my conference started the next day, on 5th October.

Day 1

I reached the venue, 1.7 km away, on a sunny day (it was hot), on foot :/ because it did not hit me at that moment that the motorbike ride apps Gojek and Grab exist there.

Here I was welcomed by the kind volunteers and, after a bit of attendance paperwork, I got my welcome kit 😋

Woah, it was goodylicious 😉

:p There was a bottle too, not in this picture :/

I then attended a few talks. One of them was about creating open source communities in Indonesia, where the main focus was on students; being a student myself, I took part in that discussion. Another one was by Mr. Segitz from SUSE, who talked about how SUSE takes in security bugs, how they handle them, critical bugs, etc.

I asked him about something similar to Android, where you restrict apps based on permissions; he said it was a good idea and could be discussed. Neil added a point about Flatpak to this discussion, and yeah, it's a good idea 🙂 Flatpak has those things already :p

I met Ahmad Haris for the very first time; can't forget his custom shoes 😉 he was showing them off at that moment :p He managed to get them made by “FANS”, I mean, he's a CEO haha (as per him :p)

I met more GNOME folks there for the very first time. I mean, that feeling when you meet the folks behind IRC and chats for the first time: yeah, it was phenomenal, and it just makes me more excited about upcoming GUADECs. I just want to meet everyone :p

So, there at the openSUSE Summit, we had a little GNOME world 🙂: Ahmad, Kukuh, Rania, Shobha, Rosanna, Neil and others 🤗

It was a really great day.

I gotta say, I had doubts about Indonesian culture in general; my stereotypical family was worried because the majority there follow a religion I do not belong to, and other stuff.

But, to be honest, I have never met such respectful, generous, helpful people in my entire little life that I have met in Indonesia. They are just the best humans out there!

Oh, that’s our little cute “minix” :p (I gave her a nickname) and the University campus was extremely beautiful with a really cool view of the Sea 🙂

I met many cool folks from the Japanese and Chinese openSUSE communities! It just gets better and better when you meet folks from around the FOSS world. We are all working towards the same goal: to make the world a better place! I already described in my previous GNOME Asia blog that FOSS can make a huge impact on people's lives. It has tremendous possibilities, and when I personally saw companies like “FANS” getting value out of it, I was stunned and really happy 🙂

Day 2

I attended Neil's talk, and loved how he slipped in the GNOME education challenge at the end, because yeah, the talk finished a few minutes early 😉

Then one of the really amazing talks was by Mr. Takeyama from the Japanese community; they are doing a really great job out there increasing Japanese support in amazing FOSS tools by contributing personally. And the coolest part is that they publish “Geeko Magazine” every 6 months, where they personally manage everything, with cool Japanese-styled animation covers too 😋.

He generously gave me a sample, but sadly it does not have a manga-like animated cover :/ but yeah, still a cool thing to have :p

If you happen to visit Japan and get a chance to attend the Comiket festival, they have a stall there; grab a copy before stock runs out (really limited).

Credits: https://blog.geeko.jp/category/geeko-magazine
My Treasure 😉

When I went back to the hotel on the second day, the openSUSE bottle broke :/ I got sad, really, because it was cool. I asked Kukuh whether there might be an extra one left and whether he could get it to me at GNOME Asia, and wow, to my surprise it worked out and he arranged one for me. Now I have it safe with me 🙂

Thanks a lot Kukuh! , means a lot!

My cracked bottle :/

I am really, astonishingly amazed by the great Indonesian people, their culture, and especially their FOSS community: how well they manage to organize such beautiful events, and how connected they are. I mean, most of the organizing committee and participants at GNOME Asia were at openSUSE too 🙂

The best part is that even working people take time out to collaborate with the university staff and students and make these amazing events happen.

To be honest I am a bit jealous of the community there; I hope we have that kind of presence in India too 🙂

Thank you to everyone out there who made this event happen and took care of the participants and speakers! I would definitely like to give a talk there next time 🙂

And for all of you who did not make it there: yeah, you missed a lot :p

*Note: I would really like to know in the comments if you are interested in hearing about Bali tourism; this post got a bit long, and if there is a good response I would love to share that part of the journey 🙂

Peace ✌ 

Hope to meet you all back again!

Outreachy week-2 progress report!

December 15, 2019

Task for the week:

  • Try to replicate the gnome-builder “search and replace bar” widget (just the wire-frame) in the Gtranslator project.
    • [sub-task] First try doing the above task in a separate simple application.

Summary of the week:

It was a really productive week. I am almost done with the current tasks. I’ve finished replicating the wire-frame of gnome-builder’s search-and-replace-bar widget in the libdazzle-example application (although the code is a complete mess right now, which I’ll refactor once I see it is actually working). There are a couple (or maybe a couple more) of final nitpicks to address before actually marking these as finished.

At the moment, I am far more comfortable with the project. Nothing seems really alien-ish now; rather, most of the stuff (from the project) looks quite familiar (and makes reasonable sense).

Compiling below each day’s progress in brief:

Day 01:

Day 02:

  • Had my first weekly-meeting with danigm. Discussed various standing doubts.
  • Then later in the day, attended the first outreachy zulip chat conversation.

Day 03:

  • Was able to build and isolate the gnome-builder’s development environment properly.
  • Read a couple of blogs from the gnome-builder developers. Thus, explored a couple of new (then unknown) features of the Builder-IDE.
  • Later, also started replicating the search-and-replace-bar widget in the example application.

Day 04:

  • No progress for the day. Wasn’t able to focus much.
  • So, spent most of the time reading stuff.

Day 05:

  • Finally succeeded in reverse engineering the gnome-builder’s search_and_replace_bar widget into its units.
  • This time, started picking up the concerned widget’s source code (for recreation inside libdazzle’s example application), the right way.

Day 06:

  • Done recreating the widget in the example application. Still need to work more on the invoking action.
  • Have started testing the re-created widget files in the gtranslator project as well.