24 hours a day, 7 days a week, 365 days per year...

October 28, 2016

GNOME Core Apps Hackfest – Sponsors

As I mentioned in my previous blog post, we organized a hackfest to discuss everything about the core GNOME experience, with emphasis on the core apps and taking into account their impact on third-party developers too.
As you can imagine, bringing together a sizeable number of developers, designers and community members in a single place involves travel costs, accommodation, an appropriate place where we can gather and discuss, with internet access and tables… and apart from that, small details that improve the overall experience, like snacks and something to unwind with after a long day, such as a simple dinner all together.

This would not be possible without the GNOME Foundation and the companies who believe this is important enough to sponsor both the attendees and the organized events.
In this post I want to announce and thank the (so far) two sponsors we already have!

The sponsors

Kinvolk folks will provide us the venue and snacks every day, thanks!


Collabora will provide us a sponsored dinner the first day, thanks!



And of course thanks to the GNOME Foundation and Red Hat, who will cover a big part of the travel and accommodation costs.

Hope to see you all there!

October 27, 2016

Conservancy's First GPL Enforcement Feedback Session

[ This blog was crossposted on Software Freedom Conservancy's website. ]

As I mentioned in an earlier blog post, I had the privilege of attending Embedded Linux Conference Europe (ELC EU) and the OpenWrt Summit in Berlin, Germany earlier this month. I gave a talk (for which the video is available below) at the OpenWrt Summit. I also had the opportunity to host the first of many conference sessions seeking feedback and input from the Linux developer community about Conservancy's GPL Compliance Project for Linux Developers.

ELC EU has no “BoF Board” where you can post informal sessions. So, we scheduled the session by word of mouth over a lunch hour. We nevertheless got a good turnout (given that our session's main competition was eating food :) of about 15 people.

Most notably and excitingly, Harald Welte, well-known Netfilter developer and leader of gpl-violations.org, was able to attend. Harald talked about his work enforcing his own copyrights in Linux, and explained why this was important work for users of the violating devices. He also pointed out that some of the companies that were sued during his most active period of enforcement are now regular upstream contributors.

Two people who work in the for-profit license compliance industry attended as well. Some of the discussion focused on usual debates that charities involved in compliance commonly have with the for-profit compliance industry. Specifically, one of them asked how much compliance is enough, by percentage? I responded to his question on two axes. First, I addressed the axis of how many enforcement matters does the GPL Compliance Program for Linux Developers do, by percentage of products violating the GPL? There are, at any given time, hundreds of documented GPL violating products, and our coalition works on only a tiny percentage of those per year. It's a sad fact that only that tiny percentage of the products that violate Linux are actually pursued to compliance.

On the other axis, I discussed the percentage on a per-product basis. From that point of view, the question is really: Is there a ‘close enough to compliance’ that we can as a community accept and forget about the remainder? From my point of view, we frequently compromise anyway, since the GPL doesn't require someone to prepare code properly for upstream contribution. Thus, we all often accept compliance once someone completes the bare minimum of obligations literally written in the GPL, but gives us a source release that cannot easily be converted to an upstream contribution. So, from that point of view, we're often accepting a less-than-optimal outcome. The GPL by itself does not inspire upstreaming; the other collaboration techniques that are enabled in our community because of the GPL work to finish that job, and adherence to the Principles assures that process can work. Having many people who work with companies in different ways assures that as a larger community, we try all the different strategies to encourage participation, and inspire today's violators to become tomorrow's upstream contributors — as Harald mentioned has already often happened.

That same axis does include one rare but important compliance problem: when a violator is particularly savvy, and refuses to release very specific parts of their Linux code (as VMware did), even though the license requires it. In those cases, we certainly cannot and should not accept anything less than required compliance — lest companies begin holding back all the most interesting parts of the code that the GPL requires them to produce. If that happened, the GPL would cease to function correctly for Linux.

After that part of the discussion, we turned to considerations of corporate contributors, and how they responded to enforcement. Wolfram Sang, one of the developers in Conservancy's coalition, spoke up on this point. He expressed that the focus on for-profit company contributions, and the achievements of those companies, seemed unduly prioritized by some in the community. As an independent contractor and individual developer, Wolfram believes that contributions from people like him are essential to a diverse developer base, that their opinions should be taken into account, and their achievements respected.

I found Wolfram's points particularly salient. My view is that Free Software development, including for Linux, succeeds because both powerful and wealthy entities and individuals contribute and collaborate together on equal footing. While companies have typically enforced the GPL on their own copyrights only for business reasons (e.g., there is at least one example of a major Linux-contributing company using GPL enforcement merely as a counter-punch in a patent lawsuit), individual developers who join Conservancy's coalition follow community principles and enforce to defend the rights of their users.

At the end of the session, I asked two developers who hadn't spoken during the session, and who aren't members of Conservancy's coalition, their opinion on how enforcement was historically carried out by gpl-violations.org, and how it is currently carried out by Conservancy's GPL Compliance Program for Linux Developers. Both responded with a simple response (paraphrased): it seems like a good thing to do; keep doing it!

I finished up the session by inviting everyone to join the principles-discuss list, where public discussion about GPL enforcement under the Principles has already begun. I also invited everyone to attend my talk, which took place an hour later at the OpenWrt Summit, co-located with ELC EU.

In that talk, I spoke about a specific example of community success in GPL enforcement. As explained on the OpenWrt history page, OpenWrt was initially made possible thanks to GPL enforcement done by BusyBox and Linux contributors in a coalition together. (Those who want to hear more about the connection between GPL enforcement and OpenWrt can view my talk.)

Since there weren't opportunities to promote impromptu sessions on-site, this event was a low-key (but still quite nice) start to Conservancy's planned year-long effort seeking feedback about GPL compliance and enforcement. Our next session is an official BoF session at Linux Plumbers Conference, scheduled for next Thursday 3 November at 18:00. It will be led by my colleagues Karen Sandler and Brett Smith.

2016-10-27 Thursday.

  • Mail chew; very fuzzy head cold, bits of planning, and ordering. Poked at POSS travel, late lunch with Lydia; Grace over to play with M. ESC call, minutes, reviewed JMux's idle re-work.

October 26, 2016

GObject and SVG

GSVG is a project to provide a GObject API for SVG, written in Vala. It has almost all of the interfaces from the W3C SVG 1.1 specification, plus some complementary ones.

GSVG is an LGPL library. It will use GXml as its XML engine. SVG 1.1's DOM interfaces rely on the W3C DOM, so using GXml is a natural choice.

SVG is XML, and its DOM interfaces require the use of objects' properties and the ability to add child DOM elements; therefore, we need a new set of classes.

GXml has a Serialization framework that can be used to map GObject properties to XML element attributes, and collections of child nodes to GObjects. I've created some other projects, like LibreSCL, using it.

The Serialization framework requires creating an XML tree first, and then filling out the GObject properties. This can add some delay on large files.

Considering that LibreSCL has to deal with files of about 10 MB to 60 MB, with thousands of XML nodes, this XML tree → GObject properties process can take 10 to 20 seconds.

Some time ago, I imagined having a GObject class as an XML node. That is, an XML element node represents a GObject: the XML element's attributes map directly to the GObject's properties, and the XML element's child nodes become collections inside the GObject's properties.

Now, with SVG and GXml supporting DOM4, I have the opportunity to create a GObject class you can derive from to turn your classes into XML nodes, making serialization/deserialization faster and reducing the memory footprint.

Let’s see what is coming and how these projects evolve. As always, any help is welcome.

P.S. As a side note, I was able to copy/paste, with little modification, the W3C interface definitions into Vala ones in a short time, thanks to Vala's syntax.

2016-10-26 Wednesday.

  • Mail chew; updated git repos. Drove south, while chewing mail & bits in the car. Terrible traffic, lunch at Clumber Park. Onwards home. Nailed a particularly silly unit test issue of my own creation.
  • Home, signed and scanned paperwork, built ESC bug stats.

Shotwell moving along

I have released 0.25.0 on Monday.

Contrast Tool

A new feature that was included is a contrast slider in the enhancement tool, moving on with integrating patches that have been hanging around on Bugzilla for quite some time.

SSL certificate handling

A second enhancement that was introduced is the option to override invalid SSL certificates. This is currently only available in the Piwigo publisher and might be added to Gallery3 in the future, i.e. to all services that might be self-hosted with self-signed certificates.


Enhanced ACDSEE support

0.25 introduces the support of reading ACDSEE’s proprietary image tags such as titles, categories and hierarchical tags!

If you want to try it, there’s a new, unstable, PPA for Shotwell at


Dual-GPU integration in GNOME

Thanks to the work of Hans de Goede and many others, dual-GPU (aka NVidia Optimus or AMD Hybrid Graphics) support works better than ever in Fedora 25.

On my side, I picked up some work I originally did for Fedora 24, but ended up being blocked by hardware support. This brings better integration into GNOME.

The Details Settings panel now shows which video cards you have in your (most likely) laptop.

dual-GPU Graphics

The second feature is what Blender and 3D video games users have been waiting for: a contextual menu item to launch the application on the more powerful GPU in your machine.

Mooo Powaa!

This demonstration uses a slightly modified GtkGLArea example, which shows which of the GPUs is used to render the application in the title bar.

on the integrated GPU

on the discrete GPU

Behind the curtain

Behind those 2 features, we have a simple D-Bus service, which runs automatically on boot, and stays running to offer a single property (HasDualGpu) that system components can use to detect what UI to present. This requires the "switcheroo" driver to work on the machine in question.

Because of the way applications are launched on the discrete GPU, we cannot currently support D-Bus activated applications, but GPU-heavy D-Bus-integrated applications are few and far between right now.

Future plans

There's plenty more to do in this area, to polish the integration. We might want applications to tell us whether they'd prefer being run on the integrated or discrete GPU, as live switching between renderers is still something that's out of the question on Linux.

Wayland dual-GPU support, as well as support for the proprietary NVidia drivers are also things that will be worked on, probably by my colleagues though, as the graphics stack really isn't my field.

And if the hardware becomes more widely available, we'll most certainly want to support hardware with hotpluggable graphics support (whether gaming laptop "power-ups" or workstation docks).


All the patches necessary to make this work are now available in GNOME git (targeted at GNOME 3.24), and backports are integrated in Fedora 25, due to be released shortly.

Distributing spotify as a flatpak

One of the features in the recent flatpak release is described as:

Applications can now list a set of URIs that will be downloaded with the application

This seems a bit weird. Downloading an application already means you're loading data from a URI. What is the use case for this?

Normally all the data that is needed for your application will be bundled with it and this feature is not needed. However, in some cases applications are freely downloadable, but not redistributable. This only affects non-free software, but Flatpak wants to support both free and non-free software for pragmatic reasons.

Common examples of this are Spotify and Skype. I hope that they eventually will be available as native flatpaks, but in order to bootstrap the flatpak ecosystem we want to make it possible to distribute these right now.

So, how does this work? Let's take Spotify as an example. It is available as a binary Debian package. I've created a wrapper application for it which contains all the dependencies it needs. It also specifies the URL of the Debian package and its sha256 checksum, plus a script to unpack it.

When the user installs the application it also downloads the deb, and then (in a sandbox) runs the unpack script which extracts it and puts the files in the right place. Only then is the installation considered done, and from then on it is used read-only.

I put up a build of the spotify app on S3, so to install spotify all you need to do is:

flatpak install --from

(Note: This requires flatpak 0.6.13 and the remote with the freedesktop runtime configured)

Here is an example of installing spotify from scratch:

New features in GNOME To Do

It’s been a while, folks.

Some of you might have noticed that GNOME To Do wasn’t released with GNOME 3.22. There is a reason for that: I didn’t have enough time to add new features, or fix any bugs. But that changed, and in fact big things happened.

Let's check it out.

More extensible than ever

You guys know that GNOME To Do has built-in support for plugins, and is highly extensible. You can add new panels, hook up your amazing new feature in various ways, and add new data sources, all without touching the core of the application.

For this release, I ported the ‘Today’ and ‘Scheduled’ panels to plugins. That means you can customize most of your experience within GNOME To Do now – we ship various plugins and a default experience, and the user selects what fits them best.

Many plugins are available now – and the list will grow.

I, for one, don’t use the ‘Scheduled’ panel that much, so I simply wiped it out of my way.

GNOME To Do without the ‘Scheduled’ panel

It feels very fresh to customize it to perfectly fit my workflow. But that's not all that happened!


Subtasks

Pretty much a requirement for me, subtasks are very important to my workflow. I usually have a big task (usually a project) with many subtasks. It required a huge amount of work to make this happen in GNOME To Do, but the result is quite nice.

Management of subtasks is done by drag and drop

You can drag a row over another one, and tada!, you've just made the dragged row a subtask of the hovered row.

Subtasks in GNOME To Do

One more step into making GNOME To Do my default task list app.

Dark theme support

Dark theme support was basically non-functional in GNOME To Do. Because the app heavily relies on custom theming, much of the CSS it used to ship only worked well on light themes. Since I don't use the dark theme myself, and because no one ever provided a patch, this was never fixed until now.

And to demonstrate how powerful the plugin system is, I added a tiny plugin that switches between the dark and light variants, hoping that it can be used as an example plugin for newcomers.

The new Dark Theme plugin in action.

What about the release?

I will release GNOME To Do 3.22 after some more testing – I'm pretty sure there are very specific corner cases that I didn't manage to reproduce, and someone will run into them.

Overall, I think this is a nice feature set to land, and I'm quite happy with the current state of GNOME To Do. Of course it could use some improvements, but I'll focus my development on new plugins, new data sources (Todoist, anyone? Remember The Milk? Here we go!) and exotic features like statistics about your productivity, RPG-like task management, etc.

And, as tradition dictates, a nice new video showing this hotness:

Excited? Join us in creating the best task list management application ever. New contributors are always welcome. Join the #gnome-todo room on the GNOME IRC server, test the application, file bugs – every single contribution matters.


October 25, 2016

Builder Rust

With Federico’s wonderful post on Rust’ifying librsvg I guess it makes sense to share what I’ve been doing the last couple of days.

I’ve been keeping my eye on Rust for quite a while. However, I’ve been so heads down with Builder the last two years that I haven’t really gotten to write any or help on integration into our platform. Rust appears to take a very pragmatic stance on integration with systems code (which is primarily C). The C calling convention is not going anywhere, so at some point, you will be integrating with some part of a system that is “C-like”. Allowing us to piecemeal upgrade the “Safety” of our systems is much smarter than rewrite-the-universe. This pragmatism is likely due to the realities of Rust’s birth at Mozilla. It’s a huge code-base, and incrementally modernizing it is the only reality that is approachable.

We too have a lot of code. And like many other projects, we care about being language agnostic to a large degree. In the early days we might have chosen C because it was the only toolchain that worked reliably on Linux (remember C++ pre-2000? or LinuxThreads?) but today we still care about “Language Interoperability” and C is the undeniable common denominator.

One way in which we can allow the interoperability that we desire and the Safety we need is to start approaching some of our problems like Federico has done. The C calling convention is “Safe”. That is not where zero-day bugs come from. They come from hastily written C code. It is perfectly fine to write our interfaces in C (which can be our basis for GObject Introspection) but implement Safety-critical portions in Rust.

This is exactly what I'd like to see from the hundreds of thousands of lines of C I've written over the years. We need to identify and fix the Safety-critical portions. GStreamer is already on the right track here by looking at codec and plugin implementations in Rust. I'm not sure how far they will go with adapting to Rust, but it will be one of the best case studies we will have to learn from.

So because of this desire to look at building a Safer Platform for developers and users alike, I've decided to start adding Rust support to Builder. Thanks to the hard work of the Rust team, it's a fairly easy project from our side. There is a new Language Server Protocol 2.0 that was worked on by various people at Microsoft, Red Hat, and elsewhere. The new rustls was announced last week and uses this protocol. So I implemented support for both as quickly as I could, and now we have something to play with.

Because of the Language Server Protocol, our Rust plugin is tiny. It is essentially a glorified supervisor to handle subprocess crashes and some binding code to connect our stdin/stdout streams to the Language Server Protocol client in Builder. See for yourself.
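As a rough illustration of what a “glorified supervisor” means here (a sketch under assumptions, not Builder's actual plugin code), the restart-on-crash loop could look something like this:

```rust
use std::process::Command;

/// Minimal sketch of a supervisor: run a command, and if it crashes
/// (non-zero exit), restart it up to `max_restarts` times.  The real
/// plugin would also wire the child's stdin/stdout to the LSP client.
fn supervise(program: &str, args: &[&str], max_restarts: u32) -> bool {
    for attempt in 0..=max_restarts {
        let status = Command::new(program)
            .args(args)
            .status()
            .expect("failed to spawn child process");
        if status.success() {
            return true; // clean exit, nothing left to do
        }
        eprintln!("child exited with {:?}, restart attempt {}",
                  status.code(), attempt + 1);
    }
    false // gave up after too many crashes
}

fn main() {
    // `true` exits 0 immediately, so the supervisor reports success.
    assert!(supervise("true", &[], 3));
    // `false` always exits 1, so the supervisor eventually gives up.
    assert!(!supervise("false", &[], 2));
}
```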

There is a bunch more work for us in Builder to make it a great Rust IDE. But if people start playing with the language and are willing to work on Builder and GNOME to improve things as a community, we can build a Modern, Safe, and Elegant developer platform.

The big ticket next steps for those that want to contribute to Rust support would include:

  • Cargo build system support. I do believe that ebassi started on something here. But I need to circle back around and sync up.
  • Symbol Tree needs improvements.
  • Semantic highlighter (which we can implement using fuzzy symbols from the symbol tree until a real protocol comes along).
  • Upstream rustls needs work too to get us the features we want. So Rustaceans might want to spend some time helping out upstream.
  • We need to simplify the glue code from Rust←→GObject so that it is dead simple to wrap Rust code in a GObject-based library (where we get our free language interoperability).
  • … and of course all the other Builder plumbing that needs to happen in general. See our list of projects that need work.
This image depicts control+clicking on a symbol to jump to its definition. <alt>+period or :gd in Vim mode also work.
This image shows diagnostics displayed over source code.
This image shows completion of fields from a struct.
This image shows the Symbol Tree on the right containing elements from the Rust document.

Tue 2016/Oct/25

  • Librsvg gets Rusty

    I've been wanting to learn Rust for some time. It has frustrated me for a number of years that it is quite possible to write GNOME applications in high-level languages, but for the libraries that everything else uses ("the GNOME platform"), we are pretty much stuck with C. Vala is a very nice effort, but to me it never seemed to catch much momentum outside of GNOME.

    After reading this presentation called "Rust out your C", I got excited. It *is* possible to port C code to Rust, small bits at a time! You rewrite some functions in Rust, make them linkable to the C code, and keep calling them from C as usual. The contortions you need to do to make C types accessible from Rust are no worse than for any other language.

    I'm going to use librsvg as a testbed for this.

    Librsvg is an old library. It started as an experiment to write a SAX-based parser for SVG ("don't load the whole DOM into memory; instead, stream in the XML and parse it as we go"), and a renderer with the old libart (what we used in GNOME for 2D vector rendering before Cairo came along). Later it got ported to Cairo, and that's the version that we use now.

    Outside of GNOME, librsvg gets used at Wikimedia to render the SVGs all over Wikipedia. We have gotten excellent bug reports from them!

    Librsvg has a bunch of little parsers for the mini-languages inside SVG's XML attributes. For example, within a vector path definition, "M10,50 h20 V10 Z" means, "move to the coordinate (10, 50), draw a horizontal line 20 pixels to the right, then a vertical line to absolute coordinate 10, then close the path with another line". There are state machines, like the one that transforms that path definition into three line segments instead of the PostScript-like instructions that Cairo understands. There are some pixel-crunching functions, like Gaussian blurs and convolutions for SVG filters.
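    To make that mini-language concrete, here is a tiny sketch of how the example path reduces to exactly three line segments. The `Cmd` enum and `to_segments` function are invented for illustration; they are not librsvg's real parser:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Point { x: f64, y: f64 }

// A hypothetical subset of SVG path commands, just enough for the example.
#[derive(Clone, Copy)]
enum Cmd {
    MoveTo(f64, f64), // M: absolute move
    HLineRel(f64),    // h: horizontal line, relative
    VLineAbs(f64),    // V: vertical line to an absolute y coordinate
    ClosePath,        // Z: line back to the subpath start
}

fn to_segments(cmds: &[Cmd]) -> Vec<(Point, Point)> {
    let mut segs = Vec::new();
    let mut cur = Point { x: 0.0, y: 0.0 };
    let mut start = cur;
    for cmd in cmds {
        match *cmd {
            Cmd::MoveTo(x, y) => { cur = Point { x, y }; start = cur; }
            Cmd::HLineRel(dx) => {
                let p = Point { x: cur.x + dx, y: cur.y };
                segs.push((cur, p)); cur = p;
            }
            Cmd::VLineAbs(y) => {
                let p = Point { x: cur.x, y };
                segs.push((cur, p)); cur = p;
            }
            Cmd::ClosePath => { segs.push((cur, start)); cur = start; }
        }
    }
    segs
}

fn main() {
    // "M10,50 h20 V10 Z"
    let segs = to_segments(&[
        Cmd::MoveTo(10.0, 50.0),
        Cmd::HLineRel(20.0),
        Cmd::VLineAbs(10.0),
        Cmd::ClosePath,
    ]);
    assert_eq!(segs.len(), 3); // the three line segments described above
    assert_eq!(segs[0], (Point { x: 10.0, y: 50.0 }, Point { x: 30.0, y: 50.0 }));
    assert_eq!(segs[2].1, Point { x: 10.0, y: 50.0 }); // Z returns to the start
}
```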

    It should be quite possible to port those parts of librsvg to Rust, and to preserve the C API for general consumption.

    Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it's all due to using C. We've gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That's the kind of 1970s bullshit that Rust prevents.

    I also hope that this will make it easier to actually write unit tests for librsvg. Currently we have some pretty nifty black-box tests for the whole library, which essentially take in complete SVG files, render them, and compare the results to a reference image. These are great for smoke testing and guarding against regressions. However, all the fine-grained machinery in librsvg has zero tests. It is always a pain in the ass to make static C functions testable "from the outside", or to make mock objects to provide them with the kind of environment they expect.

    So, on to Rustification!

    I've started with a bit of the code from librsvg that is fresh in my head: the state machine that renders SVG markers.

    SVG markers

    This image with markers comes from the official SVG test suite:

    SVG reference image with markers

    SVG markers let you put symbols along the nodes of a path. You can use them to draw arrows (arrowhead as an end marker on a line), points in a chart, and other visual effects.

    In the example image above, this is what is happening. The SVG defines four marker types:

    • A purple square that always stays upright.
    • A green circle.
    • A blue triangle that always stays upright.
    • A blue triangle whose orientation depends on the node where it sits.

    The top row, with the purple squares, is a path (the black line) that says, "put the purple-square marker on all my nodes".

    The middle row is a similar path, but it says, "put the purple-square marker on my first node, the green-circle marker on my middle nodes, and the blue-upright-triangle marker on my end node".

    The bottom row has the blue-orientable-triangle marker on all the nodes. The triangle is defined to point to the right (look at the bottommost triangles!). It gets rotated 45 degrees at the middle node, and 90 degrees so it points up at the top-left node.

    This was all fine and dandy, until one day we got a bug about incorrect rendering when there are funny paths. What makes a path funny?

    SVG image with funny arrows

    For the code that renders markers, a path is not in the "easy" case when it is not obvious how to compute the orientation of nodes. A node's orientation, when it is well-behaved, is just the average angle of the node's incoming and outgoing lines (or curves). But if a path has contiguous coincident vertices, or stray points that don't have incoming/outgoing lines (imagine a sequence of moveto commands), or curveto commands with Bézier control points that are coincident with the nodes... well, in those cases, librsvg has to follow the spec to the letter, for it says how to handle those things.

    In short, one has to walk the segments away from the node in question, until one finds a segment whose "directionality" can be computed: a segment that is an actual line or curve, not a coincident vertex nor a stray point.

    Librsvg's algorithm has two parts to it. The first part takes the linear sequence of PostScript-like commands (moveto, lineto, curveto, closepath) and turns them into a sequence of segments. Each segment has two endpoints and two tangent directions at those endpoints; if the segment is a line, the tangents point in the same direction as the line. Or, the segment can be degenerate and it is just a single point.

    The second part of the algorithm takes that list of segments for each node, and it does the walking-back-and-forth as described in the SVG spec. Basically, it finds the first non-degenerate segment on each side of a node, and uses the tangents of those segments to find the average orientation of the node.
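    In simplified form, that walking-and-averaging step might look like this in Rust. The types and names here are invented for illustration, and this sketch naively averages angles; librsvg's real code works with full tangent vectors, which behave correctly across the ±π wrap-around:

```rust
// Each entry in `segments` is Some(tangent angle in radians) for a normal
// segment, or None for a degenerate one (stray or coincident points).
// Segment i is assumed to run from node i to node i + 1.
fn find_directionality(segments: &[Option<f64>], node: usize) -> Option<f64> {
    // Walk backwards from the node to the nearest non-degenerate segment...
    let incoming = segments[..node].iter().rev().find_map(|s| *s);
    // ...and forwards for the outgoing side.
    let outgoing = segments[node..].iter().find_map(|s| *s);
    match (incoming, outgoing) {
        (Some(a), Some(b)) => Some((a + b) / 2.0), // naive average of both sides
        (Some(a), None) => Some(a),                // only incoming available
        (None, Some(b)) => Some(b),                // only outgoing available
        (None, None) => None, // no directionality can be computed at all
    }
}

fn main() {
    use std::f64::consts::{FRAC_PI_2, FRAC_PI_4};
    // A horizontal segment followed by a vertical one: the node between
    // them is oriented at 45 degrees.
    assert_eq!(find_directionality(&[Some(0.0), Some(FRAC_PI_2)], 1),
               Some(FRAC_PI_4));
    // A degenerate segment in between is skipped during the walk.
    assert_eq!(find_directionality(&[Some(0.0), None, Some(FRAC_PI_2)], 2),
               Some(FRAC_PI_4));
}
```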

    The path-to-segments code

    In the C code I had this:

    typedef struct {
        gboolean is_degenerate; /* If true, only (p1x, p1y) are valid.  If false, all are valid */
        double p1x, p1y;
        double p2x, p2y;
        double p3x, p3y;
        double p4x, p4y;
    } Segment;

    P1 and P4 are the endpoints of each Segment; P2 and P3 are, like in a Bézier curve, the control points from which the tangents can be computed.

    This translates readily to Rust:

    struct Segment {
        is_degenerate: bool, /* If true, only (p1x, p1y) are valid.  If false, all are valid */
        p1x: f64, p1y: f64,
        p2x: f64, p2y: f64,
        p3x: f64, p3y: f64,
        p4x: f64, p4y: f64
    }

    Then a little utility function:

    /* In C */
    #define EPSILON 1e-10
    #define DOUBLE_EQUALS(a, b) (fabs ((a) - (b)) < EPSILON)
    /* In Rust */
    const EPSILON: f64 = 1e-10;
    fn double_equals (a: f64, b: f64) -> bool {
        (a - b).abs () < EPSILON
    }

    And now, the actual code that transforms a cairo_path_t (a list of moveto/lineto/curveto commands) into a list of segments. I'll interleave C and Rust code with commentary.

    /* In C */
    typedef enum {
        SEGMENT_START,
        SEGMENT_END
    } SegmentState;
    static void
    path_to_segments (const cairo_path_t *path,
                      Segment **out_segments,
                      int *num_segments)
    /* In Rust */
    enum SegmentState {
        Start,
        End
    }
    fn path_to_segments (path: cairo::Path) -> Vec<Segment> {

    The enum is pretty much the same; Rust prefers CamelCase for enums instead of CAPITALIZED_SNAKE_CASE. The function prototype is much nicer in Rust. The cairo::Path is courtesy of gtk-rs, the budding Rust bindings for GTK+ and Cairo and all that goodness.

    The C version allocates the return value as an array of Segment structs, and returns it in the out_segments argument (... and the length of the array in num_segments). The Rust version returns a mentally easier vector of Segment structs.

    Now, the variable declarations at the beginning of the function:

    /* In C */
        int i;
        double last_x, last_y;
        double cur_x, cur_y;
        double subpath_start_x, subpath_start_y;
        int max_segments;
        int segment_num;
        Segment *segments;
        SegmentState state;
    /* In Rust */
        let mut last_x: f64;
        let mut last_y: f64;
        let mut cur_x: f64;
        let mut cur_y: f64;
        let mut subpath_start_x: f64;
        let mut subpath_start_y: f64;
        let mut has_first_segment : bool;
        let mut segment_num : usize;
        let mut segments: Vec<Segment>;
        let mut state: SegmentState;

    In addition to having different type names (double becomes f64), Rust wants you to say when a variable will be mutable, i.e. when it is allowed to change value after its initialization.

    Also, note that in C there's an "i" variable, which is used as a counter. There isn't a similar variable in the Rust version; there, we will use an iterator. Also, in the Rust version we have a new "has_first_segment" variable; read on to see its purpose.

        /* In C */
        max_segments = path->num_data; /* We'll generate maximum this many segments */
        segments = g_new (Segment, max_segments);
        *out_segments = segments;
        last_x = last_y = cur_x = cur_y = subpath_start_x = subpath_start_y = 0.0;
        segment_num = -1;
        state = SEGMENT_END;
        /* In Rust */
        cur_x = 0.0;
        cur_y = 0.0;
        subpath_start_x = 0.0;
        subpath_start_y = 0.0;
        has_first_segment = false;
        segment_num = 0;
        segments = Vec::new ();
        state = SegmentState::End;

    No problems here, just initializations. Note that in C we pre-allocate the segments array with a certain size. This is not the actual minimum size that the array will need; it is just an upper bound that comes from the way Cairo represents paths internally (it is not possible to compute the minimum size of the array without walking it first, so we use a good-enough value here that doesn't require walking). In the Rust version, we just create an empty vector and let it grow as needed.
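    Incidentally, Rust can pre-allocate too: Vec::with_capacity() reserves an upper bound up front like the C code does, while still letting the vector grow safely if the guess turns out to be too small. A quick sketch (not librsvg's actual code):

```rust
fn main() {
    // Reserve room for 16 elements up front, like the C version's g_new()
    // upper bound; the vector will still grow automatically past this hint.
    let mut segments: Vec<f64> = Vec::with_capacity(16);

    assert!(segments.capacity() >= 16);
    assert_eq!(segments.len(), 0); // capacity is reserved, length is still 0

    segments.push(1.0);
    assert_eq!(segments.len(), 1);

    println!("len = {}, cap = {}", segments.len(), segments.capacity());
}
```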

    Note also that the C version initializes segment_num to -1, while the Rust version sets has_first_segment to false and segment_num to 0. Read on!

        /* In C */
        for (i = 0; i < path->num_data; i += path->data[i].header.length) {
            last_x = cur_x;
            last_y = cur_y;
        /* In Rust */
        for cairo_segment in path.iter () {
            last_x = cur_x;
            last_y = cur_y;

    We start iterating over the path's elements. Cairo, which is written in C, has a peculiar way of representing paths. path->num_data is the length of the path->data array. That array has elements in path->data[] that can be either commands, or point coordinates. Each command then specifies how many elements you need to "eat" to take in all its coordinates. Thus the "i" counter gets incremented on each iteration by path->data[i].header.length; this is the "how many to eat" magic value.

    The Rust version is more civilized. Get a path.iter() which feeds you Cairo path segments, and boom, you are done. That civilization is courtesy of the gtk-rs bindings. Onwards!

        /* In C */
            switch (path->data[i].header.type) {
            case CAIRO_PATH_MOVE_TO:
                segment_num++;
                g_assert (segment_num < max_segments);
        /* In Rust */
            match cairo_segment {
                cairo::PathSegment::MoveTo ((x, y)) => {
                    if has_first_segment {
                        segment_num += 1;
                    } else {
                        has_first_segment = true;
                    }

    The C version switch()es on the type of the path segment. It increments segment_num, our counter-of-segments, and checks that it doesn't overflow the space we allocated for the results array.

    The Rust version match()es on the cairo_segment, which is a Rust enum (think of it as a tagged union of structs). The first match case conveniently destructures the (x, y) coordinates; we will use them below.

    If you recall from the above, the C version initialized segment_num to -1. This code for MOVE_TO is the first case in the code that we will hit, and that "segment_num++" causes the value to become 0, which is exactly the index in the results array where we want to place the first segment. Rust *really* wants you to use a usize value to index arrays ("unsigned size"). I could have used a signed size value starting at -1 and then incremented it to zero, but then I would have to cast it to unsigned — which is slightly ugly. So I introduce a boolean variable, has_first_segment, and use that instead. I think I could refactor this to have another state in SegmentState and remove the boolean variable.
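    For comparison, this is roughly what the signed-index alternative would look like; the "as usize" cast on every array access is the ugliness being avoided (a hypothetical sketch, not the real code):

```rust
fn main() {
    let segments = vec![10, 20, 30];

    // Start at -1 like the C version; the first "increment" lands on index 0.
    let mut segment_num: i64 = -1;
    segment_num += 1;

    // Rust only indexes with usize, so a signed counter needs a cast every time.
    let first = segments[segment_num as usize];
    assert_eq!(first, 10);

    println!("first segment: {}", first);
}
```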

            /* In C */
                g_assert (i + 1 < path->num_data);
                cur_x = path->data[i + 1].point.x;
                cur_y = path->data[i + 1].point.y;
                subpath_start_x = cur_x;
                subpath_start_y = cur_y;
             /* In Rust */
                    cur_x = x;
                    cur_y = y;
                    subpath_start_x = cur_x;
                    subpath_start_y = cur_y;

    In the C version, I assign (cur_x, cur_y) from the path->data[], but first ensure that the index doesn't overflow. In the Rust version, the (x, y) values come from the destructuring described above.

            /* In C */
                segments[segment_num].is_degenerate = TRUE;
                segments[segment_num].p1x = cur_x;
                segments[segment_num].p1y = cur_y;
                state = SEGMENT_START;
             /* In Rust */
                    let seg = Segment {
                        is_degenerate: true,
                        p1x: cur_x,
                        p1y: cur_y,
                        p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0 // these are set in the next iteration
                    };

                    segments.push (seg);
                    state = SegmentState::Start;

    This is where my lack of Rust idiomatic skills really starts to show. In C I put (cur_x, cur_y) in the (p1x, p1y) fields of the current segment, and since it is_degenerate, I'll know that the other p2/p3/p4 fields are not valid — and like any C programmer who wears sandals instead of steel-toed boots, I leave their memory uninitialized. Rust doesn't want me to have uninitialized values EVER, so I must fill a Segment structure and then push() it into our segments vector.
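    A way to avoid spelling out every don't-care field by hand is to derive Default and use struct update syntax. This is just a sketch of the idea with a hypothetical Segment, not what librsvg does:

```rust
// A hypothetical Segment with the same fields as in the post.
#[derive(Default, Debug)]
struct Segment {
    is_degenerate: bool,
    p1x: f64, p1y: f64,
    p2x: f64, p2y: f64,
    p3x: f64, p3y: f64,
    p4x: f64, p4y: f64,
}

fn main() {
    // Only name the fields we care about; ..Default::default() zero-fills the rest.
    let seg = Segment {
        is_degenerate: true,
        p1x: 1.5,
        p1y: 2.5,
        ..Default::default()
    };

    assert!(seg.is_degenerate);
    assert_eq!(seg.p2x, 0.0);
    assert_eq!(seg.p4y, 0.0);

    println!("{:?}", seg);
}
```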

    So, the C version really wants to have a segment_num counter where I can keep track of which index I'm filling. Why is there a similar counter in the Rust version? We will see why in the next case.

            /* In C */
            case CAIRO_PATH_LINE_TO:
                g_assert (i + 1 < path->num_data);
                cur_x = path->data[i + 1].point.x;
                cur_y = path->data[i + 1].point.y;
                if (state == SEGMENT_START) {
                    segments[segment_num].is_degenerate = FALSE;
                    state = SEGMENT_END;
                } else /* SEGMENT_END */ {
                    segment_num++;
                    g_assert (segment_num < max_segments);
                    segments[segment_num].is_degenerate = FALSE;
                    segments[segment_num].p1x = last_x;
                    segments[segment_num].p1y = last_y;
                }

                segments[segment_num].p2x = cur_x;
                segments[segment_num].p2y = cur_y;
                segments[segment_num].p3x = last_x;
                segments[segment_num].p3y = last_y;
                segments[segment_num].p4x = cur_x;
                segments[segment_num].p4y = cur_y;
             /* In Rust */
                cairo::PathSegment::LineTo ((x, y)) => {
                    cur_x = x;
                    cur_y = y;
                    match state {
                        SegmentState::Start => {
                            segments[segment_num].is_degenerate = false;
                            state = SegmentState::End;
                        },

                        SegmentState::End => {
                            segment_num += 1;

                            let seg = Segment {
                                is_degenerate: false,
                                p1x: last_x,
                                p1y: last_y,
                                p2x: 0.0, p2y: 0.0, p3x: 0.0, p3y: 0.0, p4x: 0.0, p4y: 0.0  // these are set below
                            };

                            segments.push (seg);
                        }
                    }

                    segments[segment_num].p2x = cur_x;
                    segments[segment_num].p2y = cur_y;
                    segments[segment_num].p3x = last_x;
                    segments[segment_num].p3y = last_y;
                    segments[segment_num].p4x = cur_x;
                    segments[segment_num].p4y = cur_y;

    Whoa! But let's pick it apart bit by bit.

    First we set cur_x and cur_y from the path data, as usual.

    Then we roll the state machine. Remember we got a LINE_TO. If we are in the state START ("just have a single point, possibly a degenerate one"), then we turn the old segment into a non-degenerate, complete line segment. If we are in the state END ("we were already drawing non-degenerate lines"), we create a new segment and fill it in. I'll probably change the names of those states to make it more obvious what they mean.

    In C we had a preallocated array for "segments", so the idiom to create a new segment is simply "segment_num++". In Rust we grow the segments array as we go, hence the "segments.push (seg)".

    I will probably refactor this code. I don't like it that it looks like

        case move_to:
            start possibly-degenerate segment
        case line_to:
            are we in a possibly-degenerate segment?
                yes: make it non-degenerate and remain in that segment...
                no: create a new segment, switch to it, and fill its first fields...
            ... for both cases, fill in the last fields of the segment

    That is, the "yes" case fills in fields from the segment we were handling in the *previous* iteration, while the "no" case fills in fields from a *new* segment that we created in the present iteration. That asymmetry bothers me. Maybe we should build up the next-segment's fields in auxiliary variables, and only put them in a complete Segment structure once we really know that we are done with that segment? I don't know; we'll see what is more legible in the end.
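    For instance, the symmetric shape hinted at above could accumulate the current segment's endpoints in plain variables and only construct a Segment once it is complete. The names here are hypothetical, and the Segment is simplified to two points:

```rust
// Hypothetical sketch: build up a segment's endpoints in locals and
// construct the Segment only once we know the segment is complete.
#[derive(Debug)]
struct Segment {
    is_degenerate: bool,
    p1: (f64, f64),
    p2: (f64, f64),
}

fn finish_segment(segments: &mut Vec<Segment>, p1: (f64, f64), p2: (f64, f64)) {
    // Both the "degenerate point" and "real line" cases funnel through here,
    // so neither case reaches back into a previously pushed Segment.
    segments.push(Segment {
        is_degenerate: p1 == p2,
        p1,
        p2,
    });
}

fn main() {
    let mut segments = Vec::new();

    finish_segment(&mut segments, (0.0, 0.0), (0.0, 0.0)); // a lone move_to
    finish_segment(&mut segments, (0.0, 0.0), (1.0, 1.0)); // a real line

    assert!(segments[0].is_degenerate);
    assert!(!segments[1].is_degenerate);

    println!("{} segments", segments.len());
}
```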

    The other two cases, for CURVE_TO and CLOSE_PATH, are analogous, except that CURVE_TO handles a bunch more coordinates for the control points, and CLOSE_PATH goes back to the coordinates from the last point that was a MOVE_TO.

    And those tests you were talking about?

    Well, I haven't written them yet! This is my very first Rust code, after reading a pile of getting-started documents.

    Already in the case for CLOSE_PATH I think I've found a bug. It doesn't really create a segment for multi-line paths when the path is being closed. The reftests didn't catch this because none of the reference images with SVG markers uses a CLOSE_PATH command! The unit tests for this path_to_segments() machinery should be able to find this easily, and closer to the root cause of the bug.

    What's next?

    Learning how to link and call that Rust code from the C library for librsvg. Then I'll be able to remove the corresponding C code.

    Feeling safer already?

New flatpak command line

Today I released version 0.6.13 of flatpak which has a lot of nice new features. One that I’d like to talk a bit about is the new command line argument format.

The flatpak command line was always a bit low-level and hard to use. Partly this was due to a lack of focus on it, and partly due to the fact that the expected UI for most people would be a graphical user interface like gnome-software. With this new release that has changed, however, and flatpak is now much nicer to use from the command line.

So, what is new?

Flatpakrepo files

Before you can really use flatpak you have to configure it so it can find the applications and runtimes. This means setting up one or more remotes. Historically you did this by manually specifying all the options for the remote as arguments to the flatpak remote-add command. To make this easier we added a file format (.flatpakrepo) to describe a remote, and made it easy to use it.

The new canonical example to configure the gnome stable repositories is:

$ flatpak remote-add --from gnome \
$ flatpak remote-add --from gnome-apps \

Alternatively you can just click on the above links and they should open in gnome-software (if you have new enough versions installed).

Multiple arguments to install/update/uninstall

Another weakness of the command line has been that commands like install, uninstall and update only accepted one application name argument (and optionally a branch name). This made it hard to install multiple apps in one go, and the separate branch name made it hard to cut-and-paste from the output of e.g. flatpak list.

Instead of the separate branch name all the commands now take multiple “partial refs” as arguments. These are partial versions of the OSTree ref format that flatpak uses internally. So, for an internal reference like app/org.gnome.gedit/x86_64/stable, one can now specify one of these:


And flatpak will automatically fill in the missing part in a natural way, and give a detailed error if you need to specify more details:

$ flatpak install gnome org.gnome.Platform
error: Multiple branches available for org.gnome.Platform, you must specify one of: org.gnome.Platform//3.20, org.gnome.Platform//3.22, org.gnome.Platform//3.16, org.gnome.Platform//3.18

Automatic dependencies

The other problem with the CLI has been that it is not aware of runtime dependencies. To run an app you generally had to know what runtime it used and install that yourself. The idea here was that the commandline should be simple, non-interactive and safe. If you instead use the graphical frontend it will install dependencies interactively so you can control where things get installed from.

However, this just made the CLI a pain to use, and you could easily end up in situations where things didn’t work. For instance, if you updated gedit from 3.20 to 3.22 it suddenly depended on a new runtime version, and if you weren’t aware of this it probably just stopped working.

Of course, we still can’t just install dependencies from wherever we find them, because that would be a security issue (any configured remote could supply a runtime for any application). So, the solution here is for flatpak to become interactive:

$ flatpak update org.gnome.gedit
Looking for updates...
Required runtime for org.gnome.gedit/x86_64/stable (org.gnome.Platform/x86_64/3.22) is not installed, searching...
Found in remote gnome, do you want to install it? [y/n]: y
Installing: org.gnome.Platform/x86_64/3.22 from gnome
Installing: org.gnome.Platform.Locale/x86_64/3.22 from gnome
Updating: org.gnome.gedit/x86_64/stable from gnome-apps
Updating: org.gnome.gedit.Locale/x86_64/stable from gnome-apps

If you have remotes you never want to install dependencies from, you can add them with --no-use-for-deps, and they will not be used. Flatpakrepo files for app-only repositories should set NoDeps=true.

Note that this is not a package-system-like dependency solver that can solve sudoku. It is still a very simple two-way split.

Flatpakref files

The primary way Flatpak is meant to be used is that you configure a few remotes that have most of the applications that you use, then you install from these either on the command line or via a graphical installer. However, sometimes it is nice to have a single link you can put on a website to install your application. Flatpak now supports that via .flatpakref files. These are very similar to flatpakrepo files, in that they describe a repository, but they additionally specify a particular application in that repository which will be installed.

Such files can be installed by just clicking on them in your web-browser (which will open them for installation in gnome-software) or on the command line:

flatpak install --from

This will try to install the required runtime, so you first need to add the remote with the runtimes.

Launching runtimes

During development and testing it is common to launch commands in a sandbox to experiment with how the runtime works. This was always possible by running a custom command in an application that uses the runtime, like so:

$ flatpak run --command=sh org.gnome.gedit
sh-4.3$ ls /app
bin lib libexec manifest.json share

You can even specify a custom runtime with --runtime. However, there really should be no need to have an application installed to do this, so the new version allows you to directly run a runtime:

$ flatpak run org.gnome.Platform//3.22
sh-4.3$ ls /app

2015 Annual Reports mailed out

While the latest GNOME annual reports sold like hotcakes at GUADEC, there is still a need to send some of them by snailmail, like I did last year. I was under a pretty big rush from July to mid-October, and since nobody was available to help me determine the list of recipients, I had to wait for the end of the rush to allow myself to spend time devising that list. And so I did, this month.


Packaged reports just before I shipped them out.

The shipping cost will be roughly half of last year’s, but it still turned out to be a surprising amount of copies to be sent out. Thanks to Adelia, Cassandra and Nuritzi (from the Engagement team) for writing postcards to accompany the reports, that saved me quite a bit of time!

You thought the annual reports weighed 3 MB? You were wrong. They weigh 3.8 kg.


October 24, 2016

GNOME Core Apps Hackfest

It’s official now, we will have a GNOME Core Apps hackfest happening in Berlin, Germany on November 25-27!


The hackfest aims to raise the standard of the overall core experience in GNOME; this includes the core apps like Documents, Files, Music, Photos and Videos. In particular, we want to identify missing features and sore points that need to be addressed, as well as the interaction between apps and the desktop.

Pushing the core apps beyond the limits of the framework and making them excellent will be helpful not only for the GNOME desktop experience, but also for 3rd party apps: we will implement what they are missing, and the core apps will serve as an example of what an app could be.

In case you are interested, here is the wiki page with more information. All of you are welcome to attend.

Thanks to the Kinvolk folks, who will host us in their office!

This Week in GTK+ – 21

In this last week, the master branch of GTK+ has seen 335 commits, with 13631 lines added and 37699 lines removed.

Planning and status
  • Emmanuele merged his wip/ebassi/gsk-renderer branch into the master branch, effectively adding GSK to the API; there is an ongoing effort in improving its performance profile, as well as porting more widgets to the GskRenderNode API
  • Benjamin added new GdkWindow constructors for input and child windows, which will eventually replace the generic gdk_window_new() API
  • Timm removed more deprecated API from GTK+
  • Timm also replaced all the get_preferred_* family of virtual functions with a single GtkWidgetClass.measure virtual function, thus simplifying the implementation of widgets
  • Matthias started a new migration guide for application developers that wish to port their code from GTK+ 3.x to GTK+ 4.x
  • Chun-wei Fan updated the Windows backend of GDK following the deprecations and API changes
  • The GTK+ road map is available on the wiki.
Notable changes
  • GDK now tries to do a better job at detecting if a GL context is using OpenGL ES, a core OpenGL profile, or a legacy OpenGL profile.
  • New deprecations in the gtk-3-22 branch for API that has been removed from the master branch:
    • gdk_window_set_debug_updates() — will be replaced by appropriate rendering in GSK
    • GtkContainer:child — no replacement, as it’s just a C convenience property for use in variadic arguments functions
    • gdk_window_set_background* family of functions — no replacement
    • gdk_window_set_wmclass() — no replacement, as it was already marked as “do not use”
    • gdk_drag_dest_set_proxy() — no replacement
    • various GdkScreen API — replaced by GdkMonitor
  • Jaime Velasco Juan vastly improved the “native” Windows theme in the gtk-3-22 branch, to better match the Windows 7 visuals
  • Lapo Calamandrei has fixed the appearance of circular buttons in Adwaita
Bugs fixed
  • 772922 GtkMenu: Try using gdk_window_move_to_rect() more often
  • 773029 [gucharmap] style-set signal problem
  • 773246 Typo in css color definitions documentation
  • 773180 Don’t second-guess whether our GDK GL context is GLES
  • 773113 tests: fix clipboard test by loading correct icon
  • 771694 GtkSourceView completion popup window not shown, no grabbed seat found
  • 771205 Buttons with circular style class have a suddenly clipped shadow at the bottom
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

October 23, 2016


Say you have a shared library with versioned symbols that has the function init(int *param) defined in two versions, where new_init is made the default for init (that's what the @@ means):

#include "shared1.h"
void new_init (int *param)
{
    *param = 0;
}

void old_init (int *param)
{
    *param = -1;
}
__asm__(".symver new_init,init@@VERS_2");
__asm__(".symver old_init,init@VERS_1");

Say you have a second shared library that provides a function do_something() that uses init from the first shared library:

#include "shared1.h"
int do_something ()
{
    int var;

    init (&var);
    return var;
}

And you have an app that just prints the result:

#include "shared2.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    printf ("Result: %d\n", do_something ());
    return 0;
}

And all of this is cobbled together with the following Makefile:

all: app

libshared.so: shared1.c shared1.h
    cc -o $@ $< -shared -fPIC -Wl,--version-script=version.script

libmoreshared.so: shared2.c shared2.h
    cc -o $@ shared2.c -shared -fPIC

app: app.c libshared.so libmoreshared.so
    cc -o $@ -L. app.c -lmoreshared -lshared

What’s the output of running the app?


Update: Try to answer it without compiling and running it at first.
Update2: I forgot the version.script, sorry.

VERS_1 {
};

VERS_2 {
} VERS_1;

my voxxed days belgrade talk

i often have the impression that the current state of the computer world is the climax of our great progress. nevertheless i also have the impression that all these advancements don't really advance humanity as a whole. i have written a lot about these issues in recent years and if you follow my summing up series (shameless plug: put your email in the box below this article to get it and much more directly to your inbox) you'll find many people following similar lines of thought. it mostly comes down to the notion that we're not really thinking about how we can use the medium computer to augment our human capabilities. this was my starting point a few years ago.

in the mean time i was invited by the kind folks of voxxed days belgrade to speak about this topic. i gladly accepted and luckily i did so.

it was, simply put, an amazing experience: one of the biggest and most inspiring technology & entrepreneurship conferences in eastern europe, excellent speakers from all over the world, about 800+ participants, and well... me in the middle giving a talk about how we can augment our human capabilities with the use of computers.

i have spent an insane amount of time preparing this talk over the last year and was very happy with the feedback i got. actually, scratch that. i was exceptionally happy as my talk was voted best rated talk of the conference. phew!

if you're interested in the slides along with my speaker notes and recommended reading material, visit this link. feel free to share it and if you have a question, feedback or critique i'd love to hear from you!

lastly i want to thank daniel bader, jan peuker, marian edmunds and philipp pamer for many hours of great conversation about these issues and for feedback while these thoughts were solidifying.

October 22, 2016

JCON – Part 2

I just finished the JCON_EXTRACT() support and it is already making my consumption of JSON easier. Here is an example:

g_autoptr(JsonNode) node = NULL;
JsonArray *ar = NULL;
gboolean success;

node = JCON_NEW (
  "foo", "{",
    "bar", "[", JCON_INT (1), JCON_INT (2), "]",
  "}"
);

success = JCON_EXTRACT (node,
  "foo", "{",
    "bar", JCONE_ARRAY (ar),
  "}"
);

And for now, you can just copy/paste the jcon.c and jcon.h files into your project, but I’d expect to come up with a patch we can push into json-glib at some point. It really belongs there.

Tweaking GNOME Shell with extensions

Actually, this post should have been a part (the last one) of my "Need for PC" blog posts (1, 2, 3), but it also deserves a separate blog post of its own.

So, I have a fresh install on a fresh PC, what do I do next (and why)? Here's a list of the GNOME Shell extensions I use, and the (highly opinionated) motivation for me using them.
    • dash to dock - I need an always-visible or intelligent-autohide icon-only window list to be able to see my open windows all the time, and to launch my favorites. That is an old habit of mine, but I simply can't live without it. I usually set dash to dock to expand vertically on the left side, as I come from a Unity world, and this made the transition easier, but with the settings available you can make yourself comfortable even if you're transitioning from MacOS X or Windows 7-8-10, with a couple of clicks.
    • alternatetab - I need window switching with Alt-Tab to any of my running application windows. I don't want to think about "which workspace is this window on?" or "do I want to switch to another instance of the same app or another app?". It helps me tidy up my window list from time to time, and keeps me productive. coverflow alt-tab is another option here, for people who like eye-candy; for me the animations and reflections are a bit too much, but if you like that, it's also a good replacement for the default tabbing behaviour.
    • applications menu - I rarely use it, as I mostly search for apps in GNOME Shell, but the Activities button is not for me: I access the overview using the META key, and removing the Activities button leaves an empty space in the top left corner. It's the perfect place for a "Start menu". The applications menu is a good option, installed by default for the GNOME classic session, but if you need a more complex menu, with search, recents, web bookmarks, places, and a lot more (resembling the Start menu, but without ads ;) ), then gno-menu is the way to go.
    • pump up/down the volume - I think this habit of mine also comes from Unity: I middle-click the sound icon to mute, and I would like to see visual feedback when adjusting the volume by scrolling over the sound icon. A small tooltip, which I have to stare at to read, doesn't count here. Better volume indicator does exactly what I need, no less, no more. Just perfect. I just wish it were the default GNOME Shell behaviour.
    • selecting sound output device - I usually have multiple possible output devices (speakers and headphones) and multiple possible input devices (webcam microphone, jack microphone, etc), and I need to switch between these: switch to speakers/headphones fast when receiving a call, switch the microphone. Opening the sound settings and selecting the input and output devices would take too much time, but "there's an app for that" (understand: extension), called Sound output device chooser, which can also choose the sound input device, and it's nicely integrated with the sound menu. Perfect for the job.
    • monitoring the system - information at a glance about my computer, CPU usage. I prefer to have a chart in the top bar, so there's only one option. This plugin has lots of settings, the preferences are kind of chaotic, but once you set it up, it just works. I only have a 200 px wide CPU chart in my top bar; that's all I need to see if something is misbehaving (firefox/flash/gnome-shell/some others happen to use 50%+ CPU just because they can).
    • tray area - although tray icons have been "deprecated" quite some time ago, there are some applications which can not/will not forget them. The most notable ones are Skype and Dropbox. The fallback notification area (bottom left corner) kind of conflicts with my left-side expanded dash to dock extension, so I use topIcons plus to move the icons back to the right corner.
    • top bar dropdown arrows - with Application menu/Gno-Menu, an application and a keyboard layout switcher, the number of small triangles eating up space in the top bar goes up to 4. I understand that I have to know that the menu, the application name (appmenu), the keyboard layout switcher and the power/sound/network menu are clickable and will expand on click, but the triangles are too much. So, I remove the dropdown arrows.
    These tend to be the most important ones. A short list of other extensions I use, but which are not a bare necessity:
    • Freon - for keeping an eye on the temperatures/fan speeds of your PC
    • Switcher - keyboard-only application launcher/switcher
    • Dynamic panel transparency - for making the top bar transparent without full-screen apps, but making it solid if an app is maximized. Eye-candy, but looks nice (ssssht, secret - it might become the default behavior). It would be even nicer if it could also affect dash to dock.
    With these tweaks, I can use GNOME Shell and be fairly productive. How about you? Which extensions are you using? What would you change in GNOME Shell?

GTK+ happenings

I haven’t written about GTK+ development in some time. But now there are some exciting things happening that are worth writing about.


Back in June, a good number of GTK+ developers came together for a hackfest in Toronto. It was a very productive gathering. One of the topics we discussed there was the (lack of) stability of GTK+ 3 and versioning. We caused a bit of a firestorm by blogging about this right away… so we went back to the drawing board and had another long discussion about the pros and cons of various versioning schemes at GUADEC.

GTK+ BOF in Karlsruhe

The final, agreed-on plan was published on the GTK+ blog, and you can read it there.


Fast-forward to today, and we’ve made good progress on putting this plan into place.

GTK+ has been branched for 3.22, and all future GTK+ 3 releases will come from this branch. This is very similar to GTK+ 2, where we have the forever-stable 2.24 branch. We plan to maintain the 3.22 branch for several years, so applications can rely on a stable GTK+ 3.

One activity that you can see in the branch currently is that we are deprecating APIs that will go away in GTK+ 4. Most deprecations have been in place for a while (some even from 3.0!), but some functions have just been forgotten. Marking them as deprecated now will make it easier to port to GTK+ 4 in the future. Keep in mind that deprecations are an optional service – you don’t have to rush to act on them unless you want to port to the next version.

To avoid unnecessary heartburn and build breakage, we’ve switched jhbuild, GNOME Continuous and the flatpak runtimes over to using the 3.22 branch before opening the master branch for new development, and did the necessary work to make the two branches parallel-installable.

With all these preparations in place, Benjamin and Timm went to work and did a big round of deprecation cleanup. Altogether, this removed some 80,000 lines of code. Next, we’ve merged Emmanuele’s GSK work. And there is a lot more work queued up, from modernizing the GDK layer, to redoing input handling, to building with meson.

The current git version of GTK+ calls itself 3.89, and we’re aiming to do a 3.90 release in spring, ideally keeping the usual 6-month cadence.

…and you

We hope that at least some of the core GNOME applications will switch to using 3.90 by next spring, since we need testing and validation. But… currently things are still a bit rough in master. The GSK port will need some more time to shake out rendering issues and make it as fast as it should be.

Therefore, we recommend that you stick with the 3.22 branch until we do a 3.89.1 release. By that time, the documentation should also have a 3 → 4 migration guide to help you with porting.

If you are eager to get ready for GTK+ 4 now, you can prepare your application by eliminating the deprecations that show up when you build against the latest 3.22 release.


This is an exciting time for GTK+! We will post regular updates as things are landing, but just following the weekly updates on the GTK+ blog should give you a good idea of what is going on.

Fixing the IoT isn't going to be easy

A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

              Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.

              Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

              If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered no matter how old the device is is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

              We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

              Right. I'm off to portscan another smart socket.

              [1] UDP connection refused messages are typically ratelimited to one per second, so it'll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.
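              The arithmetic behind that estimate is simple: a closed UDP port is usually detected via the ICMP port-unreachable reply, and with those replies rate-limited to roughly one per second, a sweep of the full port range crawls. A hedged back-of-the-envelope sketch (not a scanning tool; the one-per-second figure is the typical default, not a universal constant):

```python
# Why a full UDP port scan takes "almost a day": hosts rate-limit
# ICMP port-unreachable replies, so a scanner that waits for each
# reply is throttled to roughly one port per second.
UDP_PORTS = 65535           # full 16-bit port range
REPLIES_PER_SECOND = 1.0    # typical ICMP rate limit (assumption)

scan_hours = UDP_PORTS / REPLIES_PER_SECOND / 3600
print(f"Full UDP sweep at the rate limit: ~{scan_hours:.1f} hours")
```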

              [2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn't left in release builds, but ah well.

              [3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign.


              October 21, 2016

              Need for PC - Part 3 - Setting up the system

              As promised, after a long wait, here are some details about the operating system and software I have installed from day 0. This is the shortlist I usually install on each of my computers, so I will also provide a short "why" for each bullet.
              A side note: although I tend to use the command line a lot, this setup involves only a single cut-and-paste terminal command; the rest is done entirely through the graphical interface.
                1. Base system: Fedora (latest release of Workstation - 24 at installation time).
                  Reasons for choosing Fedora:
                  • user-friendly and developer-friendly
                  • includes latest stable GNOME stack - contains latest bugfixes and latest features - relevant also from both user and GNOME developer perspective
                  • most developer tools I use are bundled by default
                2. Fedy: a simple tool for configuring Fedora and installing the proprietary software I need to use.
                  The items I always install from Fedy:
                  • Archive formats - support for RAR and the likes, not installed by default
                  • Multimedia codecs - support for audio and video formats, MP3 and the likes
                  • Steam - for the child inside me
                  • Adobe Flash - I wish this wasn't necessary, but sometimes it is
                  • Better font rendering - this could also be default, and may become obsolete in the near future
                  • Disk I/O scheduler - advertised as a performance boost for SSDs and HDDs
                3. Media players
                  • Kodi - the media player I install on all my devices, be it tablet, PC, laptop, or Raspberry Pi - extensible, with support for library management, sharing on the local network, remote control, and an "Ambilight" clone for driving RGB LEDs behind my TV
                  • VLC - for one-shot video playback - Kodi is the best, but too heavy for basic video playback
                  • Audacious - for one-shot audio playback and playing sets of songs - I grew up with WinAmp, and Audacious supports classic WinAmp skins while also offering a standard GTK interface
                4. Graphics
                  • GIMP - photo editing and post-processing
                  • Inkscape - vector graphics editor
                  • Inkscape-sozi - extension for Inkscape presentation editing - whenever I need a good presentation, I create a vector-graphics presentation with Inkscape+Sozi, because it's so much better than a plain LibreOffice (PowerPoint-style) presentation - more like Prezi
                With these installed, my system is ready to be used. Time for tweaking the user interface a bit, so next up is customizing GNOME Shell with extensions.

                Consequences of the HACK CAMP 2016 FEDORA + GNOME

                I have been running install parties to promote FEDORA and the GNOME project for the past five years. You can see more details in the posts about the Release Party FEDORA 17, Linux Camp 2012, GNOME PERU 2013, and GNOME PERU 2014:

                For the fifth edition, GNOME PERU 2015, I asked FEDORA to be a sponsor and they kindly accepted; the support was acknowledged in a video and in some pictures, as follows:

                FEDORA and GNOME work differently: in Perú we have many FEDORA ambassadors but only three members of the GNOME Foundation. That is why there have been more FEDORA events than GNOME ones, and I have been trying to reach more GNOME users on FEDORA.

                Whenever I give a local or international workshop, or a class at a university, I spread the FEDORA + GNOME word. Here are some pictures of the DVDs I have given away for free:

                I have also seen many students who now work with Linux-related technology, but I had not brought a single real contributor to either the FEDORA or the GNOME community. I have been involved in many Linux groups in Lima as well, with no success in attracting new contributors. These efforts were not enough…

                This year, 2016, I decided to focus directly on developers: instead of chasing a large audience, I chose to focus on quality.

                I found a group in Lima called Hack Space that trains people to code, and this summer they gathered students from many parts of Peru. These were not just university students; they were extraordinary at self-training and also “leaders” in their communities. I decided to take them to a Hack Camp, and by coincidence HackSpace was also planning this activity! So we joined forces to teach coding on Linux through FEDORA and GNOME. This was possible thanks to Alvaro Concha and Martin Vuelta, and of course the sponsorship of the GNOME Foundation. We had 5 days:

                After the HACK CAMP 2016, I was able to take part in the CONEISC 2016 conference, because two HACK CAMP participants, Jean Pierr Chino Lurita and Wilbur Naike Chiuyari Veramendi, were part of the organization in their local community in Pucallpa. So I traveled to Pucallpa (in the jungle of Peru) to spread the FEDORA + GNOME word (this time sponsored by neither GNOME nor FEDORA).

                Because CONEISC 2016 brought together businesses and professionals related to computer science, there was an opportunity to interact with other entrepreneurs interested in promoting Linux technologies, such as DevAcademy and Backtrack Academy.

                I am arranging special editions related to FEDORA and GNOME. Both channels have achieved more than 2,762 views. DevAcademy has 6,750 subscribers and BacktrackAcademy has reached 5,393.

                • Because some participants at CONEISC 2016 enjoyed my workshop, I was invited to give another talk, which will be presented at the SEISCO event to be held at UPN:

                A couple of events that will also be fruits of the HACK CAMP 2016 are LINUX at UNI (thanks to the efforts of HACK CAMP participants Briggette Roman and Lizeth Quispe) and LINUX at San Marcos (organized by HACK CAMP participants Manuel Merino, Angel Mauricio, and Fiorella Effio).

                And finally, the most important result of all this effort over more than five years is Martin Vuelta, who supported the organization of HACK CAMP 2016! He is a potential contributor to FEDORA, and because we both attended FUDcon LATAM 2016, we also saw an opportunity to build the GNOME PUNO initiative!😀

                There are lots of plans to grow in my local community! Thanks FEDORA and GNOME!❤

                Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: fedora, GNOME, Julita Inca, Julita Inca Chiroque, Peru effort

                October 20, 2016

                GNOME at Linux Install Fest

                On Saturday (a week ago, I know, I know, I had a full week😦 ) there was this event called Linux Install Fest held at my university. It’s an event organized to help first-year students install a Linux distro on their laptops (here at our uni we work almost entirely on Linux, so we need to help those who have never used it set up their distros🙂 ).

                We, the 5 GNOMiEs (Iulian, Gabriel, Alex, Razvan and I), were just a few of the 45 helpers running around, trying to keep up with the huge number of students eager to get a running environment (usually in dual-boot mode) as quickly as possible. There were around 220 students (a new record, actually😀 ) present throughout the day, so it was pretty intense.



                Of course, some of them were really desperate, as we weren’t enough helpers to accommodate the demand:


                We used the L.I.F. as a first chance to promote GNOME in Bucharest. And we did. We handed out fliers (designed by Bastian) and talked about what GNOME is and what we do.

                The two distros that we installed were Ubuntu GNOME and Ubuntu (the standard one with Unity). It’s worth mentioning that some laptops wouldn’t boot the GNOME live USB (for unknown reasons). Some students wanted Ubuntu specifically, while others chose GNOME. There were also some unfortunate laptops that simply wouldn’t allow us to configure a dual-boot environment (and we ended up either installing a virtual machine or giving up entirely😦 ).

                All in all, I’d say it was a pretty good event, and it was also our first try at promoting GNOME here, which worked out pretty well. We are looking forward to the next events (we have some in mind, but we need to dive into the details).

                So stay tuned for more!

                The Electoral College

                Episode 4 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

                A US presidential election year is a wondrous thing. There are few places around the world where the campaign for head of state begins in earnest 18 months before the winner will take office. We are now in the home straight, with the final Presidential debate behind us, and election day coming up in 3 weeks, on the Tuesday after the first Monday in November (this year, that’s November 8th). And as with every election cycle, much time will be spent explaining the electoral college. This great American institution is at the heart of how America elects its President. Every 4 years, there are calls to reform it, to move to a different system, and yet it persists. What is it, where did it come from, and why does it cause so much controversy?

                In the US, people do not vote for the President directly in November. Instead, they vote for electors – people who represent the state in voting for the President. A state gets a number of electoral votes equal to its number of senators (2) and its number of US representatives (this varies based on population). Sparsely populated states like Alaska and Montana get 3 electoral votes, while California gets 55. In total, there are 538 electors, and a majority of 270 electoral votes is needed to secure the presidency. What happens if the candidates fail to get a majority of the electors is outside the scope of this blog post, and in these days of a two party system, it is very unlikely (although not impossible).

                State parties nominate elector lists before the election, and on election day, voters vote for the elector slate corresponding to their preferred candidate. Electoral votes can be awarded differently from state to state. In Nebraska, for example, there are 2 statewide electors for the winner of the statewide vote, and one elector for each congressional district, while in most states, the elector lists are chosen on a winner-take-all basis. After the election, the votes are counted in the local county, and sent to the state’s secretary of state for certification.

                Once the election results are certified (which can take up to a month), the electors meet in their state in mid December to record their votes for president and vice president. Most states (but not all!) have laws restricting who electors are allowed to vote for, making this mostly a ceremonial position. The votes are then sent to the US senate and the national archivist for tabulation, and the votes are then cross referenced before being sent to a joint session of Congress in early January. Congress counts the electoral votes and declares the winner of the presidency. Two weeks later, the new President takes office (those 2 weeks are to allow for the process where no one gets a majority in the electoral college).

                Because it is possible to win heavily in some states with few electoral votes, and lose narrowly in others with a lot of electoral votes, it is possible to win the presidency without having a majority of Americans vote for you (as George W. Bush did in 2000). In modern elections, the electoral college can result in a huge difference of attention between “safe” states, and “swing” states – the vast majority of campaigning is done in only a dozen or so states, while states like Texas and Massachusetts do not get as much attention.

                Why did the founding fathers of the US come up with such a convoluted system? Why not have people vote for the President directly, and have the counts of the states tabulated directly, without the pomp and ceremony of the electoral college vote?

                First, think back to 1787, when the US constitution was written. The founders of the state had an interesting set of principles and constraints they wanted to uphold:

                • Big states should not be able to dominate small states
                • Similarly, small states should not be able to dominate big states
                • No political parties existed (and the founding fathers hoped it would stay that way)
                • Added 2016-10-21: Different states wanted to give a vote to different groups of people (and states with slavery wanted slaves to count in the population)
                • In the interests of having presidents who represented all of the states, candidates should have support outside their own state – in an era where running a national campaign was impractical
                • There was a logistical issue of finding out what happened on election day and determining the winner

                To satisfy these constraints, a system was chosen which ensured that small states had a proportionally bigger say (by giving an electoral vote for each Senator), but more populous states still have a bigger say (by getting an electoral vote for each congressman). In the first elections, electors voted for 2 candidates, of which only one could be from their state, meaning that winning candidates had support from outside their state. The President was the person who got the most electoral votes, and the vice president was the candidate who came second – even if (as was the case with John Adams and Thomas Jefferson) they were not in the same party. It also created the possibility (as happened with Thomas Jefferson and Aaron Burr) that a vice presidential candidate could get the same number of electoral votes as the presidential candidate, resulting in Congress deciding who would be president. The modern electoral college was created with the 12th amendment to the US constitution in 1803.

                Another criticism of direct voting is that populist demagogues could be elected by the people, but electors (being of the political classes) could be expected to be better informed, and make better decisions, about who to vote for. Alexander Hamilton wrote in The Federalist #68 that: “It was equally desirable, that the immediate election should be made by men most capable of analyzing the qualities adapted to the station, and acting under circumstances favorable to deliberation, and to a judicious combination of all the reasons and inducements which were proper to govern their choice. A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations.” These days, most states have laws which require their electors to vote in accordance with the will of the electorate, so that original goal is now mostly obsolete.

                A big part of the reason for having over two months between the election and the president taking office (and prior to 1934, it was 4 months) is, in part, due to the size of the colonial USA. The administrative unit for counting, the county, was defined so that every citizen could get to the county courthouse and home in a day’s ride – and after an appropriate amount of time to count the ballots, the results were sent to the state capital for certification, which could take up to 4 days in some states like Kentucky or New York. And then the electors needed to be notified, and attend the official elector count in the state capital. And then the results needed to be sent to Washington, which could take up to 2 weeks, and Congress (which was also having elections) needed to meet to ratify the results. All of these things took time, amplified by the fact that travel happened on horseback.

                So at least in part, the electoral college system is based on how long, logistically, it took to bring the results to Washington and have Congress ratify them. The inauguration used to be on March 4th, because that was how long it took for the process to run its course. It was not until 1934 and the 20th amendment to the constitution that the date was moved to January.

                Incidentally, two other legally mandated constraints on election day are also based on constraints that no longer apply. Elections happen on a Tuesday because of the need not to interfere with two key events: the sabbath (Sunday) and market day (Wednesday). And elections are held in November primarily so as not to interfere with the harvest. These dates, and the reasoning behind them, were set in stone in 1845 and persist today.

                October 19, 2016

                Attending a FUDcon LATAM 2016

                From my experience I will share my days at FUDcon 2016, held in Puno last week. There were 3 core days, and 2 more days to visit the area.

                Day 0: 

                Martin & I departed from Lima to Puno around noon to meet FEDORA guys such as Chino, Athos, Wolnei and Brian. After walking around to reach our hotel, we found FEDORA posters on the streets. Then we shared a traditional dish called “Cheese Fries” and went together on the “hair comb mission” during the afternoon. I got altitude sickness at night😦

                Day 1:

                The FEDORA contributors were introduced in the auditorium and then the BarCamp took place. I helped count the votes in “Dennis Spanish”. We shared lunch at a fancy restaurant. Then I attended three talks according to my interests: Kiara’s talk, since I do research on IoT; David’s talk, because I am a professor at a university; and Chino Soliard’s talk about SELinux. At night we shared a delicious “pizza de la casa” served by “Sylvia” 😀

                Day 2:

                During the morning I tried to cheer up the audience by giving away FEDORA stickers, pens and pins in the gaps between the presentations of Eduardo Mayorga (Fedora Documentation Project), Jose Reyes (How to start with FEDORA) and Martin Vuelta (Python, Arduino y FEDORA). For lunch I chose “Trucha Frita”, and during the afternoon I presented “Building GNOME with JHBUILD”. I introduced people to the GNOME project and encouraged them to install FEDORA 25. For dinner we shared “Chaufa” and bills from many countries. Thanks to Mayorga for guiding my steps to the FEDORA planet.

                Day 3:

                This was a pretty long day…

                It started with me helping to translate Brian’s talk, as Mayorga and Kiara also did; people were interested in clarifying the differences among FEDORA, CentOS and UBUNTU. Brian told us about the ATOMIC project and kindly explained the current status of FEDORA. (We are growing! At least by one percent🙂)

                During lunch, local people got in touch with me about starting the GNOME PUNO project; I found enthusiastic young people I had not imagined finding! I decided to give some minutes during my second talk, DNF (Dandified YUM), to present GNOME customization of the login screen. After my talk the GNOME PUNO effort got two more members! We decided to document our attempts by recording our tasks on our blogs!😀

                For the closing event we received some presents from the organizers, and we shared our passion for the FEDORA project with the locals. We also celebrated with a typical dance of Puno.

                For dinner we shared “Pollo a la brasa”, and then I helped prepare the barbecue for the next day!… The following pictures belong to Tonet and Abadh:

                Day 4:

                This was another long day, but a touristic one. We started by visiting Sillustani (a pre-Inca cemetery on the shores of Lake Umayo). Then we shared the barbecue with the group for lunch. In the afternoon we went to the Chucuito fertility temple (famous for its phalluses). Special thanks to Gonzalo from FEDORA Chile, who kindly agreed to share his photo:

                At night we visited the ESCUELAB PUNO effort and shared our last dinner together😦

                Day 5:

                Last day of the journey; here is the picture of the survivors and our last breakfast at the CozyLucky hotel. Then we went to the port to visit the floating straw islands of the Uros. The goodbye was recorded in an interview led by Abadh, and we said goodbye to new FEDORA friends! I renewed my vows to FEDORA and GNOME, as they gave me more than Linux knowledge: they gave me unforgettable and unique experiences!

                Thank you so much to the FEDORA community! My impressions:

                • Brian: humble and wise (Escuelab Puno), helpful and mischievous (.planet)
                • Dennis: greedy (3 familiar pizzas and one entire chicken), sweet (Spanish talk)
                • Echevemaster: father of newcomers (teaching packaging to Bolivians)
                • Itamar: quiet, smart and funny (we need to translate his Spanish to Spanish)
                • Mayorga: a genius kid who doesn’t like pictures of himself very much
                • Martin: intelligent, smart, my support and my truly friend
                • Wolnei: only happy with pizza
                • Tonet: funny, clever and hardworking guy!
                • Aly: a little funny, willing to learn and a good support to Tonet
                • Neider: a genuine crazy Linux guy
                • Chino Soliard: cheerful, talkative and very social guy
                • Jose: enthusiastic young Linux guy
                • Kiara: a Linux geek girl
                • Athos: spreading good energy to others
                • Antonio and Gonzalo: great cookers
                • Abadh: hard hard hard working Peruvian guy
                • David: kind, helpful, inspirational and sweet
                • Gonzalo Nina and Rigoberto: smart, practical and responsible guys
                • Yohan and Jose Laya: skillful and talented guys

                And you especially, for taking the time to read my post :3

                Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: dnf, fedora, FEDORA LATAM, FUDCon, FUDcon 2016, gnome puno, install jhbuild, Julita Inca, Julita Inca Chiroque, Puno

                FOSDEM SDN & NFV DevRoom Call for Content

                We are pleased to announce the Call for Participation in the FOSDEM 2017 Software Defined Networking and Network Functions Virtualization DevRoom!

                Important dates:

                • Nov 16: Deadline for submissions
                • Dec 1: Speakers notified of acceptance
                • Dec 5: Schedule published

                This year the DevRoom topics will cover two distinct fields:

                • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
                • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

                We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

                This year, the DevRoom will focus on low-level networking and high performance packet processing, network automation of containers and private cloud, and the management of telco applications to maintain very high availability and performance independent of whatever the world can throw at their infrastructure (datacenter outages, fires, broken servers, you name it).

                A representative list of the projects and topics we would like to see on the schedule are:

                • Low-level networking and switching: IOvisor, eBPF, XDP, Open vSwitch, OpenDataplane, …
                • SDN controllers and overlay networking: OpenStack Neutron, Canal, OpenDaylight, ONOS, Plumgrid, OVN, OpenContrail, Midonet, …
                • NFV Management and Orchestration: Open-O, ManageIQ, Juju, OpenBaton, Tacker, OSM, network management, …
                • NFV related features: Service Function Chaining, fault management, dataplane acceleration, security, …

                Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects.

                Please include the following information when submitting a proposal:

                • Your name
                • The title of your talk (please be descriptive, as titles will be listed alongside around 250 from other projects)
                • Short abstract of one or two paragraphs
                • Short bio (with photo)

                The deadline for submissions is November 16th 2016. FOSDEM will be held on the weekend of February 4-5, 2017 and the SDN/NFV DevRoom will take place on Saturday, February 4, 2017 (Updated 2016-10-20: an earlier version incorrectly said the DevRoom was on Sunday). Please use the following website to submit your proposals: (you do not need to create a new Pentabarf account if you already have one from past years).

                You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: (subscription page:

                – The Networking DevRoom 2016 Organization Team

                October 17, 2016


                GUI toolkits have different ways to lay out the elements that compose an application’s UI. You can go from the fixed layout management — somewhat best represented by the old ‘90s Visual tools from Microsoft; to the “springs and struts” model employed by the Apple toolkits until recently; to the “boxes inside boxes inside boxes” model that GTK+ uses to this day. All of these layout policies have their own distinct pros and cons, and it’s not unreasonable to find that many toolkits provide support for more than one policy, in order to cater to more use cases.

                For instance, while GTK+ user interfaces are mostly built using nested boxes to control margins, spacing, and alignment of widgets, there’s a sizeable portion of GTK+ developers that end up using GtkFixed or GtkLayout containers because they need fixed positioning of child widgets — until they regret it, because now they have to handle things like reflowing, flipping contents in right-to-left locales, or font size changes.

                Additionally, most UI designers do not tend to “think with boxes”, unless it’s for Web pages, and even in that case CSS affords a certain freedom that cannot be replicated in a GUI toolkit. This usually results in engineers translating a UI specification made of ties and relations between UI elements into something that can be expressed with a pile of grids, boxes, bins, and stacks — with all the back and forth, validation, and resources that the translation entails.

                It would certainly be easier if we could express a GUI layout in the same set of relationships that can be traced on a piece of paper, a UI design tool, or a design document:

                • this label is at 8px from the leading edge of the box
                • this entry is on the same horizontal line as the label, its leading edge at 12px from the trailing edge of the label
                • the entry has a minimum size of 250px, but can grow to fill the available space
                • there’s a 90px button that sits between the trailing edge of the entry and the trailing edge of the box, with 8px between either edges and itself

                Sure, all of these constraints can be replaced by a couple of boxes; some packing properties; margins; and minimum preferred sizes. If the design changes, though, like it often does, reconstructing the UI can become arbitrarily hard. This, in turn, leads to pushback to design changes from engineers — and the cost of iterating over a GUI is compounded by technical inertia.

                For my daily work at Endless I’ve been interacting with our design team for a while, and trying to get from design specs to applications more quickly, and with less inertia. Having CSS available allowed designers to be more involved in the iterative development process, but the CSS subset that GTK+ implements is not allowed — for eminently good reasons — to change the UI layout. We could go “full Web”, but that comes with a very large set of drawbacks — performance on low end desktop devices, distribution, interaction with system services being just the most glaring ones. A native toolkit is still the preferred target for our platform, so I started looking at ways to improve the lives of UI designers with the tools at our disposal.

                Expressing layout through easier to understand relationships between its parts is not a new problem, and as such it does not have new solutions; other platforms, like the Apple operating systems, or Google’s Android, have started to provide this kind of functionality — mostly available through their own IDE and UI building tools, but also available programmatically. It’s even available for platforms like the Web.

                What many of these solutions seem to have in common is using more or less the same solving algorithm — Cassowary.

                Cassowary is:

                an incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities. Constraints may be either requirements or preferences. Client code specifies the constraints to be maintained, and the solver updates the constrained variables to have values that satisfy the constraints.

                This makes it particularly suited for user interfaces.

                The original implementation of Cassowary was written in 1998, in Java, C++, and Smalltalk; since then, various other re-implementations surfaced: Python, JavaScript, Haskell, slightly-more-modern-C++, etc.
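                To get a concrete feel for the solver, here is a quick sketch that solves the label/entry/button constraints listed earlier with kiwisolver, one of the Cassowary-derived solvers with Python bindings (a third-party package; the variable names and the 600px/100px sizes are my own example values, not from the original post):

```python
# Solving the label/entry/button constraints with kiwisolver,
# a Cassowary-derived constraint solver (third-party package).
from kiwisolver import Solver, Variable

entry_x = Variable('entry_x')   # leading edge of the entry
entry_w = Variable('entry_w')   # width of the entry

label_w, box_w = 100, 600       # example values: a 100px label in a 600px box

solver = Solver()
# the label sits 8px from the leading edge, the entry 12px after it:
solver.addConstraint(entry_x == 8 + label_w + 12)
# the entry has a minimum size of 250px:
solver.addConstraint(entry_w >= 250)
# a 90px button sits 8px after the entry, with 8px to the trailing edge:
solver.addConstraint(entry_x + entry_w + 8 + 90 + 8 == box_w)

solver.updateVariables()
print(entry_x.value(), entry_w.value())  # → 120.0 374.0
```

                The entry grows to fill whatever space the required constraints leave over, which is exactly the behaviour the prose description above asks for.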

                To that collection, I’ve now added my own — written in C/GObject — called Emeus, which provides a GTK+ container and layout manager that uses the Cassowary constraint solving algorithm to compute the allocation of each child.

                In spirit, the implementation is pretty simple: you create a new EmeusConstraintLayout widget instance, add a bunch of widgets to it, and then use EmeusConstraint objects to determine the relations between children of the layout:

                simple-grid.js (excerpt):
                        let button1 = new Gtk.Button({ label: 'Child 1' });
                        this._layout.pack(button1, 'child1');
                        let button2 = new Gtk.Button({ label: 'Child 2' });
                        this._layout.pack(button2, 'child2');
                        let button3 = new Gtk.Button({ label: 'Child 3' });
                        this._layout.pack(button3, 'child3');
                        // An unset target_object or source_object refers to
                        // the layout itself.
                        this._layout.add_constraints([
                            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button1,
                                                   source_attribute: Emeus.ConstraintAttribute.START,
                                                   constant: -8.0 }),
                            new Emeus.Constraint({ target_object: button1,
                                                   target_attribute: Emeus.ConstraintAttribute.WIDTH,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button2,
                                                   source_attribute: Emeus.ConstraintAttribute.WIDTH }),
                            new Emeus.Constraint({ target_object: button1,
                                                   target_attribute: Emeus.ConstraintAttribute.END,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button2,
                                                   source_attribute: Emeus.ConstraintAttribute.START,
                                                   constant: -12.0 }),
                            new Emeus.Constraint({ target_object: button2,
                                                   target_attribute: Emeus.ConstraintAttribute.END,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_attribute: Emeus.ConstraintAttribute.END,
                                                   constant: -8.0 }),
                            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button3,
                                                   source_attribute: Emeus.ConstraintAttribute.START,
                                                   constant: -8.0 }),
                            new Emeus.Constraint({ target_object: button3,
                                                   target_attribute: Emeus.ConstraintAttribute.END,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_attribute: Emeus.ConstraintAttribute.END,
                                                   constant: -8.0 }),
                            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button1,
                                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                                   constant: -8.0 }),
                            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button2,
                                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                                   constant: -8.0 }),
                            new Emeus.Constraint({ target_object: button1,
                                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button3,
                                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                                   constant: -12.0 }),
                            new Emeus.Constraint({ target_object: button2,
                                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button3,
                                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                                   constant: -12.0 }),
                            new Emeus.Constraint({ target_object: button3,
                                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button1,
                                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
                            new Emeus.Constraint({ target_object: button3,
                                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_object: button2,
                                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
                            new Emeus.Constraint({ target_object: button3,
                                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                                   relation: Emeus.ConstraintRelation.EQ,
                                                   source_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                                   constant: -8.0 }),
                        ]);

                A simple grid
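                As a sanity check of the constraints in simple-grid.js: they pin symmetric 8px outer margins, 12px gaps, and equal widths and heights, so the allocation the solver should produce reduces to simple arithmetic (the 400x300 layout size below is a hypothetical example, not from the post):

```python
# Hand-solving the simple-grid constraints for a hypothetical 400x300
# layout: 8px outer margins, 12px gaps, equal widths and heights.
W, H = 400, 300
top_w    = (W - 8 - 12 - 8) / 2   # button1 and button2 split the top row
row_h    = (H - 8 - 12 - 8) / 2   # both rows get the same height
bottom_w = W - 8 - 8              # button3 spans the bottom row
print(top_w, row_h, bottom_w)     # → 186.0 136.0 384.0
```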

                All those constraint objects obviously look like a ton of code, which is why I added the ability to describe constraints inside GtkBuilder XML:

                centered.ui (excerpt):
                              <constraints>
                                <constraint target-object="button_child"
                                            target-attr="width"
                                            relation="ge"
                                            constant="250"
                                            strength="EMEUS_CONSTRAINT_STRENGTH_STRONG"/>
                                <constraint target-object="button_child"
                                            target-attr="center-x"
                                            source-object="super"
                                            source-attr="center-x"/>
                                <constraint target-object="button_child"
                                            target-attr="center-y"
                                            source-object="super"
                                            source-attr="center-y"/>
                              </constraints>

                Additionally, I’m writing a small parser for the Visual Format Language used by Apple for their own auto layout implementation — even though it does look like ASCII art or Perl format strings, it’s easy to grasp.
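                For illustration, the label/entry/button row described earlier reads roughly like this in VFL (a sketch of mine; check Apple’s grammar for the exact predicate syntax):

```
H:|-8-[label]-12-[entry(>=250)]-8-[button(==90)]-8-|
```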

                The overall idea is to prototype UIs on top of this, and then take advantage of GTK+’s new development cycle to introduce something like this and see if we can get people to migrate from GtkFixed/GtkLayout.

                This Week in GTK+ – 20

                In this last week, the master branch of GTK+ has seen 191 commits, with 4159 lines added and 64248 lines removed.

                Planning and status
                • Benjamin merged his wip/otte/gtk4 branch, which removed various deprecated GDK and GTK+ API, into master
                • Timm merged parts of his wip/baedert/box branch, which removed the deprecated theming engine API and other old style API, into master
                • The GTK+ road map is available on the wiki.
                Notable changes
                • Emmanuele added various compiler flags to the default build in master, in order to catch issues earlier — and, hopefully, before pushing to the remote repository — during development
                • Matthias added new compiler requirements for GTK+; if you wish to build GTK+, your compiler must support at least a known subset of C99
                Bugs fixed
                • 772683 Usage of FALSE instead of gint in glarea demo
                • 772926 shortcutswindow: working version of set_section_name()
                • 772775 menu bindings needs attribute to force LTR for horizontal-buttons display-hint
                • 771320 [Wayland] Maps widget is displayed at wrong position inside gnome-contacts
                • 767713 Fullscreen in wayland is buggy
                Getting involved

                Interested in working on GTK+? Look at the list of bugs for newcomers and join the #gtk+ IRC channel.

                October 16, 2016

                GNOME 3.22/KDE Plasma 5.8 release party in Brno

                Last Thursday, we organized a regular Linux Desktop Meetup in Brno and because two major desktop environments had had their releases recently we also added a release party to celebrate them.

                The meetup itself took place in the Red Hat Lab at FIT BUT (venue of GUADEC 2013) and it consisted of 4 talks. I spoke on new things in GNOME 3.22, our KDE developer Jan Grulich spoke on new things in Plasma 5.8, then Oliver Gutierrez spoke on Fleet Commander and the last talk was given by Lucie Karmova who is using Fedora as a desktop in a public organization and shared her experiences with the Linux desktop.


                After the talks, we moved to the nearby Velorex restaurant to celebrate the releases. The whole event attracted around 25 visitors, which is definitely above the average attendance of our meetups. Let’s see if we can get the same number of people to the meetup next month.

                Last, but not least, I’d like to thank the Desktop QA team of Red Hat for sponsoring the food and drinks at the release party.


                GNOME outreach flyer for local groups and events

                One of my very early contributions to GNOME was a flyer. FOSDEM 2014 was one of the first conferences I attended and with me I had brought printouts of this flyer which we handed out to people from the GNOME stand.


                That was two and a half years ago, and the flyer has started to show its age. GNOME Shell has received a lot of changes style-wise since then. We now have more well-defined brand colors, and I have learned a lot more about communication and visual design. So over the last three days I have worked out a new flyer.

                The flyers as seen when printed (Image by Rares Visalom)

                The flyer focuses on presenting GNOME and our vision, with the ultimate goal of reaching out to users and giving them the initial motivation to participate in our awesome community.



                In the first two sections I focused on communicating GNOME in a tangible way. This is important because I feel the other information (vision, community etc.) requires that you know the fundamentals of what GNOME is in order to have impact. Furthermore, attention is put on the user-visible aspects of GNOME – the shell and our applications. Hopefully someone who picks up a flyer might already recognize a few of the application icons shown in there.


                The rest of the flyer then moves on to explaining our vision and values. The first section introduces our community structure in GNOME. The second section is a (very) friendly version describing exactly what “free” means when we say that GNOME is a free desktop.


                The last section is about the more personal goals you can set for yourself, which can serve as a motivational factor when getting involved with GNOME. Initially you might want to do this in your free time to improve your skills, get more experience doing team work and so on. My experience is that this motivation is later replaced by relational motivation factors as you start to bond with the community and partake in the team activities. The great photos shown in this section were taken by Carla Quintana Carrasco (thanks!).


                Finally, the flyer contains information on how to get involved – online and possibly via a local GNOME group.

                Getting the flyer

                Is your local free software group doing an outreach event? Are you having a GNOME booth at a conference? Then this flyer is for you. I have uploaded it as a folder on the engagement team’s GNOME cloud where the source SVGs and sample PDFs are freely available for download (photos under CC-BY-NC 2.0, the rest CC-BY-SA 4.0).

                If you have a local group and plan on printing this for an event, I encourage you to edit the gnome-outreach-flyer-middle-pages.svg and add links to your web pages, contact information, social media etc. there before printing. As an example see the gnome_flyer_bucharest.pdf.

                This iteration of the flyer is made to be printed out on a duplex A4 printer. I definitely recommend getting it printed at a print shop on some proper paper, but otherwise the flyer can be printed on any printer – you can specify 0mm bleed when saving the SVG as PDF from Inkscape. In other cases I have made the margins big enough for 4-5 mm bleed (although I’m not sure how to make Inkscape do crop marks). Notice that the A4 page containing front and back has to be upside down for the end result to be correct. If you need to edit the front page, you can use “Ctrl+A” and afterwards “V” to flip the contents vertically back and forth in Inkscape. Use a tool like PdfMod to join the front-back PDF with the middle-contents PDF and you are good to go.

                In the future I’m thinking it might be nice to expand the flyer a bit. I have some ideas including:

                • Make a separate page showing off pictures from our GNOME conferences and talk a bit about them.
                • Have sections dedicated to talking about the different teams in GNOME such as translation, engagement, coding, etc.
                • Have a dedicated page to possible local groups with more elaborate information and pictures of key persons in the group a newcomer might want to talk to.
                • At the moment the flyer targets people who are already familiar with Linux. That might not always be the case, so it might be wise to have (optional) pages dedicated to talking about Linux, GNUs and penguins.

                If you have feedback or ideas yourself, do feel free to share them – this is the first iteration. Thanks to Rares, Nuritzi, Alexandre and the rest of the engagement team for giving comments, feedback and encouragement for working on this!

                October 13, 2016

                Software Build Topologies

                In recent months, I’ve found myself discussing the pros and cons of different approaches used for building complete operating systems (desktops or appliances), or, let’s say, software build topologies. What I’ve found is that I frequently lack the vocabulary to categorize existing build topologies or to describe common characteristics of build systems and the decisions and tradeoffs which various projects have made. This is mostly just a curiosity piece; a writeup of some of my observations on different build topologies.

                Self Hosting Build Topologies

                Broadly, one could say that the vast majority of build systems use one form or another of self hosting build topology. We use this term to describe tools which build themselves; Wikipedia says that self hosting is:

                the use of a computer program as part of the toolchain or operating system that produces new versions of that same program

                While this term does not accurately describe a category of build topology, I’ve been using it loosely to describe build systems which use software installed on the host to build the source for that same host; it’s a pretty good fit.

                Within this category there are, as far as I can observe, two separate topologies in use; let’s call these the Mirror Build and the Sequential Build, for lack of any existing terminology I can find.

                The Mirror Build

                This topology is one where the system has already been built once, either on your computer or another one. This build process treats the bootstrapping of an operating system as an ugly and painful process for the experts, only to be repeated when porting the operating system to a new architecture.

                The basic principle here is that once you have an entire system that is already built, you can use that entire system to build a complete new set of packages for the next version of that system. Thus the next version is a sort of reflection of the previous version.

                One of the negative results of this approach is that circular dependencies tend to crop up unnoticed, since you already have a complete set of the last version of everything. For example: it’s easy enough to have perl require autotools to build, even though you needed perl to build autotools in the first place. This doesn’t matter because you already have both installed on the host.

                Of course circular dependencies become a problem when you need to bootstrap a system like this for a new architecture, and so you end up with projects like this one, specifically tracking down circular dependencies that have cropped up, to ensure that a build from scratch actually remains possible.

                One common characteristic of build systems which are based on the Mirror Build is that they are usually largely non-deterministic. Usually, whatever tools and library versions happen to be lying around on the system can be used to build a new version of a given module, so long as each dependency of that module is satisfied. A dependency here is usually quite loosely specified as a lower minimal bound dependency: the oldest version of foo which can possibly be used to build or link against, will suffice to build bar.
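                For instance, a lower minimal bound dependency is declared like this in the two major packaging formats (the package name and version here are invented for illustration):

```
RPM spec file:       BuildRequires: libfoo-devel >= 1.2
Debian control file: Build-Depends: libfoo-dev (>= 1.2)
```

                Any installed version of libfoo at or above 1.2 satisfies the dependency, which is precisely what makes the resulting build non-deterministic.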

                This Mirror Build is historically the most popular, born of the desire to allow the end user to pick up some set of sources and compile the latest version, while encountering the least resistance to do so.

                While the famous RPM and Debian build systems have their roots in this build topology, it’s worth noting that surrounding tooling has since evolved to build RPMs or Debian packages under a different topology. For instance, when using OBS to build RPMs or Debian packages: each package is built in sequence, staging only the dependencies which the next package needs from previous builds into a minimal VM. Since we are bootstrapping often and isolating the environment for each build to occur in sequence from a predefined manifest of specifically versioned packages, it is much more deterministic and becomes a Sequential Build instead.

                The Sequential Build

                The Sequential Build, again for the lack of any predefined terminology I can find, is one where the entire OS can be built from scratch. Again and again.

                The LFS build, without any backing build system per se, I think is a prime example of this topology.

                This build can still be said to be self hosting; indeed, one previously built package is used to build the next package in sequence. Aside from the necessary toolchain bootstrapping: the build host where all the tools are executed is also the target where all software is intended to run. The distinction I make here is that only packages (and those package versions) which are part of the resulting OS are ever used to build that same OS, so a strict order must be enforced, and in some cases the same package needs to be built more than once to achieve the end result; however, determinism is favored.

                It’s also noteworthy that this common property, where host = target, is what most project build scripts generally expect, while cross compiles (more on that below) typically have to struggle and force things to build in some contorted way.

                While the Ports, Portage, and Pacman build systems, which encourage the build to occur on your own machine, seem to lend themselves better to the Sequential Build, this only seems to be true at bootstrap time (I would need to look more closely into these systems to say more). Also, these systems are not without their own set of problems. With Gentoo’s Portage, one can also fall into circular dependency traps where one needs to build a given package twice while tweaking the USE flags along the way. Also with Portage, package dependencies are not strictly defined but again loosely defined as lower minimal bound dependencies.

                I would say that a Sequential Self Hosted Build lends itself better to determinism and repeatability, but a build topology which is sequential is not inherently deterministic.

                Cross Compiles

                The basic concept of Cross Compiling is simple: use a compiler that runs on the host and outputs binaries to be run later on the target.

                But the activity of cross compiling an entire OS is much more complex than just running a compiler on your host and producing binary output for a target.

                Direct Cross Build

                It is theoretically possible to compile everything for the target using only host tools and a host installed cross compiler, however I have yet to encounter any build system which uses such a topology.

                This is probably primarily because it would require that many host installed tools be sysroot aware beyond just the cross compiler. Hence we resort to a Multi Stage Cross Build.

                Multi Stage Cross Build

                This Multi Stage Cross Build, which can be observed in projects such as Buildroot and Yocto shares some common ground with the Sequential Self Hosted Build topology, except that the build is run in multiple stages.

                In the first stage, all the tools which might be invoked during the cross build are built into a sysroot prefix of host-runnable tooling. This is where you will find your host -> target cross compiler along with autotools, pkg-config, flex, bison, and basically every tool you may need to run on your host during the build. These tools installed in your host tooling sysroot are specially configured so that when they are run they find their comrades in the same sysroot but look for other payload assets (like shared libraries) in the eventual target sysroot.

                Only after this stage, which may have involved patching some tooling to make it behave well for the next stage, do we really start cross compiling.

                In the second stage we use only tools built into the toolchain’s sysroot to build the target, starting by cross compiling a C library and a native compiler for your target architecture.

                Aside from this defining property, that a cross compile is normally done in separate stages, there is the detail that pretty much everything under the sun besides the toolchain itself (which must always support bootstrapping and cross compiling) needs to be coerced into cooperation with added obscure environment variables, or sometimes beaten into submission with patches.
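                A sketch of what those environment variables typically look like when invoking one package’s build in the second stage (the target triplet and paths are invented examples):

```shell
# Point the host-runnable cross tools at the target sysroot, so that the
# compiler and pkg-config resolve headers and libraries for the target.
SYSROOT=/opt/cross/armv7-sysroot
export CC=armv7-unknown-linux-gnueabihf-gcc
export PKG_CONFIG_SYSROOT_DIR="$SYSROOT"
export PKG_CONFIG_LIBDIR="$SYSROOT/usr/lib/pkgconfig"
# a typical autotools package build would then run:
#   ./configure --host=armv7-unknown-linux-gnueabihf --prefix=/usr && make
echo "$PKG_CONFIG_LIBDIR"
```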

                Virtual Cross Build

                While a cross compile will always be required for the base toolchain, I am hopeful that with modern emulation, tools like Scratchbox 2, and approaches such as Aboriginal Linux, we can ultimately abolish the Multi Stage Cross Build topology entirely. The added work involved in maintaining build scripts which are cross-build aware, and the constant friction with downstream communities which insist on cross building upstream software, is just not worth the effort when a self hosting build can be run in a virtual environment.

                Some experimentation already exists: the Mer Project was successful in running OBS builds inside a Scratchbox 2 environment to cross compile RPMs without having to deal with the warts of traditional cross compiles. I also did some experimentation this year building the GNU/Linux/GNOME stack with Aboriginal Linux.

                This kind of virtual cross compile does not constitute a unique build topology since it in fact uses one of the Self Hosting topologies inside a virtual environment to produce a result for a new architecture.


                In closing, there are certainly a great variety of build systems out there, all of which have made different design choices and share common properties. Not much vocabulary exists to describe these characteristics. This suggests that the area of building software remains somewhat unexplored, and that the tooling we use for such tasks is largely born of necessity, barely holding together with lots of applied duct tape. With interesting new developments for distributing software such as Flatpak, and studies into how to build software reliably and deterministically, such as the reproducible builds project, hopefully we can expect some improvements in this area.

                I hope you’ve enjoyed my miscellaneous ramblings of the day.

                GNOME Release party in Sydney !!

                We're hosting a GNOME release party!! Come celebrate with us at the OpenLearning Sydney office.
                If anyone wants to give a short talk, please feel free to contact me.

                ZeMarmot monthly report for September 2016

                The past month’s report will be short. Indeed, Aryeom sprained the thumb of her drawing hand, as we mentioned a month ago. What we did not plan for is that it would take this long to get better (the doctor initially said it should be better within 2 weeks… well, she was wrong!). Aryeom actually tried to work again after a two-week rest (i.e. following the doctor’s advice), but after a few days of work the pain was pretty bad and she had to stop.

                Later, Aryeom started working with her left hand. Below is her first drawing with it:

                Left-hand drawing by Aryeom for Simwoool magazine


                I personally think it is very cool, but she says it is not good enough for professional work. Also, she is a few times slower with this hand for the moment. Still, for ZeMarmot she has started animating again with her left hand (wouhou!), though not doing the final painting/rendering. She is waiting for her right hand to get better for that.
                In the meantime, she has regular sessions with a physiotherapist, and on Friday she’ll get an X-ray of the hand to make sure everything is OK (since the pain has lasted longer than expected).

                Because of this, the month was slow. We also decided to decline a few conferences, in particular the upcoming Capitole du Libre, quite a big event in France in November, because we wanted to focus on ZeMarmot instead, especially given the delay this sprain caused in the schedule. We will likely not participate in any public events until next year.

                Probably now is a time when your support will matter more than ever because it has been pretty hard, on Aryeom in particular, as you can guess. When your hand is your main work tool, you can imagine how it feels to have such an issue. :-/
                Do not hesitate to send her a few nice words through comments!
                Next month, hopefully the news will be a lot better.

                October 12, 2016

                Rough Notes for New FLOSS Contributors On The Scientific Method and Usable History

Some thrown-together thoughts towards a more comprehensive writeup. It's advice about how to get along better as a new open source participant, based on the fundamental wisdom that you weren't the first person here and you won't be the last.

                We aren't just making code. We are working in a shared workplace, even if it's an online place rather than a physical office or laboratory, making stuff together. The work includes not just writing functions and classes, but experiments and planning and coming up with "we ought to do this" ideas. And we try to make it so that anyone coming into our shared workplace -- or anyone who's working on a different part of the project than they're already used to -- can take a look at what we've already said and done, and reuse the work that's been done already.

                We aren't just making code. We're making history. And we're making a usable history, one that you can use, and one that the contributor next year can use.

                So if you're contributing now, you have to learn to learn from history. We put a certain kind of work in our code repositories, both code and notes about the code. git grep idea searches a code repository's code and comments for the word "idea", git log --grep="idea" searches the commit history for times we've used the word "idea" in a commit message, and git blame shows you who last changed every line of that codefile, and when. And we put a certain kind of work into our conversations, in our mailing lists and our bug/issue trackers. We say "I tried this and it didn't work" or "here's how someone else should implement this" or "I am currently working on this". You will, with practice, get better at finding and looking at these clues, at finding the bits of code and conversation that are relevant to your question.

                And you have to learn to contribute to history. This is why we want you to ask your questions in public -- so that when we answer them, someone today or next week or next year can also learn from the answer. This is why we want you to write emails to our mailing lists where you explain what you're doing. This is why we ask you to use proper English when you write code comments, and why we have rules for the formatting and phrasing of commit messages, so it's easier for someone in the future to grep and skim and understand. This is why a good question or a good answer has enough context that other people, a year from now, can see whether it's relevant to them.

                Relatedly: the scientific method is for teaching as well as for troubleshooting. I compared an open source project to a lab before. In the code work we do, we often use the scientific method. In order for someone else to help you, they have to create, test, and prove or disprove theories -- about what you already know, about what your code is doing, about the configuration on your computer. And when you see me asking a million questions, asking you to try something out, asking what you have already tried, and so on, that's what I'm doing. I'm generally using the scientific method. I'm coming up with a question and a hypothesis and I'm testing it, or asking you to test it, so we can look at that data together and draw conclusions and use them to find new interesting questions to pursue.


For example, suppose you report that some steps fail on your machine:

• Expected result: running the same steps on your machine will give you the same results as on mine.
                • Actual observation: you get a different result, specifically, an error that includes a permissions problem.
                • Hypothesis: the relevant directories or users aren't set up with the permissions they need.
                • Next step: Request for further data to prove or disprove hypothesis.
                So I'll ask a question to try and prove or disprove my hypothesis. And if you never reply to my question, or you say "oh I fixed it" but don't say how, or if you say "no that's not the problem" but you don't share the evidence that led you to that conclusion, it's harder for me to help you. And similarly, if I'm trying to figure out what you already know so that I can help you solve a problem, I'm going to ask a lot of diagnostic questions about whether you know how to do this or that. And it's ok not to know things! I want to teach you. And then you'll teach someone else.

                In our coding work, it's a shared responsibility to generate hypotheses and to investigate them, to put them to the test, and to share data publicly to help others with their investigations. And it's more fruitful to pursue hypotheses, to ask "I tried ___ and it's not working; could the reason be this?", than it is to merely ask "what's going on?" and push the responsibility of hypothesizing and investigation onto others.

                This is a part of balancing self-sufficiency and interdependence. You must try, and then you must ask. Use the scientific method and come up with some hypotheses, then ask for help -- and ask for help in a way that helps contribute to our shared history, and is more likely to help ensure a return-on-investment for other people's time.

                So it's likely to go like this:

                1. you try to solve your problem until you get stuck, including looking through our code and our documentation, then start formulating your request for help
                2. you ask your question
                3. someone directs you to a document
                4. you go read that document, and try to use it to answer your question
                5. you find you are confused about a new thing
                6. you ask another question
                7. now that you have demonstrated that you have the ability to read, think, and learn new things, someone has a longer talk with you to answer your new specific question
                8. you and the other person collaborate to improve the document that you read in step 4 :-)

                This helps us make a balance between person-to-person discussion and documentation that everyone can read, so we save time answering common questions but also get everyone the personal help they need. This will help you understand the rhythm of help we provide in livechat -- including why we prefer to give you help in public mailing lists and channels, instead of in one-on-one private messages or email. We prefer to hear from you and respond to you in public places so more people have a chance to answer the question, and to see and benefit from the answer.

                We want you to learn and grow. And your success is going to include a day when you see how we should be doing things better, not just with a new feature or a bugfix in the code, but in our processes, in how we're organizing and running the lab. I also deeply want for you to take the lessons you learn -- about how a group can organize itself to empower everyone, about seeing and hacking systems, about what scaffolding makes people more capable -- to the rest of your life, so you can be freer, stronger, a better leader, a disruptive influence in the oppressive and needless hierarchies you encounter. That's success too. You are part of our history and we are part of yours, even if you part ways with us, even if the project goes defunct.

                This is where I should say something about not just making a diff but a difference, or something about the changelog of your life, but I am already super late to go on my morning jog and this was meant to be a quick-and-rough braindump anyway...

                An incomplete history of language facilities for concurrency

                I have lately been in the market for better concurrency facilities in Guile. I want to be able to write network servers and peers that can gracefully, elegantly, and efficiently handle many tens of thousands of clients and other connections, but without blowing the complexity budget. It's a hard nut to crack.

                Part of the problem is implementation, but a large part is just figuring out what to do. I have often thought that modern musicians must be crushed under the weight of recorded music history, but it turns out in our humble field that's also the case; there are as many concurrency designs as languages, just about. In this regard, what follows is an incomplete, nuanced, somewhat opinionated history of concurrency facilities in programming languages, with an eye towards what I should "buy" for the Fibers library I have been tinkering on for Guile.

                * * *

                Modern machines have the raw capability to serve hundreds of thousands of simultaneous long-lived connections, but it’s often hard to manage this at the software level. Fibers tries to solve this problem in a nice way. Before discussing the approach taken in Fibers, it’s worth spending some time on history to see how we got here.

                One of the most dominant patterns for concurrency these days is “callbacks”, notably in the Twisted library for Python and the Node.js run-time for JavaScript. The basic observation in the callback approach to concurrency is that the efficient way to handle tens of thousands of connections at once is with low-level operating system facilities like poll or epoll. You add all of the file descriptors that you are interested in to a “poll set” and then ask the operating system which ones are readable or writable, as appropriate. Once the operating system says “yes, file descriptor 7145 is readable”, you can do something with that socket; but what? With callbacks, the answer is “call a user-supplied closure”: a callback, representing the continuation of the computation on that socket.

Building a network service with a callback-oriented concurrency system means breaking the program into little chunks that can run without blocking. Wherever a program could block, instead of just continuing the program, you register a callback. Unfortunately this requirement permeates the program, from top to bottom: you always pay the mental cost of inverting your program’s control flow by turning it into callbacks, and you always incur the run-time cost of closure creation, even when the particular I/O could proceed without blocking. It’s a somewhat galling requirement, given that this contortion is required of the programmer, but could be done by the compiler. We Schemers demand better abstractions than manual, obligatory continuation-passing-style conversion.

                Callback-based systems also encourage unstructured concurrency, as in practice callbacks are not the only path for data and control flow in a system: usually there is mutable global state as well. Without strong patterns and conventions, callback-based systems often exhibit bugs caused by concurrent reads and writes to global state.

                Some of the problems of callbacks can be mitigated by using “promises” or other library-level abstractions; if you’re a Haskell person, you can think of this as lifting all possibly-blocking operations into a monad. If you’re not a Haskeller, that’s cool, neither am I! But if your typey spidey senses are tingling, it’s for good reason: with promises, your whole program has to be transformed to return promises-for-values instead of values anywhere it would block.

                An obvious solution to the control-flow problem of callbacks is to use threads. In the most generic sense, a thread is a language feature which denotes an independent computation. Threads are created by other threads, but fork off and run independently instead of returning to their caller. In a system with threads, there is implicitly a scheduler somewhere that multiplexes the threads so that when one suspends, another can run.

In practice, the concept of threads is often conflated with a particular implementation: kernel threads. Kernel threads are very low-level abstractions that are provided by the operating system. The nice thing about kernel threads is that they can use any CPU that the kernel knows about. That’s an important factor in today’s computing landscape, where Moore’s law seems to be giving us more cores instead of more gigahertz.

                However, as a building block for a highly concurrent system, kernel threads have a few important problems.

                One is that kernel threads simply aren’t designed to be allocated in huge numbers, and instead are more optimized to run in a one-per-CPU-core fashion. Their memory usage is relatively high for what should be a lightweight abstraction: some 10 kilobytes at least and often some megabytes, in the form of the thread’s stack. There are ongoing efforts to reduce this for some systems but we cannot expect wide deployment in the next 5 years, if ever. Even in the best case, a hundred thousand kernel threads will take at least a gigabyte of memory, which seems a bit excessive for book-keeping overhead.

                Kernel threads can be a bit irritating to schedule, too: when one thread suspends, it’s for a reason, and it can be that user-space knows a good next thread that should run. However because kernel threads are scheduled in the kernel, it’s rarely possible for the kernel to make informed decisions. There are some “user-mode scheduling” facilities that are in development for some systems, but again only for some systems.

                The other significant problem is that building non-crashy systems on top of kernel threads is hard to do, not to mention “correct” systems. It’s an embarrassing situation. For one thing, the low-level synchronization primitives that are typically provided with kernel threads, mutexes and condition variables, are not composable. Also, as with callback-oriented concurrency, one thread can silently corrupt another via unstructured mutation of shared state. It’s worse with kernel threads, though: a kernel thread can be interrupted at any point, not just at I/O. And though callback-oriented systems can theoretically operate on multiple CPUs at once, in practice they don’t. This restriction is sometimes touted as a benefit by proponents of callback-oriented systems, because in such a system, the callback invocations have a single, sequential order. With multiple CPUs, this is not the case, as multiple threads can run at the same time, in parallel.

                Kernel threads can work. The Java virtual machine does at least manage to prevent low-level memory corruption and to do so with high performance, but still, even Java-based systems that aim for maximum concurrency avoid using a thread per connection because threads use too much memory.

In this context it’s no wonder that there’s a third strain of concurrency: shared-nothing message-passing systems like Erlang. Erlang isolates each thread (called a process in the Erlang world), giving each its own heap and “mailbox”. Processes can spawn other processes, and the concurrency primitive is message-passing. A process that tries to receive a message from an empty mailbox will “block”, from its perspective. In the meantime the system will run other processes. Message sends never block, oddly; instead, sending to a process with many messages pending makes it more likely that Erlang will pre-empt the sending process. It’s a strange tradeoff, but it makes sense when you realize that Erlang was designed for network transparency: the same message send/receive interface can be used to send messages to processes on remote machines as well.

No network is truly transparent, however. At the most basic level, network sends are much slower than local sends. Whereas a message sent to a remote process has to be written out byte-by-byte over the network, there is no need to copy immutable data within the same address space. The complexity of a remote message send is O(n) in the size of the message, whereas a local immutable send is O(1). This suggests that hiding the different complexities behind one operator is the wrong thing to do. And indeed, given byte read and write operators over sockets, it’s possible to implement remote message send and receive as a process that serializes and parses messages between a channel and a byte sink or source. In this way we get cheap local channels, and network shims are under the programmer’s control. This is the approach that the Go language takes, and is the one we use in Fibers.
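Such a shim can be sketched in a few lines of Python: remote send/receive built on plain byte I/O over a socket, with the serialization cost made explicit at the boundary (the newline-delimited JSON framing here is an arbitrary choice for illustration, not what Erlang, Go, or Fibers actually use):

```python
import json
import socket

# A "network shim": message send/receive implemented on top of plain
# byte read/write operators, so the O(n) serialization cost is visible
# and under the programmer's control.

def remote_send(sock, msg):
    # Serialize and write the whole message out byte-by-byte.
    sock.sendall(json.dumps(msg).encode("utf-8") + b"\n")

def remote_recv(sock):
    # Read until the framing delimiter, then parse the message back.
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(1)  # byte-at-a-time for simplicity
    return json.loads(buf)

a, b = socket.socketpair()
remote_send(a, {"op": "ping", "seq": 1})
print(remote_recv(b))  # {'op': 'ping', 'seq': 1}
```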

                Structuring a concurrent program as separate threads that communicate over channels is an old idea that goes back to Tony Hoare’s work on “Communicating Sequential Processes” (CSP). CSP is an elegant tower of mathematical abstraction whose layers form a pattern language for building concurrent systems that you can still reason about. Interestingly, it does so without any concept of time at all, instead representing a thread’s behavior as a trace of instantaneous events. Threads themselves are like functions that unfold over the possible events to produce the actual event trace seen at run-time.

                This view of events as instantaneous happenings extends to communication as well. In CSP, one communication between two threads is modelled as an instantaneous event, partitioning the traces of the two threads into “before” and “after” segments.

Practically speaking, this has ramifications in the Go language, which was heavily inspired by CSP. You might think that a channel is just an asynchronous queue that blocks when writing to a full queue, or when reading from an empty queue. That’s a bit closer to the Erlang conception of how things should work, though as we mentioned, Erlang simply slows down writes to full mailboxes rather than blocking them entirely. However, that’s not what Go and other systems in the CSP family do; sending a message on a channel will block until there is a receiver available, and vice versa. The threads are said to “rendezvous” at the event.

                Unbuffered channels have the interesting property that you can select between sending a message on channel a or channel b, and in the end only one message will be sent; nothing happens until there is a receiver ready to take the message. In this way messages are really owned by threads and never by the channels themselves. You can of course add buffering if you like, simply by making a thread that waits on either sends or receives on a channel, and which buffers sends and makes them available to receives. It’s also possible to add explicit support for buffered channels, as Go, core.async, and many other systems do, which can reduce the number of context switches as there is no explicit buffer thread.
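To make the rendezvous behavior concrete, here is a toy emulation in Python using threads and semaphores. It only illustrates the semantics described above; it is not how Go or Fibers implement channels, and the Channel class is invented for this example:

```python
import threading

class Channel:
    """Toy unbuffered (rendezvous) channel: send blocks until a receiver
    takes the value, and recv blocks until a sender offers one, so the
    message is always owned by a thread and never by the channel."""
    def __init__(self):
        self._send_lock = threading.Lock()        # one sender at a time
        self._offered = threading.Semaphore(0)    # sender -> receiver
        self._taken = threading.Semaphore(0)      # receiver -> sender
        self._value = None

    def send(self, value):
        with self._send_lock:
            self._value = value
            self._offered.release()  # announce the offer...
            self._taken.acquire()    # ...and wait for the rendezvous

    def recv(self):
        self._offered.acquire()      # wait for a sender's offer
        value = self._value
        self._taken.release()        # complete the rendezvous
        return value

ch = Channel()
results = []
t = threading.Thread(target=lambda: results.append(ch.recv()))
t.start()
ch.send("hello")  # does not return until the receiver has the value
t.join()
print(results)  # ['hello']
```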

Whether to buffer or not to buffer is a tricky choice. It’s possible to implement singly-buffered channels in a system like Erlang via an explicit send/acknowledge protocol, though it seems difficult to implement completely unbuffered channels. As we mentioned, it’s possible to add buffering to an unbuffered system by the introduction of explicit buffer threads. In the end, though, in Fibers we follow CSP’s lead so that we can implement the nice select behavior that we mentioned above.

                As a final point, select is OK but is not a great language abstraction. Say you call a function and it returns some kind of asynchronous result which you then have to select on. It could return this result as a channel, and that would be fine: you can add that channel to the other channels in your select set and you are good. However, what if what the function does is receive a message on a channel, then do something with the message? In that case the function should return a channel, plus a continuation (as a closure or something). If select results in a message being received over that channel, then we call the continuation on the message. Fine. But, what if the function itself wanted to select over some channels? It could return multiple channels and continuations, but that becomes unwieldy.

What we need is an abstraction over asynchronous operations, and that is the main idea of a CSP-derived system called “Concurrent ML” (CML). Originally implemented as a library on top of Standard ML of New Jersey by John Reppy, CML provides this abstraction, which in Fibers is called an operation[1]. Calling send-operation on a channel returns an operation, which is just a value. Operations are like closures in a way; a closure wraps up code in its environment, which can be later called many times or not at all. Operations likewise can be performed[2] many times or not at all; performing an operation is like calling a function. The interesting part is that you can compose operations via the wrap-operation and choice-operation combinators. The former lets you bundle up an operation and a continuation. The latter lets you construct an operation that chooses over a number of operations. Calling perform-operation on a choice operation will perform one and only one of the choices. Performing an operation will call its wrap-operation continuation on the resulting values.
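As a rough sketch of the shape of this API, here is a toy Python model of first-class operations with wrap and choice combinators. The names loosely mirror Fibers' wrap-operation, choice-operation, and perform-operation, but everything here is invented for illustration, and it busy-waits where a real CML implementation would block in the scheduler:

```python
import random

class Operation:
    """Toy CML-style operation: a value describing a potential action.
    _try reports (ready?, thunk) without committing; perform() runs it."""
    def __init__(self, try_fn):
        self._try = try_fn

    def perform(self):
        # Spin until some base operation is ready (a real implementation
        # would suspend the fiber and be woken by the scheduler).
        while True:
            ready, thunk = self._try()
            if ready:
                return thunk()

    def wrap(self, f):
        # wrap-operation: the same operation with continuation f on top.
        def try_fn():
            ready, thunk = self._try()
            return (True, lambda: f(thunk())) if ready else (False, None)
        return Operation(try_fn)

def choice(*ops):
    # choice-operation: an operation that commits to exactly one
    # ready alternative among its arguments.
    def try_fn():
        for op in random.sample(ops, len(ops)):  # shuffle to avoid bias
            ready, thunk = op._try()
            if ready:
                return True, thunk
        return False, None
    return Operation(try_fn)

# An always-ready "receive" standing in for a channel receive-operation:
recv_op = Operation(lambda: (True, lambda: 41))
result = choice(recv_op.wrap(lambda v: v + 1)).perform()
print(result)  # 42
```

The point of the design is visible even in the toy: because an operation is a plain value, a function can return one (already wrapped with its continuation), and the caller can still combine it with others via choice before committing.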

While it’s possible to implement Concurrent ML in terms of Go’s channels and baked-in select statement, it’s more expressive to do it the other way around, as that also lets us implement other operation types besides channel send and receive, for example timeouts and condition variables.

[1] CML uses the term event, but I find this to be a confusing name. In this isolated article my terminology probably looks confusing, but in the context of the library I think it can be OK. The jury is out though.

[2] In CML, synchronized.

                * * *

                Well, that's my limited understanding of the crushing weight of history. Note that part of this article is now in the Fibers manual.

                Thanks very much to Matthew Flatt, Matthias Felleisen, and Michael Sperber for pushing me towards CML. In the beginning I thought its benefits were small and complication large, but now I see it as being the reverse. Happy hacking :)

                October 11, 2016

                Unix Userland as C++ Coroutines

The Unix userland concept is simple: standalone programs that each do only one thing, chained together. A classical example is removing duplicates from a file, which looks like this:

                cat filename.txt | sort | uniq

                This is a very powerful concept. However there is a noticeable performance overhead. In order to do this, the shell needs to launch three different processes. In addition all communication between these programs happens over Unix pipes, meaning a round trip call into the kernel for each line of text transferred (roughly, not an exact number in all cases). Even though processes and pipes are cheap on Unix they still cause noticeable slowdown (as anyone who has run large shell scripts knows).

                Meanwhile there has been a lot of effort to put coroutines into C++. A very simple (and not fully exhaustive) way of describing coroutines is to say that they basically implement the same functionality as Python generators. That is, this Python code:

                def gen():
                    yield 1
                    yield 2
                    yield 3

                could be written in C++ like this (syntax is approximate and might not even compile):

generator<int> gen() {
    co_yield 1;
    co_yield 2;
    co_yield 3;
}

If we look more closely at the Unix userland specification we find that even though it is (always?) implemented with processes and pipes, it is not required to be. If we use C++’s “as if” rule we are free to implement shell pipelines in any way we want as long as the output is the same. This gives us a straightforward guide to implementing the Unix userland with C++ coroutines.

We can model each Unix command line application as a coroutine that reads its input one line at a time, operates on it, and writes the result (to either stdout or stderr) in the same way. Then we parse a shell pipeline into its constituent commands, connect the output of element n to the input of element n + 1, and keep reading the output of the last pipeline element until EOF is reached. This allows us to run the entire pipeline without spawning a single process.
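The project itself is C++, but since the post compares coroutines to Python generators, the same pipeline shape can be sketched directly with generators (a hypothetical illustration, not the Github project's code):

```python
def cat(lines):
    # Source stage; a real "cat" would read lines from a file.
    yield from lines

def sort_cmd(source):
    # sort cannot stream: it must consume all input before yielding.
    yield from sorted(source)

def uniq(source):
    # uniq drops consecutive duplicates, one line at a time.
    prev = object()  # sentinel that compares unequal to any line
    for line in source:
        if line != prev:
            yield line
        prev = line

# "cat filename.txt | sort | uniq" with no processes and no pipes:
print(list(uniq(sort_cmd(cat(["b", "a", "b", "a"])))))  # ['a', 'b']
```

Each stage pulls lines from the previous one on demand, so connecting stage n to stage n + 1 is just ordinary function composition.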

A simple implementation with a few basic commands is available on Github. Unfortunately real coroutines were not properly available on Linux at the time of writing, only in a random unmerged branch of Clang. Therefore the project contains a simple class-based implementation that mimics coroutines. Interested readers are encouraged to go through the code and work out how much boilerplate code a proper coroutine implementation would eliminate.


                This implementation does not handle background processes/binary data/file redirection/invoking external programs/yourthinghere!

                That is correct.

                Isn't this just a reinvented Busybox?

                Yes and no. Busybox provides all executables in a single binary but still uses processes to communicate between different parts of the pipeline. This implementation does not spawn any processes.

                Couldn't you run each "process" in its own thread to achieve parallelism?

                Yes. However I felt like doing this instead.

                October 10, 2016

                Vala and Reproducibility

                Reproducibility, in Debian, is:

With free software, anyone can inspect the source code for malicious flaws. But Debian provides binary packages to its users. The idea of “deterministic” or “reproducible” builds is to empower anyone to verify that no flaws have been introduced during the build process by reproducing byte-for-byte identical binary packages from a given source.

Then, in order to provide reproducible binaries for Vala projects, we need to:

1. Make sure you distribute the generated C source code
2. If your project is a library, make sure to distribute the VAPI and GIR files

This allows the build process to avoid calling valac to generate C source code, VAPI and GIR files from your Vala sources.

Because the C source is distributed in a release’s tarball, any Vala project can be binary reproducible from sources.

In order to produce development packages, you should distribute VAPI and GIR files, along with .h ones. They should be included in your tarball, to avoid having valac produce them.

GXml distributes all of its C sources, but not its GIR and VAPI files. This should be fixed in the next release.

GNOME Clocks distributes just Vala sources; this is why bug #772717 has been filed against Clocks.

libgee also distributes only Vala sources, but no Debian bug exists against it. Maybe its Vala source annotations help, but it may be a good idea to distribute C, VAPI and GIR files in future versions.

My patches to GNOME Builder produce Makefiles that generate C sources from Vala ones. They need to be updated in order to distribute VAPI and GIR files with your Vala project.

                October 09, 2016

                Ease of 3D Printing in Fedora

We had a Fedora booth at LinuxDays 2016 in Prague, and one of our attractions was Miro Hrončok’s 3D printer, a Lulzbot Mini. Because Miro was busy helping organize the conference, he just left the printer at the booth and I had to set it up myself. And it really surprised me how easy it is to 3D print using Fedora. Basically what I had to do was:

1. installing Cura from the official repositories,
2. plugging the printer into a USB port (it connected automatically thanks to Miro’s udev rules),
3. starting Cura and choosing the model of the printer and the material I was going to print with,
4. opening a file with the 3D model I wanted to print,
5. hitting the “Print” button and watching the printer in action.

Fedora has been known as the best OS for 3D printing for some time, mainly due to the work of Miro (he packaged all the available open source software for 3D printing, prepared udev rules to automatically connect to 3D printers, etc.), but I was still surprised how easy it is to 3D print with Fedora these days. It really took just a couple of minutes to go from a stock system to the start of the actual printing. It’s almost as simple as printing on paper.
There is still room for improvement though. Some 3D printing apps (Cura Lulzbot Edition is one of them) are available in the official repositories of Fedora, but don’t have an appdata file, so they don’t show up in GNOME Software. And it would also be nice to have a “3D Printing” category in GNOME Software, so that the software is more discoverable for users.

                Puppet/puppetdb/storeconfigs validation issues

                Over the past year I’ve chipped away at setting up new servers for apestaart and managing the deployment in puppet as opposed to a by now years old manual single server configuration that would be hard to replicate if the drives fail (one of which did recently, making this more urgent).

                It’s been a while since I felt like I was good enough at puppet to love and hate it in equal parts, but mostly manage to control a deployment of around ten servers at a previous job.

                Things were progressing an hour or two here and there at a time, and accelerated when a friend in our collective was launching a new business for which I wanted to make sure he had a decent redundancy setup.

I was saving the hardest part for last – setting up Nagios monitoring with Matthias Saou’s puppet-nagios module, which needs Exported Resources and storeconfigs working.

                Even on the previous server setup based on CentOS 6, that was a pain to set up – needing MySQL and ruby’s ActiveRecord. But it sorta worked.

                It seems that for newer puppet setups, you’re now supposed to use something called PuppetDB, which is not in fact a database on its own as the name suggests, but requires another database. Of course, it chose to need a different one – Postgres. Oh, and PuppetDB itself is in Java – now you get the cost of two runtimes when you use puppet!

So, to add useful Nagios monitoring to my puppet deploys, which without it are quite happy to be simple puppet apply runs from a local git checkout on each server, I now need storeconfigs, which needs puppetdb, which pulls in Java and Postgres. And that’s just so a system that handles distributed configuration can actually be told about the results of that distributed configuration and create a useful feedback cycle allowing it to do useful things with the observed result.

                Since I test these deployments on local vagrant/VirtualBox machines, I had to double their RAM because of this – even just the puppetdb java server by default starts with 192MB reserved out of the box.

                But enough complaining about these expensive changes – at least there was a working puppetdb module that managed to set things up well enough.

                It was easy enough to get the first host monitored, and apart from some minor changes (like updating the default Nagios config template from 3.x to 4.x), I had a familiar Nagios view working showing results from the server running Nagios itself. Success!

                But all runs from the other VMs did not trigger adding any exported resources, and I couldn’t find anything wrong in the logs. In fact, I could not find /var/log/puppetdb/puppetdb.log at all…

                fun with utf-8

                After a long night of experimenting and head scratching, I chased down a first clue in /var/log/messages saying puppet-master[17702]: Ignoring invalid UTF-8 byte sequences in data to be sent to PuppetDB

                I traced that down to puppetdb/char_encoding.rb, and with my limited ruby skills, I got a dump of the offending byte sequence by adding this code:

                Puppet.warning "Ignoring invalid UTF-8 byte sequences in data to be sent to PuppetDB"
                File.open('/tmp/ruby', 'w') { |file| file.write(str) }
                Puppet.warning "THOMAS: is here"

                (I tend to use my name in debugging to have something easy to grep for, and I wanted some verification that the File dump wasn’t triggering any errors)
                It took a little time at 3AM to remember where these /tmp files end up thanks to systemd, but once found, I saw it was a json blob with a command to “replace catalog”. That could explain why my puppetdb didn’t have any catalogs for other hosts. But file told me this was a plain ASCII file, so that didn’t help me narrow it down.
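                For anyone hitting the same 3AM puzzle: with systemd’s PrivateTmp=true, a service’s /tmp is really a subdirectory on the host, under /tmp/systemd-private-<boot-id>-<unit>-<random>/tmp. A minimal sketch – the unit name below is made up, and the layout is simulated with mktemp so it runs anywhere:

```shell
# Simulate the directory layout PrivateTmp creates on the host.
base=$(mktemp -d)
mkdir -p "$base/systemd-private-b00t1d-puppetserver.service-AbCdEf/tmp"
echo '{"command":"replace catalog"}' \
  > "$base/systemd-private-b00t1d-puppetserver.service-AbCdEf/tmp/ruby"

# The dump the service wrote to "/tmp/ruby" is found like this:
find "$base" -path '*systemd-private*' -name ruby
```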

                I brute forced it by just checking my whole puppet tree:

                find . -type f -exec file {} \; > /tmp/puppetfile
                grep -v ASCII /tmp/puppetfile | grep -v git

                This turned up a few UTF-8 candidates. Googling around, I was reminded of how terrible utf-8 handling was in ruby 1.8, and saw that puppet recommended sticking to ASCII in most manifests and files to avoid issues.

                It turned out to be a config from a webalizer module:

                webalizer/templates/webalizer.conf.erb: UTF-8 Unicode text

                While it was written by a Jesús with a unicode name, the file itself didn’t contain his name, and I couldn’t easily see where the UTF-8 characters were hiding. One StackOverflow post later, I had nailed it down – UTF-8 spaces!

                00004ba0 2e 0a 23 c2 a0 4e 6f 74 65 20 66 6f 72 20 74 68 |..#..Note for th|
                00004bb0 69 73 20 74 6f 20 77 6f 72 6b 20 79 6f 75 20 6e |is to work you n|

                The offending character is c2 a0 – the non-breaking space.

                I have no idea how that slipped into a comment in a config file, but I changed the spaces and got rid of the error.
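                A quicker way to hunt for these than running file over the whole tree is to grep for the byte pair directly. A sketch assuming GNU grep and GNU sed (the \xNN escapes are GNU extensions); the demo file here stands in for the real webalizer template:

```shell
# Create a demo config containing a non-breaking space (0xC2 0xA0).
demo=$(mktemp)
printf 'Option value\xc2\xa0# comment hiding an nbsp\n' > "$demo"

# List files containing the byte pair (GNU grep -P understands \xNN).
grep -rlP '\xC2\xA0' "$demo"

# Replace them with plain ASCII spaces in place (GNU sed).
sed -i 's/\xc2\xa0/ /g' "$demo"

# No matches left; grep -c now reports 0 (and exits non-zero).
grep -cP '\xC2\xA0' "$demo" || true
```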

                Puppet’s error was vague and provided no context whatsoever (where do the bytes come from? dump the parseable part? dump the hex representation? tell me the position of the problem?), gave no indication of the potential impact, and in a sea of spurious puppet warnings that you simply have to live with, it is easy to miss. One down.

                However, still no catalogs on the server, so still only one host being monitored. What next?

                users, groups, and permissions

                My next lead turned out to be my own fault. After temporarily turning off SELinux and checking all permissions on puppetdb’s files to make sure they were group-owned by puppetdb and writable for puppet, I took the last step of switching to that user and trying to write the log file myself. It failed. Huh? Then id told me why – while /var/log/puppetdb/ was group-writable and owned by the puppetdb group, my puppetdb user was actually in the www-data group.

                It turns out that I had tried to move some uids and gids around after puppet’s automatic assignment gave different results on two hosts (a problem I still don’t have a satisfying answer for, as I don’t want to hard-code uids/gids for system accounts in other people’s modules), and clearly I got one of them wrong.

                I think a server that for whatever reason cannot log should simply not start, as this is a critical error if you want a defensive system.

                After fixing that properly, I now had a puppetdb log file.
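                The general lesson: checking directory modes isn’t enough, the service account’s group list has to match too. A sketch of the checks – the puppetdb account and the usermod fix are left as comments, since they assume the real host:

```shell
# On the real host you would inspect the service account directly:
#   id puppetdb                   # uid, primary gid, supplementary groups
#   usermod -g puppetdb puppetdb  # repair a wrong primary group
# Demonstrated with the current user so this runs anywhere:
id -un    # who am I
id -Gn    # which groups a file write would be checked against
```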

                resource titles

                Now I was staring at an actual exception:

                2016-10-09 14:39:33,957 ERROR [c.p.p.command] [85bae55f-671c-43cf-9a54-c149cedec659] [replace catalog] Fatal error on attempt 0
                java.lang.IllegalArgumentException: Resource '{:type "File", :title "/var/lib/puppet/concat/thomas_vimrc/fragments/75_thomas_vimrc-\" allow adding additional config through .vimrc.local_if filereadable(glob(\"~_.vimrc.local\"))_\tsource ~_.vimrc.local_endif_"}' has an invalid tag 'thomas:vimrc-" allow adding additional config through .vimrc.local
                if filereadable(glob("~/.vimrc.local"))
                source ~/.vimrc.local
                '. Tags must match the pattern /\A[a-z0-9_][a-z0-9_:\-.]*\Z/.
                at com.puppetlabs.puppetdb.catalogs$validate_resources.invoke(catalogs.clj:331) ~[na:na]

                Given the name of the command (replace catalog), I felt certain this was going to be the problem standing between me and multiple hosts being monitored.

                The problem was a few levels deep, but essentially I had code creating fragments of vimrc files using the concat module, and was naming the resources with file content as part of the title. That’s not a great idea, admittedly, but no other part of puppet had ever complained about it before. Even the files on my file system that store the fragments, which get their names from these titles, were happily stored with a double quote in their names.
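                A hedged sketch of the kind of fix involved: give the fragment a short, stable title and keep the vimrc text in the content parameter instead. The title and content below are illustrative, not my actual manifest:

```puppet
concat::fragment { 'thomas_vimrc_local':
  target  => '/home/thomas/.vimrc',
  order   => '75',
  content => "\" allow adding additional config through .vimrc.local\nif filereadable(glob(\"~/.vimrc.local\"))\n\tsource ~/.vimrc.local\nendif\n",
}
```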

                So yet again, puppet’s lax approach to typing variables at any of its layers (hiera, puppet code, ruby code, ruby templates, puppetdb) and in any of its data formats (yaml, json, bytes-for-strings without encoding information) triggers errors somewhere in the stack without informing whatever triggered them (i.e. the agent run on the client didn’t complain or fail).

                Once again, puppet has given me plenty of reasons to hate it with a passion, tipping the balance.

                I couldn’t imagine doing server management without a tool like puppet. But you love it when you don’t have to tweak it much, and you hate it when you’re actually making extensive changes. Hopefully after today I can get back to the loving it part.
