GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

April 24, 2017

GNOME To Do 3.24 release, and it’s shining

GNOME To Do is a personal task manager for GNOME. It uses GNOME technologies and integrates very well with the desktop. And now, it’s finally being released!

The 3.24 version comes with a few nice features and, most importantly, a whole load of bugfixes. Let's get started!

Autostart & notifications

GNOME To Do now (optionally) autostarts once you log in and keeps track of your tasks! This is what I see now when I log into my laptop:

Thanks for letting me know, To Do!

To Do also detects when you suspend your machine, and if you resume the next day, another notification is triggered. Isn't it nice when your task management tool allows you to focus more on actually working on your tasks than on managing them? I certainly love not having to worry about the tasks for the next day 🙂

This behavior, of course, is optional and configurable. There is a new “Run in background” plugin that allows you to fine-tune this behavior:

Editing the “Run in background” plugin options

Improved panels

A lot of bugfixes landed for the Today, Scheduled and Unscheduled panels. The most important one, in my opinion, is the ability to select which tasklist the new task will be added to. Check this out:

Selecting the tasklist in the Today panel

These panels also don't show the date anymore. It's not very useful to show “Today” on every task when you're already in the Today panel, right? The same logic applies to the Scheduled panel, where you can already see the date:

The Scheduled panel: it looks very nice!

Todo.txt integration

Thanks to our awesome new contributor, Rohit Kaushik, we now have Todo.txt integration! This is disabled by default (it still needs some polishing), but you can easily test it by running the autogen script like this:

$ ./autogen.sh --enable-todo-txt-plugin

Now you can just enable it in the Extensions dialog:

Enabling the Todo.txt plugin

Select a Todo.txt file (you have to create one manually if you don't have one):

Selecting a Todo.txt file

And you can use it just like a regular tasklist!

Todo.txt + GNOME To Do: it works 🙂

Thanks Rohit!

A few considerations

A few new features were showcased in this blog post, but to me, the most important change is the stability of GNOME To Do. I've been using it regularly for the past couple of weeks and it's pretty stable, but I know that bugs are there, just waiting for someone to trigger them.

If you see To Do crashing or behaving oddly, please file a bug against the gnome-todo product in GNOME Bugzilla. And make sure to join the #gnome-todo room on GNOME IRC and say hi. With your help, we can make GNOME To Do the best personal task manager out there!

2017-04-24 Monday.

  • Mail chew, consultancy call with Miklos. Submitted a brief paper for GUADEC deadline today.

About the Fedora and GNOME workshop at Flisol Lima Norte 2017

Yesterday we celebrated Flisol 2017 at UPN Lima Norte, as announced here.

Thanks to the organisers who invited us to give a workshop on GNU/Linux commands.

The workshop started around 9:00, and we were surprised that attendees arrived before 9 to get a seat. I did a review of the history of GNU/Linux and then an introduction to Fedora and GNOME applications. I also noticed that people who had been at previous events such as the Install Fest were there to learn more about the projects and, of course, about commands in the terminal (as published in the Flisol ad).

I must thank the volunteers who helped the attendees complete the tasks in the terminal. Newbies sometimes stumble over basic principles such as indentation, use of Vim, spaces and case sensitivity, and if you miss one step it can leave you with .swp files, among other issues. Students from different universities where GNU/Linux is not so popular were eager to learn, and the three hours of the workshop were not enough. There were very talented people who easily picked up the commands and answered the questions and basic exercises.

I understand that everyone has her or his own pace, so I invited the audience to complete the Backtrackacademy course on introduction to GNU/Linux using Fedora + GNOME for free. As usual, I gave gifts to the participants when they answered my questions correctly.

This is the final photo of the group after the workshop at Flisol Lima Norte 2017!

I also had the chance to wave to Fabio Duran from GNOME Chile, who gave an online talk ❤

Even though his face is not at all clear in the picture, he is always supporting the Peruvian work!

At the end we shared a mini buffet. Thanks to the organisers again for this great experience at UPN, and we hope that the Linux UPN community will grow!

Nuritzi, the President of the GNOME board, was also online to send nice greetings to Peru.

and… some more pictures in the mini gallery below… enjoy!



April 23, 2017

2017-04-23 Sunday.

  • Fine breakfast, checked-out, swam & sauna-ified; off to pick up the babes - who had had a fine time with B&A. Lovely lunch, drove home. Picked up H. from Lynn's, back for some songs & bible study together - tea, put H. and others to bed. Watched Line of Duty - gripping.

Drag-and-drop in lists

I’ve recently had an occasion to implement reordering of a GtkListBox via drag-and-drop (DND). It was not that complicated. Since I haven’t seen drag-and-drop used much with list boxes, here is a quick summary of what is needed to get the basics working.

Setting up the drag source

There are two ways to make a GTK+ widget a drag source (i.e. a place where clicking and dragging will initiate a DND operation). You can decide dynamically to initiate a drag by calling gtk_drag_begin(). But we go for the simpler approach here: we just declare statically that our list rows should be drag sources, and let GTK+ handle all the details:

handle = gtk_event_box_new ();
gtk_container_add (GTK_CONTAINER (handle),
        gtk_image_new_from_icon_name ("open-menu-symbolic", 1));
gtk_drag_source_set (handle,
        GDK_BUTTON1_MASK, entries, 1, GDK_ACTION_MOVE);

Note that I choose to create a visible drag handle here instead of allowing drags to start anywhere on the row. It looks like this:

The entries tell GTK+ what data we want to offer via drags from this source. In our case, we will not offer a standard mime type like text/plain, but instead make up our own, private type, and also hint GTK+ that we don’t want to support drags to other applications:

static GtkTargetEntry entries[] = {
   { "GTK_LIST_BOX_ROW", GTK_TARGET_SAME_APP, 0 }
};

A little gotcha here is that the widget you set up as drag source must have a GdkWindow. A GtkButton or a GtkEventBox (as in this example) will work. GTK4 will offer a different API to create drag sources that avoids the need for a window.

With this code in place, you can already drag your rows, but so far, there's nowhere to drop them. Let's fix that.

Accepting drops

In contrast to drags, where we created a visible drag handle to give users a hint that drag-and-drop is supported, we want to just accept drops anywhere in the list. The easiest way to do that is to just make each row a drop target (i.e. a place that will potentially accept drops).

gtk_drag_dest_set (row,
        GTK_DEST_DEFAULT_ALL, entries, 1, GDK_ACTION_MOVE);

The entries are the same that we discussed above. GTK_DEST_DEFAULT_ALL tells GTK+ to handle all aspects of the DND operation for us, so we can keep this example simple.

Now we can start a drag on the handle, and we can drop it on some other row. But nothing happens after that. We need to do a little bit of extra work to make the reordering happen. Let's do that next.

Transferring the data

Drag-and-drop is often used to transfer data between applications. GTK+ uses a data holder object called GtkSelectionData for this. To send and receive data, we need to connect to signals on both the source and the target side:

g_signal_connect (handle, "drag-data-get",
        G_CALLBACK (drag_data_get), NULL);
g_signal_connect (row, "drag-data-received",
        G_CALLBACK (drag_data_received), NULL);

On the source side, the drag-data-get signal is emitted when GTK+ needs the data to send it to the drop target. In our case, the function will just put a pointer to the source widget in the selection data:

gtk_selection_data_set (selection_data,
        gdk_atom_intern_static_string ("GTK_LIST_BOX_ROW"),
        32,
        (const guchar *)&widget,
        sizeof (gpointer));
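
For reference, here is what the complete handler could look like, wrapping the call above; the handler name matches the g_signal_connect() call earlier, but the exact shape is just a sketch:

static void
drag_data_get (GtkWidget        *widget,
               GdkDragContext   *context,
               GtkSelectionData *selection_data,
               guint             info,
               guint             time,
               gpointer          data)
{
  /* Store a pointer to the drag handle; the drop side will find
   * the row from it with gtk_widget_get_ancestor() */
  gtk_selection_data_set (selection_data,
          gdk_atom_intern_static_string ("GTK_LIST_BOX_ROW"),
          32,
          (const guchar *)&widget,
          sizeof (gpointer));
}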

On the target side, drag-data-received is emitted on the drop target when GTK+ passes the data it received on to the application. In our case, we will pull the pointer out of the selection data, and reorder the row.

handle = *(gpointer*)gtk_selection_data_get_data (selection_data);
source = gtk_widget_get_ancestor (handle, GTK_TYPE_LIST_BOX_ROW);

if (source == target)
  return;

source_list = gtk_widget_get_parent (source);
target_list = gtk_widget_get_parent (target);
position = gtk_list_box_row_get_index (GTK_LIST_BOX_ROW (target));

g_object_ref (source);
gtk_container_remove (GTK_CONTAINER (source_list), source);
gtk_list_box_insert (GTK_LIST_BOX (target_list), source, position);
g_object_unref (source);

The only trick here is that we need to take a reference on the widget before removing it from its parent container, to prevent it from getting finalized.
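
Spelled out as a complete handler, with the declarations the snippet above leaves implicit, it could look roughly like this; target here is simply the row the signal was emitted on:

static void
drag_data_received (GtkWidget        *target,
                    GdkDragContext   *context,
                    gint              x,
                    gint              y,
                    GtkSelectionData *selection_data,
                    guint             info,
                    guint             time,
                    gpointer          data)
{
  GtkWidget *handle, *source;
  GtkWidget *source_list, *target_list;
  int position;

  /* Recover the handle pointer stored in drag_data_get() */
  handle = *(gpointer*)gtk_selection_data_get_data (selection_data);
  source = gtk_widget_get_ancestor (handle, GTK_TYPE_LIST_BOX_ROW);

  if (source == target)
    return;

  source_list = gtk_widget_get_parent (source);
  target_list = gtk_widget_get_parent (target);
  position = gtk_list_box_row_get_index (GTK_LIST_BOX_ROW (target));

  /* Keep the row alive while it is between containers */
  g_object_ref (source);
  gtk_container_remove (GTK_CONTAINER (source_list), source);
  gtk_list_box_insert (GTK_LIST_BOX (target_list), source, position);
  g_object_unref (source);
}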

And with this, we have reorderable rows. Yay!

As a final step, let's make it look good.

A nice drag icon

So far, during the drag, you see just the cursor, which is not very helpful and not very pretty. The expected behavior is to drag a visual representation of the row.

To make that happen, we connect to the drag-begin signal on the drag source:

g_signal_connect (handle, "drag-begin",
        G_CALLBACK (drag_begin), NULL);

…and do some extra work to create a nice ‘drag icon’:

row = gtk_widget_get_ancestor (widget, GTK_TYPE_LIST_BOX_ROW);
gtk_widget_get_allocation (row, &alloc);
surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32,
                                      alloc.width, alloc.height);
cr = cairo_create (surface);
gtk_widget_draw (row, cr);

gtk_drag_set_icon_surface (context, surface);

cairo_destroy (cr);
cairo_surface_destroy (surface);

This looks more complicated than it is: we create a cairo surface of the right size and render the row widget to it (the signal is emitted on the handle, so we have to find the row as an ancestor).

Unfortunately, this does not yet yield a perfect result, since list box rows generally don’t render a background or frame. To work around that, we can temporarily add a custom style class to the row’s style context, and use some custom CSS to ensure we get a background and frame:

context = gtk_widget_get_style_context (row);
gtk_style_context_add_class (context, "drag-icon");
gtk_widget_draw (row, cr);
gtk_style_context_remove_class (context, "drag-icon");
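
As for the CSS itself, something along these lines, loaded through a GtkCssProvider, is enough to give the drag icon a plain background and a frame (the colors here are just an example):

provider = gtk_css_provider_new ();
gtk_css_provider_load_from_data (provider,
        ".drag-icon { background-color: white; border: 1px solid gray; }",
        -1, NULL);
gtk_style_context_add_provider_for_screen (gdk_screen_get_default (),
        GTK_STYLE_PROVIDER (provider),
        GTK_STYLE_PROVIDER_PRIORITY_APPLICATION);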

As an extra refinement, we can set an offset on the surface, to prevent a visual ‘jump’ at the beginning of the drag, by placing this code before the gtk_drag_set_icon_surface() call:

gtk_widget_translate_coordinates (widget, row, 0, 0, &x, &y);
cairo_surface_set_device_offset (surface, -x, -y);
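
Putting the pieces together, the whole drag-begin handler ends up looking roughly like this (assembled from the snippets above, nothing new):

static void
drag_begin (GtkWidget      *widget,
            GdkDragContext *context,
            gpointer        data)
{
  GtkWidget *row;
  GtkAllocation alloc;
  cairo_surface_t *surface;
  cairo_t *cr;
  int x, y;

  /* The signal is emitted on the handle, so find the row */
  row = gtk_widget_get_ancestor (widget, GTK_TYPE_LIST_BOX_ROW);
  gtk_widget_get_allocation (row, &alloc);
  surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32,
                                        alloc.width, alloc.height);
  cr = cairo_create (surface);

  /* Temporarily style the row so it renders with a background and frame */
  gtk_style_context_add_class (gtk_widget_get_style_context (row), "drag-icon");
  gtk_widget_draw (row, cr);
  gtk_style_context_remove_class (gtk_widget_get_style_context (row), "drag-icon");

  /* Offset the icon so the row doesn't visually jump under the pointer */
  gtk_widget_translate_coordinates (widget, row, 0, 0, &x, &y);
  cairo_surface_set_device_offset (surface, -x, -y);

  gtk_drag_set_icon_surface (context, surface);

  cairo_destroy (cr);
  cairo_surface_destroy (surface);
}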


Voila!

Next steps

This article just shows the simplest possible setup for row reordering by drag-and-drop. Many refinements are possible, some easy and some not so easy.

An obvious enhancement is to allow dragging between different lists in the same application. This is just a matter of being careful about the handling of the list widgets in the drag_data_received() call, and the code I have shown here should already work for this.

Another refinement would be to drop the row before or after the target row, depending on which edge is closer. Together with this, you probably want to modify the drop target highlighting to indicate the edge where the drop will happen. This could be done in different ways, but all of them will require listening to drag-motion events and juggling event coordinates, which is not something I wanted to get into here.

Finally, scrolling the list during the drag. This is important for long lists, if you want to drag a row from the top to bottom – if the list doesn’t scroll, you have to do this in page increments, which is just too cumbersome. Implementing this may be easiest by moving the drag target to be the list widget itself, instead of the individual rows.


April 22, 2017

Javascript news from GNOME 3.24

Welcome back to the latest news on GJS, the Javascript engine that powers GNOME Shell, Endless OS, Polari, GNOME Documents, and many other apps.

GNOME 3.24 has been out for about three weeks now, and with it went GJS 1.48.0. Here's what's new!

Javascript upgrade!

First of all, we have a more modern Javascript engine. GJS is based on Mozilla’s SpiderMonkey, the same Javascript engine that runs in the Firefox browser. Back in GNOME 3.22, GJS was based on version 24, which was released in September 2013. Now we’ve moved to version 38, which although still old, was released almost two years later in May 2015.

(The number of each SpiderMonkey release increases by 7 each time, because they make a standalone SpiderMonkey release for each Extended Support Release of Firefox, which is one out of every 7. That’s why you might also hear them referred to as “ESR 38”, etc.)

This brings a lot of new Javascript language features with it. Here are some of the ones I’m most excited about.

Promises

Promises allow you to do asynchronous operations (like reading files, or waiting, or fetching things from the network) in a much more intuitive way. With Promises, the code reads from top to bottom as if it were synchronous, instead of from nested level to nested level (often called “callback hell”).

Here’s an example, a Promises version of examples/gio-cat.js that’s included in GJS’s source distribution:

This is much longer than the original program, but only the lower part of the program is actually the equivalent of the old callback-based code. The top part would ideally be provided by GJS itself. I’m still figuring out what is the best API for wrapPromise but it’s definitely a candidate for including in a future version of GJS.

This code calls loadContents, prints the contents, and exits the main loop. If an exception is thrown anywhere in the chain before .catch, then the function provided to the catch call will log the error message. In any case, no matter whether the operation succeeded or not, the last then call will make sure the main loop exits.

Template literals

Template literals will change your life if you work with text in your GJS program. They are regular strings in backticks, with interpolation. Say goodbye to this:

const Format = imports.format;
String.prototype.format = Format.format;
log("%s, %s!".format(greeting, name));

Also say goodbye to this:

log(greeting + ", " + name + "!");

Instead, from now on you’ll do this:

log(`${greeting}, ${name}!`);

It’s a lot more readable and intuitive.

Template literals can also cover more than one line, and they do real interpolation of expressions too, not just variable names:

const CSS = `
label {
    font-size: ${fontdesc.get_size()};
}`;

You can also “tag” templates which is out of scope of this blog post, but there is one built-in tag which serves the same purpose as r'' string literals do in Python:

String.raw`I'm writing some \LaTeX\ code here
and I \textbf{don't} want to deal with escaping it:
\[ E = mc^2 \]`

Generators

Generators are a great addition to the Javascript toolbox. They were actually already available in GJS, but only in Mozilla’s nonstandard extension form. They are introduced with the function* keyword instead of function, and they work a lot like Python’s generators. Here’s an example, implementing the xrange() function similar to the one in Python using a generator:

function* xrange(limit) {
    for(let count = 0; count < limit; count++)
        yield count;
}

The yield statement returns control back to the caller, while preserving the state of the generator until the next call. You can get all the values one by one, calling a generator’s next() method, but for...of loops will also deal with generators:

for (let ix of xrange(5))
    print(`Counting from 0 to 4: ${ix}`);

If you want to empty a generator into an array, you can also use the spread operator: [...xrange(5)] will give you an array of numbers from 0 to 4.

Here’s a more complicated example showing the yield* statement which allows you to compose more than one generator:

This code looks at the directory that it's given and prints all the files in it that are not themselves directories (the “leaf nodes”). If one of the files is a directory, it will descend into that directory and repeat the process, thanks to yield*.

Want to know more?

Since there’s a lot more than I can cover in a comfortably readable blog post, I made a slide deck. I tried to put it together in such a way that you can use it as reference material.

For more information on all of these cool things, I highly recommend this “ES6 Explained” series of posts from the Mozilla Hacks blog. Some of these features, such as classes and modules, are still to come in GJS.

Maintainer life

The Javascript engine upgrade was the major feature, but I also spent some time on making things easier for myself as the maintainer. A well-tended garden will hopefully attract more gardeners. Happily, some other people joined in for this part.

I cleaned up the build system, using more modern and concise Autotools code. I also spent some time cleaning up compiler warnings, both on GCC and Clang. Now the build and test runs are faster, and the cleaner output makes it much easier to see when something goes wrong. I also made sure that GJS builds on macOS, or at least it did until my Apple hardware broke down. Chun-wei Fan made some improvements that ensure GJS builds on Windows with MSVC. Claudio André implemented continuous integration in a Docker container, with the intention to run it on Travis CI, but sadly we do not have permission to flip the bit to get Travis to build it.

Having written Jasmine GJS in order to bring some of that convenient unit testing technology from the Node world into GJS applications, I also wanted to use it for writing GJS’s own unit tests. I couldn’t use it directly because that would have been a circular dependency, of course, but I embedded a copy of upstream Jasmine plus a very stripped-down version of Jasmine GJS, and called it “Minijasmine”. It’s now a lot easier, and dare I say less of a drag, to write unit tests for GJS. Accordingly, we’ll now try to cover every bug fix with a regression test.

And I worked on getting the bug tracker down to a less daunting number of bugs. It was fun to make the bug chart in my last post, so here’s another one: this is the number of open bugs during the release cycle from 1.46.0 to 1.48.0.


You can definitely see that November Bug Squash Month had an effect.

Unfortunately the chart will not look like this again next time around. The big drop was me closing all the obsolete or already-fixed bugs during November Bug Squash Month. We are down from about 160 to about 100 bugs, but those were all the easy ones; there are only hard ones left now.

Thanks

Thanks to everyone who participated to bring GJS to GNOME 3.24: Chun-wei Fan, Claudio André, Florian Müllner, Alexander Larsson, Iain Lane, Jonh Wendell, and Lionel Landwerlin.

As well, this release incorporated a lot of patches that people contributed a long time ago, even up to 8 years, that for various reasons had not been reviewed yet. (Many from emeritus GJS maintainers!) Thanks to those people for participating in the past, and I’m glad we were able to finally bring your contributions into the project: Giovanni Campagna, Jasper St. Pierre, Sam Spilsbury, Havoc Pennington, Joe Shaw, Paolo Borelli, Shawn Walker, and Tim Lunn.

Luke Jones and Hussam Al-Tayeb identified a serious memory leak right before the final 1.48.0 release and without their contribution, it would have been a different and much sadder story. As it was, 1.48.0 still contained another serious bug that made GNOME Shell quite unusable for an unlucky few people. Thanks to Georges Stavracas for rewriting a happy ending for 1.48.2.

Special thanks to Cosimo Cecchi, for reviewing almost every single line of the code I wrote for this release: about 20000 lines, many of them boring and repetitive.

Thanks also to my employer Endless which sponsored most of the Javascript engine upgrade, and a good chunk of miscellaneous bug fixing time.

Looking forward

My next post will be about what’s to come in GJS for GNOME 3.26.


April 21, 2017

New chapter in my GLib/GTK+ getting started guide

It’s been a long time since the last chapter. I was busy with various programming projects as can be seen on this blog. But it’s important to share our knowledge. And a book scales much better than explaining again and again the same things to newcomers on IRC or mailing lists.

The guide follows a bottom-up approach. The last chapter was about writing semi-OOP classes in C. The new chapter is a small introduction to GObject. The next chapter, which I've already started to write, will finally be about GTK+; hopefully I'll finish it soon.

Everything is on the webpage of “The GLib/GTK+ Development Platform – A Getting Started Guide”.

Happy learning ;)

GUADEC call for talks ends this Sunday, 23rd April

GUADEC 2017 is just over three months away, which is a very long time in the future and leaves lots of time to organise everything (at least that’s what I keep telling myself).

However, the call for papers is closing this Sunday so if you have something you want to talk about in front of the GNOME community and you haven’t already submitted a talk then please head to the registration site and do so!

Once the call for papers closes, the Papers Committee will fetch their ceremonial robes and make their way to a cave deep in the Peak District for two weeks. There they will drink fresh spring water, hunt grouse on the moors and study your talk submissions in great detail. When two weeks is up, their votes are communicated back to Manchester using smoke signals and by Sunday 7th May you’ll be notified by email if your talk was accepted. From there we can organise travel sponsorship and finalize the schedule of the first 3 days of the conference, which should be available late next month.

We’ll put a separate call out for BoF sessions, workshops, and tutorial sessions to take place during the second half of GUADEC — the 23rd April deadline only applies to talks.


Netflix doesn’t block Fedora users any more!

Two weeks ago, I blogged about the fact that Netflix was blocking Chrome and Firefox with Fedora user agents, although those browsers are now officially supported on Linux. The blog post got a lot of publicity, almost 5000 hits, and I was even accused of creating clickbait on reddit 🙂 But it led to the desired result: solving the issue.

Someone pointed me to Paul Adolph from Netflix. He no longer works in the department responsible for user agent filtering, but he was very helpful and forwarded the issue to the responsible engineers. They never told me why they were blocking Fedora (and, as it turned out, other distributions such as CentOS, Debian and openSUSE too), but they promised to fix it within the next couple of weeks. I assume it was just an outdated user agent filter.

I tested it today and it seems to be fixed, both for Chrome and Firefox. And not only for Fedora, but also for other distributions (I tested CentOS, Debian, and openSUSE). So now you can watch Netflix on Fedora without any user agent tweaking. Just keep in mind that for Firefox you need to install ffmpeg, which Firefox uses for media playback; Chrome should work out of the box.

I’d like to thank Netflix for resolving the situation pretty quickly.


April 20, 2017

Meson considerations

A post with my GNOME release team hat on…

Meson is new and cool

A number of GNOME modules are switching to meson for 3.26. I myself was an early adopter for this: recipes has had meson build support since the beginning of the year, and after the 1.0 release, I’ve dropped autotools support on the master branch.

autotools are of course very familiar to most of us, and we know how to get most things done there. But it often isn't pretty, and using meson feels like a breath of fresh air. Others have been praising meson for its simplicity, ease of use and speed, so I am not going to repeat that here.

Supported tools

jhbuild is our traditional build support tool, and it has had well-working meson support for a while now. GNOME Builder and flatpak-builder also both support meson, and we include meson in the GNOME SDK for 3.24.

So, for developers, meson support is more or less there, and working well.

Transition woes

So, things are pretty awesome all around: we have a new build system, it is shiny and fast and supported. Sounds too good to be true. What's the catch?

One thing that meson does not do is building traditional ‘make dist’ style tarballs. The premise is that you can just build your software from a git tag or from a snapshot produced by git-archive.

While that is true, and maybe a direction we want to be going in for the future, there are plenty of build systems out there that expect you to provide a tarball or similar archive. That is true for Fedora's koji, and it is also true for the form in which we currently produce GNOME releases.

A GNOME release is essentially defined by a jhbuild module file (several of them, in fact) which refers to release tarballs for all of our modules, including checksums and sizes.  For core GNOME modules, these tarballs are generally put in place using a tool called ftpadmin. As I’ve recently found out, ftpadmin is a little picky. It expects the content in the tarball to be in a directory that’s named in module-version style and will error out if that is not the case.

Thankfully, git archive is up to the task. Here is what I did to produce a recent gnome-recipes release:

git tag -m 1.2.0 1.2.0
git archive --prefix gnome-recipes-1.2.0/ \
            -o gnome-recipes-1.2.0.tar.xz \
            1.2.0

Some unsolved problems remain. For example, we have not decided on how to handle library documentation in the new meson world. The way this works with autotools is that the tarballs include generated docs, which get extracted and post-processed by some scripts before they end up on developer.gnome.org. But git archive snapshots contain no generated documentation…  So far, no library that we host documentation for has made the jump to meson-only builds, so we still have some time to come up with a different solution.

Overall, I am really excited that we are embracing meson!

Update: My discussion of archives failed to consider git submodules. git-archive does not handle those, so my recommendation will not produce a working snapshot if you use submodules. See nautilus’ make_release.sh script for how to handle that.

Fedora 26 not connecting to wireless

This is a quick hint in case you suffer from the same issue I had while installing Fedora 26 (alpha).

The installer didn’t manage to connect to my wireless router, a D-Link one. More specifically, it was not getting an IP address from the router. Some problem with DHCP it seems.

If that’s the case, open a terminal (press the Windows key, then type “Terminal” and hit Enter), and type the following command:

sudo sh -c 'echo "send dhcp-client-identifier = hardware;" > /etc/dhcp/dhclient.conf'

Then reconnect to your wireless network. After the first boot – when the system is installed – repeat the command above, just once.

By the way, this is not a new issue. I experienced it on Fedora 25, but curiously, at that time the installer (the live system, actually) worked fine; just the installed system suffered from it. Now, with F26, it has happened from the beginning. Here's the bugzilla entry: https://bugzilla.redhat.com/show_bug.cgi?id=1154200.

Hope that helps, happy Fedora!

Atreus: Building a custom ergonomic keyboard

As mentioned in my Working on Android post, I’ve been using a mechanical keyboard for a couple of years now. Now that I work on Flowhub from home, it was a good time to re-evaluate the whole work setup. As far as regular keyboards go, the MiniLa was nice, but I wanted something more compact and ergonomic.

The Atreus keyboard

My new Atreus

Atreus is a 40% ergonomic mechanical keyboard designed by Phil Hagelberg. It is an open hardware design, but he also sells kits for easier construction. From the kit introduction:

The Atreus is a small mechanical keyboard that is based around the shape of the human hand. It combines the comfort of a split ergonomic keyboard with the crisp key action of mechanical switches, all while fitting into a tiny profile.

My use case was also quite travel-oriented. I wanted a small keyboard that would enable me to work with it also on the road. There are many other small-ish DIY keyboard designs like Planck and Gherkin available, but Atreus had the advantage of better ergonomics. I really liked the design of the Ergodox keyboard, and Atreus essentially is that made mobile:

I found the split halves and relatively large size (which are fantastic for stationary use at a desk) make me reluctant to use it on the lap, at a coffee shop, or on the couch, so that’s the primary use case I’ve targeted with the Atreus. It still has most of the other characteristics that make the Ergodox stand out, like mechanical Cherry switches, staggered columns instead of rows, heavy usage of the thumbs, and a hackable microcontroller with flexible firmware, but it’s dramatically smaller and lighter

I had the opportunity to try a kit-built Atreus in the Berlin Mechanical Keyboard meetup, and it felt nice. It was time to start the project.

Sourcing the parts

When building an Atreus the first decision is whether to go with the kit or hand-wire it yourself. Building from a kit is certainly easier, but since I’m a member of a hackerspace, doing a hand-wired build seemed like the way to go.

To build a custom keyboard, you need:

  • Switches: in my case 37 Cherry MX blues and 5 Cherry MX blacks
  • Diodes: one 1N4148 per switch
  • Microcontroller: an Arduino Pro Micro on my keyboard
  • Keycaps: started with recycled ones and later upgraded to DSA blanks
  • Case: got a set of laser-cut steel plates

Even though Cherry — the maker of the most common mechanical key switches — is a German company, it is quite difficult to get switches in retail here. Luckily a fellow hackerspace member had just dismantled some old mechanical keyboards, and so I was able to get the switches I needed via barter.

Keyswitches

The Cherry MX blues are tactile clicky switches that feel super-nice to type on, but are quite loud. For modifiers I went with Cherry MX blacks that are linear. This way there is quite a clear difference in feel between keys you typically hold down compared to the ones you just press.

The diodes and the microcontroller I ordered from Amazon for about 20€ total.

Arduino Pro Micro

At first I used a set of old keycaps that I got with the switches, but once the keyboard was up and running I upgraded to a very nice set of blank DSA-profile keycaps that I ordered from AliExpress for 30€. That set came with enough keycaps that I’ll have myself covered if I ever build a second Atreus.

All put together, I think the parts ended up costing me around 100€ total.

Preparations

When I received all the parts, there were some preparation steps to be made. Since the key switches were 2nd hand, I had to start by dismantling them and removing old diodes that had been left inside some of them.

Opening the key switches

The keycaps I had gotten with the switches were super grimy, and so I ended up sending them to the washing machine. After that you could see that they were not new, but at least they were clean.

With the steel mounting plate there had been a slight misunderstanding, and the plates I received were a few millimeters thicker than needed, so the switches wouldn't “click” into place. While this could've been worked around with hot glue, we ended up filing the mounting holes down to the right thickness.

Filing the plate

Little bit of help

Wiring the keyboard

Once the mounting plate was in the right shape, I clicked the switches in and it was time to solder.

All switches in place

Hand-wiring keyboards is not that tricky. You have to attach a diode to each keyswitch, and then connect each row together via the diodes.

Connecting diodes

First row ready

The two thumb keys are wired to be on the same column, but different rows.

All rows ready

Then each column is connected together via the other pin on the switches.

Soldering columns

This is what the matrix looks like:

Completed matrix

After these are done, connect a wire from each column and each row to an I/O pin on the microcontroller.

Adding column wires

If you haven’t done it earlier, this is a good stage to test all connections with a multimeter!

Connecting the microcontroller

Firmware

After finishing the wiring, I downloaded the QMK firmware, changed the pin mapping to match how my Atreus is wired up, switched the layout to Colemak, and the keyboard was ready to go.

Atreus in use

Don’t mind the key labels in the picture above. These are the second-hand keycaps I started with. Since then I’ve switched to blank ones.
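
For the curious, the pin-mapping part of the QMK config.h mentioned above is just a handful of defines. The pins below are only an example of what that could look like on a Pro Micro; the actual values have to match the pins you soldered your row and column wires to, and the diode direction depends on which way your diodes point:

/* Example matrix definition for a hand-wired Atreus (4 rows, 11 columns).
 * These pin assignments are placeholders; use the Pro Micro I/O pins
 * your own rows and columns are actually wired to. */
#define MATRIX_ROWS 4
#define MATRIX_COLS 11

#define MATRIX_ROW_PINS { D0, D1, D2, D3 }
#define MATRIX_COL_PINS { B4, B5, B6, B2, B3, B1, F7, F6, F5, F4, D7 }

/* COL2ROW or ROW2COL, depending on the orientation of your diodes */
#define DIODE_DIRECTION COL2ROW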

USB-C

The default Atreus design has the USB cable connected directly to the microcontroller, meaning that you’ll have to open the case to change the cable. To mitigate that I wanted to add a USB breakout board to the project, and this being 2017, it felt right to go with USB-C.

USB-C breakouts

I found some cheap USB-C breakout boards from AliExpress. Once they arrived, it was time to figure out how the spec works. Since USB-C is quite new, there are very few resources available on how to use it with microcontrollers. These tutorials were quite helpful:

Here is how we ended up wiring the breakout board. After these you only have four wires to connect to the microcontroller: ground, power, and the positive and negative data pins.

USB-C breakout with wiring

This Atreus build log was useful for figuring out where to connect the USB wires on the Pro Micro. Once all was done, I had a custom, USB-C keyboard!

USB-C keyboard

Next steps

Now I have the Atreus working nicely on my new standing desk. Learning Colemak is a bit painful, but the keyboard itself feels super nice!

New standing desk

However, I’d still like to CNC mill a proper wooden case for the keyboard. I may update this post once that happens.

I'm also considering ordering an Atreus kit so I'd have a second keyboard that's always packed for travel. The kit comes with a PCB, which might work better at airport security checks than the hand-wired build.

Another thing that is quite tempting is to make a custom firmware with MicroFlo. I have no complaints about how QMK works, but it'd be super cool to use our visual programming tool to tweak the keyboard live.

Big thanks to Technomancy for the Atreus design, and to XenGi for all the help during the build!

gtkmm 4 progress

gtkmm 4 is alive

We (mostly Kjell Ahlstedt and myself) have been quietly working away on gtkmm 4 and an associated ABI-breaking version of glibmm. We’ve been tracking GTK+ 4 from git master, making sure that gtkmm builds against it, and making various API-breaking or ABI-breaking changes that had been left in the code as TODO comments waiting for a chance like this.

This includes simple ABI-breaking (but not API-breaking) stuff such as adding base classes to widgets or changing the types of method parameters. Many other changes are about updating the code to be more modern C++, trying to do things the right way and avoiding duplication with new API in the C++ Standard Library.

These changes please us as purists but, honestly, as an application developer they aren't going to give you interesting new behaviour in your applications. If you too are enthusiastic about the C++ renaissance, porting to the new APIs might feel rewarding and correct, and you'll have to do it someday anyway to get whatever new features arrive later, but I cannot claim there is a more compelling reason to do the work. You might get shiny new features via GTK+ 4, but so far it feels like a similar exercise in internal cleanup and removal of unloved API.

There are still people stuck on GTK+ 2, because most people only port code when they need to. I don't think the transition to GTK+ 4 or gtkmm 4 will be any faster. Certainly not until the GTK+ 4 porting advice gets a lot better. Porting gtkmm and glom to GTK+ 4 has not been fun, particularly as the trend for API changes without explanation has only increased.

Anyway, here are some of the more significant changes so far in glibmm-2.54 (a horrible ABI name, but there are reasons) and gtkmm-4.0, both still thoroughly unstable and subject to API change:

(Lots of this is currently only in git master but will be in tarball releases soonish, when there are glib and gtk+ releases we can depend on.)

Deprecated API is gone

Anything that was deprecated in gtkmm 3 (or glibmm-2.4) is now completely removed from gtkmm 4 (or glibmm-2.54). You’ll want to build your application with gtkmm 3 with all deprecated API disabled before attempting to build against gtkmm 4.

In some cases this included deprecated base classes that we couldn’t let you optionally disable without breaking ABI, but now they are really really gone.

This is a perfect example of API changes that make us feel better but which are really not of much direct benefit to you as an application developer. It’s for your own good, really.

Glib::RefPtr is now std::shared_ptr

In gtkmm 3, Glib::RefPtr<> is a reference-counting smart pointer, mostly used to hide the manual GObject reference-counting that’s needed in C. It worked well, but C++11 introduced std::shared_ptr so we saw a chance to delete some code and  make our API even more “standard”. So, Glib::RefPtr is now just an alias for std::shared_ptr and we will probably change our APIs to use std::shared_ptr instead of Glib::RefPtr.

Glib::RefPtr is an intrusive smart pointer, meaning that it needs, and uses, a reference count in the underlying object rather than in the smartpointer itself. But std::shared_ptr is non-intrusive. So we now just take one reference when we instantiate the std::shared_ptr and release that one reference when the last std::shared_ptr is destroyed, via its Deleter callback. That means we always need to instantiate the std::shared_ptr via Glib::make_refptr_for_instance(), but we hide that inside glibmm/gtkmm code, and application developers would rarely need to do this anyway.

std::shared_ptr<> is a familiar type, so using it in our API should make it easier for the average person to reason about our API. And using it makes us part of the wider ongoing conversation in the C++ community about how to indicate and enforce ownership. For instance, we might start taking Thing* parameters instead of std::shared_ptr<Thing> parameters when we know that the called method will not need to share ownership. I mentioned this idea in an earlier post. However, we often cannot assume much about what the underlying C function really does.

This is a big change. Hopefully it will work.

Now uses libsigc++-3.0 instead of libsigc++-2.0

I rewrote libsigc++ for modern C++, using variadic templates and some other modern C++ techniques. It feels like libsigc++-2.0, but the code is much simpler. Compiler errors might be slightly less cryptic. This requires C++14.

Enums are inside related classes

Enums that were only used with a particular class are now inside that class. For instance, Gio::ApplicationFlags is now Gio::Application::Flags. This makes the API slightly clearer. This didn’t need C++11, but it did need an API break.

Enums are now C++11 enum classes

Enums are now declared as “enum class” (scoped enumerations) instead of “enum” (unscoped enumerations), so they cannot be implicitly converted to other types such as bool or int. That lets the compiler find some subtle programmer errors.

The enum values must now be prefixed by the enum name rather than having a prefix as part of the value name. For instance, we now use Gio::Application::Flags::HANDLES_OPEN instead of Gio::Application::FLAGS_HANDLES_OPEN (actually Gio::APPLICATION_FLAGS_HANDLES_OPEN before we put enums inside classes).

Gtk::TreeView and Gtk::TextView now have real const_iterators

This lets us make the API more const-correct, requiring fewer arbitrary const_cast<>s in application code.

Removed the old intermediate ListHandle/SListHandle/ArrayHandle/SArrayHandle types

Long ago, the gtkmm API used these intermediate types, wrapping glib types such as GList, GSList, and C arrays, so you didn't need to choose between using a std::list, std::vector, or other standard container. Since gtkmm 3 we decided that this was more trouble than it was worth, and decided to just use std::vector everywhere, but it's only now that we've been able to remove them from the glibmm, pangomm, and atkmm APIs.

A possible change: Not using Glib::ustring

We are still considering whether to replace uses of Glib::ustring in our API with std::string, maybe just keeping Glib::ustring for when people really want to manipulate UTF-8 “characters”. I would much prefer standard C++ to have real UTF-8 support, for instance letting us step through and insert characters in a UTF-8 string, but that doesn’t look like it will happen in the foreseeable future.

Glib::ustring still wraps useful UTF-8 APIs in glib, in a std::string-like API, so we wouldn’t want to remove it.

Also, currently it’s useful to know that, for instance, a gtkmm method that returns a Glib::ustring is giving us a UTF-8 string (such as Gtk::FileChooser::get_current_name()), rather than something of unknown encoding (such as Gtk::FileChooser::get_filename()). We allow implicit conversions, for convenience, so we can’t use the compiler to check for awareness of these encoding differences, but having it in the method signature still feels nicer than having to read a method’s documentation.

April 18, 2017

3 things community managers can learn from the 50 state strategy

This is part of the opensource.com community blogging challenge: Maintaining Existing Community.

There are a lot of parallels between the world of politics and open source development. Open source community members can learn a lot about how political parties cultivate grass-roots support and local organizations, and empower those local organizations to keep people engaged. Between 2005 and 2009, Howard Dean was the chairman of the Democratic National Committee in the United States, and instituted what was known as the “50 state strategy” to grow the Democratic grass roots. That strategy, and what happened after it was changed, can teach community managers some valuable lessons about keeping community contributors. Here are three lessons community managers can learn from it.

Growing grass roots movements takes effort

The 50 state strategy meant allocating scarce resources across parts of the country where there was little or no hope of electing a congressman, as well as spending some resources in areas where there was no credible opposition. Every state and electoral district had some support from the national organization. Dean himself travelled to every state, and identified and empowered young, enthusiastic activists to lead local organizations. This was a lot of work, and many senior Democrats did not agree with the strategy, arguing that it was more important to focus effort on the limited number of races where the resources could make a difference between winning and losing (swing seats). Similarly, for community managers, we have a limited number of hours in the day, and investing in outreach in areas where we do not have a big community already takes attention away from keeping our current users happy. But growing the community, and keeping community members engaged, means spending time in places where the short-term return on that investment is not clear. Identifying passionate community users and empowering them to create local user groups, to staff a stand at a small local conference, or to speak at a local meet-up helps keep them engaged and feeling like part of a greater community, and it also helps grow the community for the future.

Local groups mean you are part of the conversation

Because of the 50 state strategy, every political conversation in the USA had Democratic voices expressing their world-view. Every town hall meeting, local election, and teatime conversation had someone who could argue and defend the Democratic viewpoint on issues of local and national importance. This means that people were aware of what the party stood for, even in regions where that was not a popular platform. It also meant that there was an opportunity to get a feel for how national platform messaging was being received on the ground. And local groups would take that national platform and “adjust” it for a local audience – emphasizing things which were beneficial to the local community. Open source projects also benefit from having a local community presence, by raising awareness of your project to free software enthusiasts who hear about it at conferences and meet-ups. You also have an opportunity to improve your project, by getting feedback from users on their learning curve in adopting and using it. And you have an increasing number of people who can help you understand what messaging resonates with people, and which arguments for adoption are damp squibs which do not get traction, helping you promote your project more effectively.

Regular contact maintains engagement

After Howard Dean finished his term as head of the DNC in 2009, and Debbie Wasserman-Schultz took over as the DNC chair, the 50 state strategy was abandoned, in favour of a more strategic and focussed investment of efforts in swing states. While there are many possible reasons that can be put forward, it is undeniable that the local Democratic party structures which flourished under Dean have lost traction. The Democratic party has lost hundreds of state legislature seats, dozens of state senate seats, and a number of governorships in “red” states since 2009, in spite of winning the presidency in 2012. The Democrats have lost control of the House and the Senate nationally, in spite of winning the popular vote in 2016 and 2012. For community managers, it is equally important to maintain contact with local user groups and community members, to ensure they feel empowered to act for the community, and to give them the resources they need to be successful. In the absence of regular contact, community members are less inclined to volunteer their time to promote the project and maintain a local community.

Summary

Growing local user groups and communities is a lot of work, but it can be very rewarding. Maintaining regular contact, empowering new community members to start a meet-up or a user group in their area, and creating resources for your local community members to speak about and promote your project is a great way to grow the community, and also to make life-long friends. Political organizations have a long history of organizing people to buy into a broader vision and support and promote it in their local communities.

What other lessons can community managers and organizers learn from political organizations?

 

A better March Madness script?

Last year, I wrote an article for Linux Journal describing how to create a Bash script to build your NCAA "March Madness" brackets. I don't really follow basketball, but I have friends that do, so by filling out a bracket at least I can have a stake in the games.

Since then, I realized my script had a bug that prevented any rank 16 team from winning over a rank 1 team. So this year, I wrote another article for Linux Journal with an improved Bash script to build a better NCAA "March Madness" bracket. In brief, the updated script builds a custom random "die roll" based on the relative strength of each team. My "predictions" this year are included in the Linux Journal article.

Since the games are now over, I figured this was a great time to see how my bracket performed. If you followed the games, you know that there were a lot of upsets this year. No one really predicted the final two teams for the championship. So maybe I shouldn't be too surprised if my brackets didn't do well either. Next year might be a better comparison.

In the first round of the NCAA March Madness, you start with teams 1–16 in four regions, so that's 64 teams that compete in 32 games. In that "round of 64," my shell script correctly predicted 21 outcomes. That's not a bad start.

March Madness is single-elimination, so for the second round, you have 32 teams competing in 16 games. My shell script correctly guessed 7 of those games. So just under half were predicted correctly. Not great, but not bad.

In the third round, my brackets suffered. This is the "Sweet Sixteen" where 16 teams compete in 8 games, but my script only predicted 2 of those games.

And in the fourth round, the "Elite Eight" round, my script didn't predict any of the winners. And that wrapped up my brackets.

Following the standard method for how to score "March Madness" brackets, each round has 320 possible points. In round one, assign 10 points for each correctly selected outcome. In round two, assign 20 points for each correct outcome. And so on, double the possible points at each round. From that, the math is pretty simple.

round one:    21 × 10 = 210
round two:     7 × 20 = 140
round three:   1 × 40 =  40
round four:    0 × 80 =   0
total:                   390
My total score this year is 390 points. As a comparison, last year's script (the one with the bug) scored 530 in one instance, and 490 in another instance. But remember that there were a lot of upsets in this year's games, so everyone's brackets fared poorly this year, anyway.

Maybe next year will be better.

Did you use the Bash script to help fill out your "March Madness" brackets? How did you do?

I'm not looking forward to 3.24

Not until it’s fixed.

Those who follow my work are used to reading my “Looking forward” posts, and they appear to be quite popular. This cycle, however, I'm not looking forward to the next GNOME release.

That’s because I’m disappointed.

Very disappointed.

😦

#1 – GNOME Shell

UPDATE: I was told this is an Arch-specific issue.

UPDATE 2: This is not an Arch-specific issue. This is now fixed in bug 781194, and will be available in the next GNOME stable release.

Let's start with GNOME Shell, which is the single most important piece of software for end users. I was super excited about the new Night Light mode, because as you may know, I suffer from insomnia (and have fixed many bugs when I couldn't sleep!).

When it landed on Arch Linux’s gnome-unstable repos, I’m pretty sure I was one of the first ones to test it.

But Shell keeps crashing every ~4 minutes. After 5 or 6 crashes, it logs me out. Needless to say, I lost work multiple times. That’s incredibly annoying.

If anyone is experiencing this, please join us in bug 781194, where we're trying to find out why those crashes are happening. Fortunately, Philip Chimento is super nice and is working day and night to find out what's going on.

But that’s still disappointing.

#2 – WebKit2GTK+

UPDATE: This was a regression in WebKit2GTK+.

I try to use Epiphany. I really try. But no matter how much I try, it fails me every time.

Looks like I can't use the keyboard to handle my Google Inbox mail. That's a show-stopper for me.

#3 – Calendar

UPDATE: Thanks Debarshi for explaining this. He states:

There is a reason why crashes reported by ABRT are marked as ‘private’. Like all backtraces, the ones on those bugs often have passwords and private data in them. I have seen a few over the years. Hence those bugs are only accessible to Fedora contributors by default. Which makes sense, because, as you pointed out, they are Fedora bugs, and users already trust the Fedora community to ship secure software.

Why am I disappointed with my own piece of software? Well, I’m not disappointed with Calendar itself, but with the bugs around it.

It all started with bug 778419.

As you might know, I treat crashes in Calendar as very urgent. Sometimes I stop urgent tasks to fix crashers as quickly as possible. Recently, I received many complaints that Calendar was crashing, but I couldn't reproduce any of them and, worse, the debug logs weren't helpful.

Here enters bug 778419.

Apparently, there is an issue management thing called FAF in Fedora. Nice. And it looks like it catches many bugs. Super nice! But then, why am I disappointed?

Well, it starts with me not being a Fedora user, nor watching Red Hat's bug tracker. GNOME is agnostic to distros, and should stay that way. That's why we have a GNOME Bugzilla instance running, right? So people can report GNOME bugs in… well, the GNOME bug tracker.

But that's OK; I can eventually look at FAF and get some downstream feedback. But here is the catch: apparently, some bugs are private. Isn't it nice when you can't see the issues of your own app? Even nicer when, apparently, downstream doesn't really care to report those issues upstream. This is clearly stated in the bug.

Let me restate this: GNOME is agnostic to distros. I refuse to watch Red Hat’s bug tracker.

#4 – Software

I was super, super excited to try GNOME Software's Flatpak integration. I never really used Software since it (i) does not behave super well on Arch, and (ii) isn't better than Arch's pacman.

But I thought Flatpak would change this scenario. Flatpak is not great at the command line, building a Flatpak repo is still way too hard for humans, and OSTree's progress reports don't really report the progress, but throw random numbers at you to figure out what's going on. But I was hopeful that all we needed was a good UI for it.

Do I have to say how disappointed I was to see Software falling apart when installing and updating Flatpaks?

I won’t waste any paragraph describing how it fails. You just need to have Software, Flatpak and a repository to see the action.

At Last

Some of you might think this is a rage post. It is not. I use GNOME every day, and I have been fixing GNOME every single day for the past 3 or 4 years. I wouldn't use it if I didn't love it. It's indeed the best desktop environment for Linux, to me. And of course I will fix every single issue I described here.

But I think we can do better. Much better. Of course, everyone can come here and say “well, you can fix that by yourself until I don’t”, but is that a good approach to this situation? We’re failing in maintaining and improving the platform, and that’s a serious, collective issue. We have unbelievably good hackers around, it’s not the lack of skilled people, nor resources, that is cracking us.

What can we do to improve?

I'll leave this question broad and open like that, and I'd like to hear the opinions of the community. Let's just try to keep the level of respect acceptable.

By the way: Shell crashed 11 times, and I was kicked out of my session twice, before I could finish this article.

April 17, 2017

Alternate Questions

Is it still in vogue for US tech companies to ask quantitative estimation/implausible-problem questions like "how many phone booths/piano tuners are there in Manhattan?" in hiring interviews, particularly for programming-related jobs? Fog Creek asked me one of those in 2005. There was even a book, How Would You Move Mount Fuji?: Microsoft's Cult of the Puzzle -- How the World's Smartest Companies Select the Most Creative Thinkers.* How many companies are still into that?**

I ask because I came up with a couple you could use, maybe for a digital humanities kind of position:

  1. How many people, throughout history, have actually been named "Flee-From-Sin"? I feel like you see this as a jokey Puritan first name in books like Good Omens or the Baroque Cycle, but was it a name that some non-negligible number of people actually had?
  2. Out of all the people currently within New York City limits, have more of them written a sonnet or a dating profile? What's the ratio?

* That's right, two subtitles. That's how you know you're getting a lot for your $16.00 MSRP.

** It's hard to tell these things sometimes even if you listen to lots of people discuss hiring and recruiting. "Five Worlds" and its decade-later ramifications apply to work culture, not just software development methodology. Stripe's engineering interview aims to "simulate the engineering work you'd do day-to-day" (link via Julia Evans) so I think you can expect your interviewer won't show up wearing a question-mark costume and screeching, "Riddle me this, Batman!" This software engineer, who's just been through scads of hiring interviews, doesn't mention puzzle questions. This level of detail ain't exactly on the "How to Become a Computer Programmer" page in the Occupational Outlook Handbook from the US Department of Labor -- but then again we already knew that the assessment vacuum in software engineering skills is a huge problem.

April 16, 2017

Why don't you just rewrite it in X?

Whenever a new programming language becomes popular its fanpersons start evangelizing its virtues by going to existing projects and filing bug reports that look like this.

Hi, I noticed that this project is written in [programming language X]. You really should just rewrite it in [programming language Y] because it is better at [some feature Z]. Kthxbye!

When put like that this seems like a no-brainer. Being better at Z is good so surely everyone should just port their projects to Y.

Recently there has been a movement to convert tooling used by various software projects in the Gnome stack from a mishmash of shell, Awk and Perl into Python 3. The main reasoning for this is that having only one “scripting” dependency, on a modern and well-maintained project, makes it simpler to compile applications using Gnome technologies on platforms such as Windows. Moving between projects also becomes easier.

One tool undergoing this transition is GTK-doc, which is a documentation generation tool written mostly in Perl. I have been working together with upstream to convert it to Python 3. This has been an educational experience in many ways. One of the first things I learned is that converting between any two languages usually breaks down into three distinct phases.

  1. Manual syntax conversion
  2. Fixing bugs caused by conversion errors
  3. Converting code to idiomatic target language

A Perl to Python conversion is relatively straightforward in the case of gtk-doc. It mostly deals with regular expressions, arrays and dictionaries. All three of these behave pretty much the same in both languages so step one is mostly manual work. Step two consists of fixing all the bugs and behavioural changes introduced in phase one (many caused by typos and lapses of concentration during step one). This phase is basically debugging. The third step is then a question of converting regular expressions and global variables into objects and other sane and readable constructs.
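
To give a feel for how mechanical this first step is, here is a tiny illustrative sketch (not actual gtkdoc-mkdb code; the sample input line and names are made up) of a Perl fragment and its near line-for-line Python 3 equivalent:

import re

# Perl original (hypothetical):
#   my %symbols;
#   if ($line =~ /^(\w+)\s*\((.*)\)/) {
#       $symbols{$1} = $2;
#   }

line = 'gtk_widget_show (GtkWidget *widget)'   # made-up sample input
symbols = {}

match = re.search(r'^(\w+)\s*\((.*)\)', line)
if match:
    symbols[match.group(1)] = match.group(2)

print(symbols)   # {'gtk_widget_show': 'GtkWidget *widget'}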

When doing the conversion I have been mostly focusing on step one, while the gtk-doc maintainer has agreed to finalize the work with steps two and three. While converting the 6000+ line file gtkdoc-mkdb, I did some measurements and it turns out that I could do the conversion at a peak rate of 500 lines an hour, meaning roughly 7 seconds per line of code.

This was achieved only on code that was easy to convert and was basically an exercise in Emacs fingering. Every now and then the code used "fancy" Perl features. Converting those parts was 10x, 100x, and sometimes up to 1000x slower. If the port had required architectural rework (which might happen when converting a lax language to one that has a lifetime checker in the compiler, for example) it would have slowed things down even more.

I don't know how much work steps 2 and 3 entail, but based on comments posted on certain IRC channels, it is probably quite a lot. Let's be generous and say overall these three items come to 250 lines of converted code per hour.

Now comes the truly sad part. This speed of conversion is not sustainable. Manually converting code from one format to another is the most boring, draining and soul-crushing work you can imagine. I could only do this for a maximum of a few hours per day and then I had to stop because all I could see was a flurry of dollar signs, semicolons and curly braces. Based on this we can estimate that a sustained rate of conversion one person can maintain is around 100 lines of code per hour (it is unlikely that this speed can be maintained if the project goes on for weeks but since there are no measurements let's ignore it for now).

The cURL project consists of roughly 100 thousand lines of C code according to Ohloh. If we assume that converting it to some other language is just as easy as converting simple Perl to Python (which seems unlikely), the conversion would take 1000 person hours. At 8 hours per day that comes to roughly 125 working days, or around five to six months of full time work. Once that is done you get to port all the changes made in trunk since starting the conversion. Halting the entire project while converting it from one language to another is not an option.

This gives us a clear answer on why people don't just convert their projects from one language to another:

There is no such thing as "just rewrite it in X".

Post scriptum

There are tools that automatically convert from one language to another. They can help, but only in step one. Steps two and three are still there, and could take even more work than with manually converted code, because manual conversion usually produces more human-understandable code. Sadly, Turing-completeness says that we can’t have nice things.

April 15, 2017

More On Private Internet Access

A few quick follow-up thoughts from my original review. First, problems I haven’t solved yet:

  • I forgot an important problem in my first blog: email. Evolution is borderline unusable with PIA. My personal GMail account usually works reliably, but my Google Apps school GMail account (which you’d think would function the same) and my Igalia email both time out with the error “Source doesn’t support prompt for credentials”. That’s Evolution’s generic error that it throws up whenever the mail server is taking too long to respond. So what’s going on here? I can check my email via webmail as a workaround in the meantime, but this is really terrible.
  • Still no solution for the first attempt to connect always failing. That’s really annoying! I was expecting some insight (or at least guesses) as to what might be going wrong here, but nobody has suggested anything about this yet. Update: The problem is that I had selected “Make available to other users” but “Store the password only for this user”, which results in the first attempt to connect always failing, because it’s performed by the gdm user. The fix is to store the password for all users.

Some solutions and answers to problems from my original post:

  • Jonh Wendell suggested using TCP instead of UDP to connect to PIA. I’ve been trying this and so far have not noticed a single instance of connection loss. So I think my biggest problem has been solved. Yay!
  • Dan LaManna posted a link to vpnfailsafe. I’m probably not going to use this since it’s a long shell script that I don’t understand, and since my connection drop problems seem to be solved now that I’ve switched to TCP, but it looks like it’d probably be a good solution to its problem. Real shame this is not built in to NetworkManager already.
  • Christel Dahlskjaer has confirmed that freenode requires NickServ/SASL authentication to use via PIA. This isn’t acceptable for me, since Empathy can’t handle it well, so I’m probably just going to stop using freenode for the most part. The only room I was ever really active in was #webkitgtk+, but in practice our use of that room is basically redundant with #epiphany on GIMPNet (where you’ll still find me, and which would be a better location for a WebKitGTK+ channel anyway), so I don’t think I’ll miss it. I’ve been looking to reduce the number of IRC rooms I join for a long time anyway. The only thing I really need freenode for is Fedora Workstation meetings, which I can attend via a web gateway. (Update: I realized that I am going to miss #webkit as well. Hmm, this could be a problem….)

So my biggest issue now is that I can’t use my email. That’s pretty surprising, as I wouldn’t think using a VPN would make any difference for that. I don’t actually care about my Google Apps account, but I need to be able to read my Igalia mail in Evolution. (Note: My actual IP seems to leak in my email headers, but I don’t care. My name is on my emails anyway. I just care that it works.)

Happy Hardware Freedom Day 2017!

And today is the day we celebrate Free Hardware and the possibility to build and design upon other people’s work, or simply to start something with the community in mind by ensuring projects can be shared and improved at will. In case you’ve missed our announcement, the registration for Hardware Freedom Day will remain open for the month to come, allowing you to celebrate at a later date; just make sure you specify the new date on your wiki page.

As usual we have a mailing list to discuss all these issues, and you can find our marketing artwork on this wiki page.

As usual we want to thank our community for being there and celebrating HFD in their areas. Hardware Freedom Day wouldn’t be what it is without you!

Happy hacking!

April 14, 2017

Encouraging New Contributors in Lima, Peru

Stormy, an enthusiastic FLOSS representative known worldwide, has publicly encouraged contributors to share experiences about their communities around the world. So I decided to post about it, since I usually have the support of two great communities, GNOME and Fedora, to do Linux events in my local community. Following the suggested structure, here are some experiences I can share with you. I hope you do not mind checking every single link I have attached to words throughout this post, since they point to more posts about the work we do in Lima, Peru.

  • 3 best places for finding new users

Universities and renowned IT companies 

It is well known in Peru that the best programmers and IT people study at three top universities: PUCP, UNI and UNMSM. As you can see in the corresponding links, I have presented GNOME and Fedora at these three universities, as well as at other universities around my country such as UIGV, UPN, UC, UPIG, UNA, UNTELS, UNSAAC, UNU, USIL, UTP, UNICA and UCSS.

IBM and PetroPeru are two renowned companies in Peru, and it is attractive for newcomers to have the opportunity to attend, for free, and hear from professionals and experts who use Linux solutions in their daily work. This is an inspiring experience for many students and local enthusiasts.

Online IT communities, such as IT training courses and IT channels

There are other IT communities that use different platforms, and their followers usually get in contact with these online communities in order to learn about Linux and the innovative apps that other communities can provide. This year I was invited to an interview on the DevAcademy channel, which has more than 7.3K followers around the world, as well as on BacktrackAcademy, which has more than 60K followers.

Newspapers and social networks to spread a Linux event to the community

Maybe the people around you are not so interested in using Linux or contributing to the projects, but by spreading the news of Linux events in a well-known newspaper, we can expand our horizons and reach people who really care about Linux. La Republica is one of the most recognized newspapers in Peru, and they helped us two years in a row, in 2014 and 2015. Ads on social networks such as Facebook and Twitter were also important to let us estimate how many people are interested.

  • 10 steps to keeping new contributors once you have their attention

Installing Fedora and GNOME

I started with virtual installations, because it is a slow process to learn how to use Linux when you have always used another operating system. Some really enthusiastic people decide to dual boot, and others, after the first talk about GNU/Linux, decide to delete the other OS 🙂

Using and interacting with the GUI and the terminal

Configuring IP, PE and DNS settings, editing the keyboard and language configuration, and learning many other commands that help in using the system for daily activities.

Create an online group or chat to support each other

Some students are shy about speaking English because they do not know it well, and it is preferable to start locally, so chats or groups on social networks were a way of communicating that usually works; WhatsApp groups were also an alternative.

Set up workshops that start with little challenges and can finish in GSoC

Periodic workshops and hacking meetings are important.

Make them part of other events as volunteers in the organization

In my experience, after a while my attendees become volunteers at the next events.

Show the different ways to contribute to the project

Leyla is a student who started with us by designing our events; then she became active in learning GNU/Linux commands as well as programming, to help us in the workshops.

Building and studying the code of a particular app

Felipe is one of our students; he designed a simple game called Snake in GTK after first learning GTK.

Show the tools for specific contribution

Each team has its own tools, and it is crucial to know them before starting to contribute.

Put them in contact with experts in the areas where they are having problems

Thanks to my trips abroad, I got to know who is in charge of certain apps or areas, so it was easier for me to send them mails when I could not answer a question from my students. I must thank the people who have helped kindly and in time!

Teach them the formal way to communicate with the community

IRC is the formal way to communicate in GNU/Linux; even though it is old, it has worked whenever we have had a question in my local community.

  • 7 steps for onboarding new community members

Sharing experiences beyond the code, such as lunches and after-hack get-togethers

Work in pairs on a common project

Students from UNTELS are a great example this time: they worked at their university and then joined us whenever we had a meeting. I also want to share the work of Bressner from USIL, because he kept clean documentation during our workshops.

Grouping members to do a challenge

Code is definitely a challenge but what if you challenge a group to write a song for GNU/Linux?

Playing “trivia” games and rewarding the knowledge they have about the project

During the talks you can ask the audience questions related to the topic and see if they are understanding it.

Meetings to hack outdoors as a group

This year we had a great chance to celebrate an event at the beach called Linux Playa, and last year we went camping thanks to the event called HackCamp 2016.

Encourage them to post their work

Starting with the installation workshop, and then going step by step until they reach a great contribution.

Congratulating the work of a member in public

I use social networks to share their posts, congratulating them on the job.

  • How did you get started in your first project?

We usually started by “jhbuilding” modules on the system.

  • 3 best tips you’ve gotten for attracting new contributors

Show the OS Revolution video, which shows all the efforts behind GNU/Linux since the MIT days.

Explain the importance of GNU/Linux in computing history and in supercomputers.

I also highlight that GNU/Linux brings added value, much like knowing another language, and I also present the job demand from important IT companies.

  • Ways you find the right type of contributor and where to find them

Last year, during my Operating Systems course at UNI, one of my students wrote a patch for GNOME that made it into production. I am doing the same this year with my students, as an alternative final project. One of them got in contact with Athos Ribeiro of Fedora to solve a bug.



April 13, 2017

On Private Internet Access

I’m soon going to be moving to Charter Communications territory, but I don’t trust Charter and don’t want it to keep records of all the websites that I visit.  The natural solution is to use a VPN, and the natural first choice is Private Internet Access, since it’s a huge financial supporter of GNOME, and I haven’t heard anybody complain about problems with using it. This will be a short review of my experience.

The service is not free. That’s actually good: it means I’m the customer, not the product. Cost is $40 per year if you pay a year in advance, but you should probably start with the $7/month plan until you’re sure you’re happy with the service and will be keeping it long-term. Anyway, this is a pretty reasonable price that I’m happy to pay.

The website is fairly good. It makes it easy to buy or discontinue service, so there are no pricing surprises, and there’s a pretty good library of support documentation. Unfortunately some of the claims on the website seem to be — arguably — borderline deceptive. A VPN service provides excellent anonymity against your ISP, but relying on a VPN would be a pretty bad idea if your adversary is the government (it can perform a traffic correlation attack) or advertising companies (they know your screen resolution, the performance characteristics of your graphics card, and until recently the rate your battery drains…). But my adversary is going to be Charter Communications, so a VPN is the perfect solution for me. If you need real anonymity, you absolutely must use the Tor Browser Bundle, but that’s going to make your life harder, and I don’t want my life to be harder, so I’ll stick with a VPN.

Private Internet Access provides an Ubuntu app, but I’m going to ignore that because (a) I use Fedora, not Ubuntu, and (b) why on Earth would you want a separate desktop app for your VPN when OpenVPN integration is already built-in on Ubuntu and all modern Linux desktops? Unfortunately the documentation provided by Private Internet Access is not really sufficient — they have a script to set it up automatically, but it’s really designed for Ubuntu and doesn’t work on Fedora — so configuration was slightly challenging. I wound up following instructions on some third-party website, which I have long since forgotten. There are many third-party resources for how to configure PIA on Linux, which you might think is good but actually indicates a problem with the official documentation in my opinion. So there is some room for improvement here. PIA should ditch the pointless desktop app and improve its documentation for configuring OpenVPN via NetworkManager. (Update: After publishing this post, I discovered this article. Seems the installation script now supports Fedora/RHEL and Arch Linux. So my claim that it only works on Ubuntu is outdated.) But anyway, once you get it configured properly with NetworkManager, it works: no need to install anything (besides the OpenVPN certificate, of course).

Well, it mostly works. Now, I have two main requirements to ensure that Charter can’t keep records of the websites I’m visiting:

  • NetworkManager must autoconnect to the VPN, so I don’t have to do it manually.
  • NetworkManager must reconnect to the VPN service if connection drops, and must never send any data if the VPN is off.

The first requirement was hard to solve, and I still don’t have it working perfectly. There is no GUI configuration option for this in gnome-control-center, but I eventually found it in nm-connection-editor: you have to edit your normal non-VPN connection, which has a preference to select a VPN to connect to automatically. So we should improve that in gnome-control-center. Unfortunately, it doesn’t work at all the first time your computer connects to the internet after it’s booted. Each time I boot my computer, I’m greeted with a Connection Failed notification on the login screen. This is probably a NetworkManager bug. Anyway, after logging in, I just have to manually connect once, then it works.

As for the next requirement, I’ve given up. My PIA connection is routinely lost about once every 30-45 minutes, usually when watching YouTube or otherwise using a lot of data. This is most likely a problem with PIA’s service, but I don’t know that: it could just as well be my current ISP cutting the connection, or maybe even some client-side NetworkManager bug. Anyway, I could live with brief connection interruptions, but when this happens, I lose connection entirely for about a minute — too long — and then the VPN times out and NetworkManager switches back to sending all the data outside the VPN. That’s totally unacceptable. To be clear, sending data outside the VPN is surely a NetworkManager problem, not a PIA problem, but it needs to be fixed for me to be comfortable using PIA. I see some discussion about that on this third-party GitHub issue, but the “solution” there is to stop using NetworkManager, which I’m not going to do. This is probably one of the reasons why PIA provides a desktop app — I think the PIA app doesn’t suffer from this issue? — but like I said, I’m not going to use a third-party OpenVPN app instead of the undoubtedly-nicer support that’s built in to GNOME.

Another problem is that I can’t connect to Freenode when I’m using the VPN. GIMPNet works fine, so it’s not a problem with IRC in general: Freenode is specifically blocking Private Internet Access users. This seems very strange, since Freenode has a bunch of prominent advertising for PIA all over its website. I could understand blocking PIA if there are too many users abusing it, but not if you’re going to simultaneously advertise it.

I also cannot access Igalia’s SIP service when using PIA. I need that too, but that’s probably something we have to fix on our end.

So I’m not sure what to do now. We have two NetworkManager bugs and a problem with Freenode. Eventually I’ll drop Empathy in favor of Matrix or some other IRC client where registering with NickServ is not a terrible mistake (presumably they’re only blocking unregistered users?), so the Freenode issue seems less-important. I think I’d be willing to just stop visiting Freenode if required to use PIA, anyway. But those NetworkManager issues are blockers to me. With those unfixed, I’m not sure if I’m going to renew my PIA subscription or not. I would definitely renew if someone were to fix those two issues. The ideal solution would be for PIA to adopt NetworkManager’s OpenVPN plugin and ensure it gets cared for, but if not, maybe someone else will fix it?

Update: See part two for how to solve some of these problems.

Mailing list for fwupd and the LVFS

I’ve created a mailing list for fwupd and LVFS discussions. If you’re interested in firmware updating on Linux, or want to know what’s happening on the Linux Vendor Firmware Service, you probably want to join. There are a few interesting things I’ll post in a few days.

April 12, 2017

How Meson is tested

A build system is a very important part of the development workflow and people depend on it to work reliably. In order to achieve this we do a lot of testing on Meson. As this is mostly invisible to end users I thought I'd write some information about our testing setup and practices.

Perhaps the most unconventional thing is that Meson has no unit tests in the traditional sense of the word. The code consists of functions, classes and modules as you would expect, but there are no tests for these individual items. Instead all testing is done on full projects. The bulk of the tests are what could traditionally be called integration tests. That is, we take an entire project that does one thing, such as compile and install a shared library, and then configure it, build it, run its tests, run the install step and check that the output is correct.

There are also smaller scale tests which are called unit tests. They also configure a project but then inspect the result directly. As an example, a test might set up a project and build it, then touch one file and verify that this triggers a rebuild. Some people might claim that these are not unit tests either, or that they should instead be tested with mock classes and the like. This is a valid point to make, but this is the terminology we have converged upon.
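
For a feel of what that touch-and-rebuild style of test boils down to, here is a minimal sketch in Python (this is only an illustration of the pattern, not Meson's actual test harness; the function and directory names are made up):

import os
import subprocess
import tempfile

def check_rebuild_after_touch(source_dir, touched_file):
    """Build a sample project, then verify that touching one source file
    makes the next ninja invocation do actual work again."""
    build_dir = tempfile.mkdtemp()
    subprocess.check_call(['meson', 'setup', build_dir, source_dir])
    subprocess.check_call(['ninja', '-C', build_dir])

    # Immediately rebuilding should be a no-op.
    noop_output = subprocess.check_output(['ninja', '-C', build_dir]).decode()

    # Touch a source file; the next build must not be a no-op anymore.
    os.utime(os.path.join(source_dir, touched_file))
    rebuild_output = subprocess.check_output(['ninja', '-C', build_dir]).decode()
    assert rebuild_output != noop_output, \
        '%s was touched but nothing was rebuilt' % touched_file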

Enter The (Testing) Matrix

Our testing matrix is big. Really big. At its core are the three main supported platforms: Linux, Windows and OSX. We also support the BSDs and the like, but we currently don’t have CI machines for them.

On Windows our Appveyor setup tests VS2010, 2015 and 2017, and in addition MinGW and Cygwin. Apart from Cygwin, all of these are tested in both 32 and 64 bit variants. Visual Studio is tested with both the Ninja and the Visual Studio project generator backends.

OSX is the simplest: it is tested only with the Ninja backend, using both regular and unity builds.

Linux tests do a lot more. In addition to running the basic tests (both in unity and regular modes), the Linux CI also runs the entire test suite in a cross compilation setup.

All of these tests are only for the core code. Meson supports a large amount of frameworks and libraries, such as GLib, Qt and Doxygen, and many programming languages. Every one of these has an associated test or, usually, several. This means that running the full test suite on Debian requires installing all of these:
  • Boost
  • D, Rust, Java, Fortran, C# and Swift compilers
  • Qt 5, WxWidgets, GTK+ 3
  • Valgrind
  • GNUstep
  • Protocol Buffers
  • Cython
  • GTest and GMock
And a bunch of other packages as well. The CI Docker image that has all of these installed takes 2 gigabytes of space. On many distros not all dependencies are available, so packagers have to disable some tests. Having a build dependency on all these packages sometimes yields interesting problems. As an example, the Rust dependency means that Meson depends on LLVM. Every now and then it breaks on s390x, meaning that Meson and every package that uses it to build get flagged for removal from Debian.

Every merge proposal to Meson master is run through all of these tests and is eligible for merging only if they all pass. There are no exceptions to this rule. 

There are some downsides to this, the biggest being that every now and then Appveyor and/or Travis get clogged and getting the green light takes forever. We looked briefly into getting paid instances but for our usage the bill would be in the neighborhood of $300 per month. Given that you can buy your own hardware for that kind of money, this has not been seen as a worthwhile investment. 

April 11, 2017

Disabling SSL validation in binary apps

Reverse engineering protocols is a great deal easier when they're not encrypted. Thankfully most apps I've dealt with have been doing something convenient like using AES with a key embedded in the app, but others use remote protocols over HTTPS and that makes things much less straightforward. MITMProxy will solve this, as long as you're able to get the app to trust its certificate, but if there's a built-in pinned certificate that's going to be a pain. So, given an app written in C running on an embedded device, and without an easy way to inject new certificates into that device, what do you do?

First: The app is probably using libcurl, because it’s free, works and is under a license that allows you to link it into proprietary apps. This is also bad news, because libcurl defaults to having sensible security settings. In the worst case we’ve got a statically linked binary with all the symbols stripped out, so we’re left with the problem of (a) finding the relevant code and (b) replacing it with modified code. Fortunately, this is much less difficult than you might imagine.

First, let's find where curl sets up its defaults. Curl_init_userdefined() in curl/lib/url.c has the following code:
set->ssl.primary.verifypeer = TRUE;
set->ssl.primary.verifyhost = TRUE;
#ifdef USE_TLS_SRP
set->ssl.authtype = CURL_TLSAUTH_NONE;
#endif
set->ssh_auth_types = CURLSSH_AUTH_DEFAULT; /* defaults to any auth type */
set->general_ssl.sessionid = TRUE; /* session ID caching enabled by default */
set->proxy_ssl = set->ssl;

set->new_file_perms = 0644; /* Default permissions */
set->new_directory_perms = 0755; /* Default permissions */

TRUE is defined as 1, so we want to change the code that currently sets verifypeer and verifyhost to 1 to instead set them to 0. How to find it? Look further down - new_file_perms is set to 0644 and new_directory_perms is set to 0755. The leading 0 indicates octal, so these correspond to decimal 420 and 493. Passing the file to objdump -d (assuming a build of objdump that supports this architecture) will give us a disassembled version of the code, so time to fix our problems with grep:
objdump -d target | grep --after=20 ,420 | grep ,493

This gives us the disassembly of target, searches for any occurrence of ",420" (indicating that 420 is being used as an argument in an instruction), prints the following 20 lines and then searches for a reference to 493. It spits out a single hit:
43e864: 240301ed li v1,493
Which is promising. Looking at the surrounding code gives:
43e820: 24030001 li v1,1
43e824: a0430138 sb v1,312(v0)
43e828: 8fc20018 lw v0,24(s8)
43e82c: 24030001 li v1,1
43e830: a0430139 sb v1,313(v0)
43e834: 8fc20018 lw v0,24(s8)
43e838: ac400170 sw zero,368(v0)
43e83c: 8fc20018 lw v0,24(s8)
43e840: 2403ffff li v1,-1
43e844: ac4301dc sw v1,476(v0)
43e848: 8fc20018 lw v0,24(s8)
43e84c: 24030001 li v1,1
43e850: a0430164 sb v1,356(v0)
43e854: 8fc20018 lw v0,24(s8)
43e858: 240301a4 li v1,420
43e85c: ac4301e4 sw v1,484(v0)
43e860: 8fc20018 lw v0,24(s8)
43e864: 240301ed li v1,493
43e868: ac4301e8 sw v1,488(v0)

Towards the end we can see 493 being loaded into v1, and v1 then being copied into an offset from v0. This looks like a structure member being set to 493, which is what we expected. Above that we see the same thing being done to 420. Further up we have some more stuff being set, including a -1 - that corresponds to CURLSSH_AUTH_DEFAULT, so we seem to be in the right place. There's a zero above that, which corresponds to CURL_TLSAUTH_NONE. That means that the two 1 operations above the -1 are the code we want, and simply changing 43e820 and 43e82c to 24030000 instead of 24030001 means that our targets will be set to 0 (ie, FALSE) rather than 1 (ie, TRUE). Copy the modified binary back to the device, run it and now it happily talks to MITMProxy. Huge success.
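
To make that concrete, here is a hedged Python sketch of applying the patch programmatically. It assumes a big-endian MIPS binary in which those two instruction pairs are unique; if the pattern is ambiguous, or the file layout differs, you would translate the virtual addresses (43e820 and 43e82c) into file offsets and patch those instead.

#!/usr/bin/env python3
# Illustrative only: flip the two "li v1,1" instructions that feed
# verifypeer and verifyhost so they load 0 instead of 1.

patches = [
    # (li v1,1 ; sb v1,312(v0)) -> (li v1,0 ; sb v1,312(v0))
    (bytes.fromhex('24030001a0430138'), bytes.fromhex('24030000a0430138')),
    # (li v1,1 ; sb v1,313(v0)) -> (li v1,0 ; sb v1,313(v0))
    (bytes.fromhex('24030001a0430139'), bytes.fromhex('24030000a0430139')),
]

with open('target', 'rb') as f:
    data = f.read()

for old, new in patches:
    if data.count(old) != 1:
        raise SystemExit('pattern not unique; patch by file offset instead')
    data = data.replace(old, new)

with open('target.patched', 'wb') as f:
    f.write(data)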

(If the app calls Curl_setopt() to reconfigure the state of these values, you'll need to stub those out as well - thankfully, recent versions of curl include a convenient string "CURLOPT_SSL_VERIFYHOST no longer supports 1 as value!" in this function, so if the code in question is using semi-recent curl it's easy to find. Then it's just a matter of looking for the constants that CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set to, following the jumps and hacking the code to always set them to 0 regardless of the argument)


Achievement Unlocked: MimeKit and MailKit in official Microsoft docs

Open Source Days 2017 Impressions

Open Source Days is an annual conference held in Copenhagen, this time from the 17th to the 18th of March. Since my successful trip with members of Open Source Aalborg, we have been keeping a close eye on free software happenings in and around Denmark. For all of us, this was the first time we went to the Open Source Days conference.

Day 1: Business Days

The first day of the conference was arranged as an opportunity for networking and presentations oriented around open source in a corporate setting. We were there as part of PROSA, a Danish union organization supporting open source. While Open Source Days is a significantly smaller conference than, say, FOSDEM, I was still impressed by the variety of local Scandinavian firms present, which ranged from firms selling courses and education to firms offering cloud-based services and support for self-hosted services.

I had the chance to talk to quite a few people there, including FAIR Denmark, which recycles computers with GNOME installed on them to provide education in poorer countries. Very interesting!

I also had the chance to meet Jesper, Martin and a few others from last year’s open source camp. Jesper presented his work on enabling high-speed network packet support in the Linux kernel. Lots of it flew over my head, but it was very interesting to hear, as the presentation was a continuation of the work he presented last year at the camp.

Day 2: Community Days

The second day marked the community days. In the spirit of the day, Open Source Aalborg had its own humble booth with hand-drawn flyers, signs and everything. Start small, as they say. :-)

Copenhagen’s hackerspace Labitat was also present and had brought lots of small projects with them such as hacked sewing machines, LED matrix bling-bling and other electronics.

The community day had two tracks of talks. Probably the most interesting was the talk about OS2, a collaboration in which Danish municipalities build shared infrastructure based on free and open source software principles. This model doesn’t mean that the municipalities develop the projects in-house. Rather, they place contracts with local Danish firms to work for specific periods of time to develop the projects further – and one municipality’s work benefits all of the other 98 municipalities.

The conference ended with beers and popcorn at Farfar’s, as is tradition, or so I have been told. Thanks to PROSA for sponsoring this trip to Open Source Days for Open Source Aalborg. I’m definitely attending again next year. :-)

April 10, 2017

Encouraging new community members

My friend and colleague Stormy Peters just launched a challenge to the community – to blog on a specific community related topic before the end of the week. This week, the topic is “Encouraging new contributors”.

I have written about the topic of encouraging new contributors in the past, as have many others. So this week, I am kind of cheating, and collecting some of the “Greatest Hits”, articles I have written, or which others have written, which struck a chord on this topic.

Some of my own blog posts I have particular affection for on the topic are:

I also have a few go-to articles I return to often, for the clarity of their ideas, and for their general usefulness:

  • Open Source Community, Simplified” by Max Kanat-Alexander, does a great job of communicating the core values of communities which are successful at recruiting new contributors. I particularly like his mantra at the end: “be really, abnormally, really, really kind, and don’t be mean“. That about sums it up…
  • Building Belonging“, by Jono Bacon: I love Jono’s ability to weave a narrative from personal stories, and the mental image of an 18 year old kid knocking on a stranger’s door and instantly feeling like he was with “his people” is great. This is a key concept of community for me – creating a sense of “us” where newcomers feel like part of a greater whole. Communities who fail to create a sense of belonging leave their engaged users on the outside, where there is a community of “core developers” and those outside. Communities who suck people in and indoctrinate them by force-feeding them kool-aid are successful at growing their communities.
  • I love all of “Producing Open Source Software“, but in the context of this topic, I particularly love the sentiment in the “Managing Participants” chapter: “Each interaction with a user is an opportunity to get a new participant. When a user takes the time to post to one of the project’s mailing lists, or to file a bug report, she has already tagged herself as having more potential for involvement than most users (from whom the project will never hear at all). Follow up on that potential.”

To close, one thing I think is particularly important when you are managing a team of professional developers who work together is to ensure that they understand that they are part of a team that extends beyond their walls. I have written about this before as the “water cooler” anti-pattern. To extend on what is written there, it is not enough to have a policy against internal discussion and decisions – creating a sense of community, with face to face time and with quality engagements with community members outside the company walls, can help a team member really feel like they are part of a community in addition to being a member of a development team in a company.

 

Netflix blocks Fedora users

Netflix should finally support their HTML5 player in Firefox 52 on Linux.  This version has already landed in Fedora and been there for a couple of weeks and we’ve already received complaints from users who are confused. Both Netflix and Mozilla claim it should work, but it doesn’t for them.

Netflix still forwards them to their Silverlight player.  That’s pretty much a showstopper because Silverlight has been dead for quite a few years and it has never been easy to make it work on Linux.

In fact, Firefox 52 in Fedora does work with Netflix. As we found out the problem is in the user agent. The default user agent is:

Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0

If you remove “Fedora” from the user agent, Netflix suddenly stops offering Silverlight and just works. One would say that they only want to support official builds from Mozilla and allow only the upstream user agent. It would be an unfortunate way to do it, but at least partly understandable. But things get really weird when you try replacing “Fedora” with random strings. Because then it also works, which means that Netflix blocks Fedora specifically!
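
If you want to poke at this yourself, a rough Python sketch of that user agent comparison is below. It is purely illustrative: the URL fetched and the “Silverlight” marker are assumptions about what the page returns, and Netflix may of course change its behaviour at any time.

import urllib.request

UA_TEMPLATE = ('Mozilla/5.0 (X11; {}; Linux x86_64; rv:52.0) '
               'Gecko/20100101 Firefox/52.0')

def fetch_front_page(distro_token):
    # Fetch netflix.com pretending to be Firefox 52 with the given distro token.
    request = urllib.request.Request(
        'https://www.netflix.com/',
        headers={'User-Agent': UA_TEMPLATE.format(distro_token)})
    with urllib.request.urlopen(request) as response:
        return response.read().decode('utf-8', errors='replace')

for token in ('Fedora', 'SomeRandomString'):
    page = fetch_front_page(token)
    print(token, '-> page mentions Silverlight:', 'Silverlight' in page)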

Netflix has supported Chrome for much longer, and it has behaved the same way there. We set the Fedora user agent via an extension, and the only reason why it works in Chrome on Fedora is that we blacklisted the netflix.com domain for the Fedora user agent.

We could do the same in Firefox, but I think it’s something that should be fixed on the side of Netflix. Users should not be denied a service based on their user agent. It takes us 15 years back, to when Opera had to fake its user agent to work with websites. Moreover, Fedora isn’t in any way different in this respect from other Linux distributions, so why is it blocked while others are not?

As a Netflix customer, I tried to call their support. I got to a first line support person who didn’t have much of a clue, trying to convince me that Silverlight works just fine on Fedora (which is not really true). So I tried to explain the problem and asked if they could pass it on to the responsible engineers. We’ve also been trying to reach them through various contacts. Linux is probably not an important platform for Netflix, but they at least care enough to specifically block Fedora, so they should care enough to fix it. Moreover, there are many Linux engineers in the company who could care, too. If you know anyone working at Netflix, please tell them about this and ask them to pass it on to the responsible people. If you’re both a Netflix and Fedora user, you may also try to contact their support and let them know that it doesn’t work for you. Maybe if they collect more such cases it will make them look at it.

Edit: I’ve been told that Netflix also blocks the user agents of other popular distros. So to make it work you can replace “Fedora” with random strings as long as it’s not “openSUSE”, “Debian” or “CentOS”. The only exception is Ubuntu, which is not blocked.

Edit2: I’ve managed to contact the right people in Netflix and they promised to fix it within the next couple of weeks!


Beyond NetworkManager 1.6: Part 1

NetworkManager 1.6 was delivered in early 2017, and is doing pretty well. It has found its way into many Linux distributions, including the upcoming Debian 9 “Stretch” release. There’s a good chance you’re already running it. Nevertheless, we still owe you an overview of what’s new.

Debian 9 snapshot already includes the new, much faster, nmcli

My favorite parts are: MACsec, much improved libnm performance, systemd-resolved support, PacRunner integration and IPv6 connection sharing. Let’s delve into them!

MACsec

When accompanied by a recent enough wpa_supplicant (which for now means a post-2.6 git snapshot) and kernel (4.6 or newer), NetworkManager is able to create and maintain IEEE 802.1AE (better known as MACsec) links.

For those who don’t know: MACsec is an encryption protocol that operates at the data link layer (Layer 2 in the OSI model), beneath IP. MACsec comes in useful when you don’t trust your physical link — such as with cloud hosting. IPsec, by contrast, operates at Layer 3 and is thus not practical for protecting ARP, DHCP or Neighbor Discovery traffic.

For more details, watch this talk by Sabrina of Red Hat, who implemented the kernel and wpa_supplicant parts.

Faster client library and tools

I mean really faster. Check out the D-Bus timing diagram. Behind the speedup is the rework of the client library to utilize the org.freedesktop.DBus.ObjectManager API. It allows us to initialize libnm in constant time. What we couldn’t initialize using the ObjectManager we made parallel.

In practical terms this means “nmcli c” would be a lot snappier even with a lot of connections or devices. This also significantly reduces latency where good interactive response is needed, such as when tab-completing the connection names.

Really, try it!
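
The speedup is not limited to nmcli; anything built on libnm benefits. For example, a small Python script using libnm through GObject introspection (assuming the NetworkManager GIR data is installed) now gets through its client initialization noticeably faster:

import gi
gi.require_version('NM', '1.0')
from gi.repository import NM

# Synchronous client initialization; this is the part that the
# ObjectManager-based rework makes much cheaper in 1.6.
client = NM.Client.new(None)

print('NetworkManager version:', client.get_version())

for connection in client.get_connections():
    print('connection:', connection.get_id(), connection.get_connection_type())

for device in client.get_devices():
    print('device:', device.get_iface(), device.get_state())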

Support for systemd-resolved

Here’s some good news for those who have systemd-resolved in charge of their name resolution: NetworkManager now integrates with it seamlessly. That means it tells systemd-resolved about the DNS servers as they are discovered. To stop NetworkManager from controlling your resolv.conf directly, it’s sufficient to set the “dns” property in your NetworkManager.conf file (dns=systemd-resolved in the [main] section).

Web Proxy support

NetworkManager now supports configuration and discovery of Web Proxies and hands over the information to PacRunner. PacRunner then does the heavy lifting of interpreting the WPAD scripts and providing a sane interface to whichever application wants to use a proxy.

It should be noted that this feature was developed by Atul, a newcomer to the NetworkManager developer community. He did the work as part of a Google Summer of Code project and he has done a superb job. Proxy users will notice.

IPv6 Connection sharing

2016 was the year IPv6 continued to grow exponentially and almost doubled in usage. Therefore we decided it was time to extend the IPv4-based connection sharing typically utilized by the Wi-Fi Hotspot feature with an IPv6-based counterpart. Unlike IPv4, which uses private address ranges and NAT for the downstream network, IPv6 connection sharing utilizes RFC 3633 DHCPv6-based Prefix Delegation.

This means that the downlink connection gets a globally routable address, but it also requires your router to provide you with a prefix to use. We don’t have plans to support NAT for IPv6 on our roadmap — with the growing importance of IPv6, it’s important that everyone gets their IPv6 setup right. We’d love to hear about your experience with IPv6 and the connection sharing in the comments.

Note: for now the feature only works with the ISC DHCP client and is known not to play very well with OpenWRT/LEDE’s odhcpd. We’re working with the odhcpd  developers to address that.

…and beyond

We’re almost done with NetworkManager 1.8; we just need to iron out some rough edges. What’s coming includes smarter connectivity checks, better PKCS#11 integration and a much smaller dependency chain. But that’s going to be covered in the next article.

Stay tuned!

BuildStream progress and booting images

It’s been a while since my initial post about BuildStream and I’m happy to see that it has generated some discussion.

Since then, here at Codethink we’ve made quite a bit of progress, but we still have some road to travel before we can purport to solve all of the world’s build problems.

So here is a report on the progress we’ve made in various areas.

Infrastructure

Last time I blogged, project infrastructure was still not entirely sorted. Now that this is in place and will remain fixed for the foreseeable future, I’ll provide the more permanent links:

Links to the same resources in the previous post have been updated.

A note on GitLab

Gitlab provides us with some irresistible features.

Aside from the Merge Request feature, which really does lower the barrier to contributing patches, the pre-merge CI pipelines allow us to ensure the test cases run prior to accepting any patch, and were a deciding factor in keeping our git repository hosted on GitLab in lieu of creating a repo on git.gnome.org.

Another feature we get for free with GitLab’s pipelines is that we can automatically publish our documentation, generated from source, whenever a commit lands on the master branch; this was all very easy to set up.

User Experience

A significant portion of a software developer’s time is spent building and assembling software. Especially in tight debug and test loops, the seconds and menial tasks which stand between an added printf() statement and a running test that reproduces some issue can make the difference between tooling which is actually helpful to the user and tooling which just gets in the way of progress.

As such, we are paying attention to the user experience and have plans in place to ensure the most productive experience is possible.

Here are some of the advancements made since my first post

Presentation

Some of the elements we considered as important when viewing the output of a build include:

  • Separation and easy to find log files. Many build tools which use a serial build model will leave you with one huge log file to parse and figure out what happened, which is rather unwieldy to read. On the other hand, tools which exercise a parallelized build model can leave you searching through log directories for the build log you are looking for.
  • Constant feedback of what is being processed. When your build appears to hang for 30 minutes while all of your cores are being hammered down by a WebKit build, it’s nice to have some indication that a WebKit build is in fact taking place.
  • Consideration of terminal width. It’s desirable, however not always possible, to avoid wrapping lines in the output of any command line interface.
  • Colorful and aligned output. When viewing a lot of terminal output, it helps to use some colors to assist the user in identifying some output they may be searching for. Likewise, alignment and formatting of text helps the user to parse more information with less frustration.

Here is a short video showing what the output currently looks like:

I’m particularly happy about how the status bar remains at the bottom of the terminal output while the regular rolling log continues above. While the status bar tells us what is going on right now, the rolling log above provides detail about what tasks are being launched, how long they took to complete and in what log files you can find the detailed build log.

Note that colors and status lines are automatically disabled when BuildStream is not connected to a tty. Interactive mode is also automatically disabled in that case. However using the bst --log-file /path/to/build.log … option will allow you to preserve the master build log of the entire session and also work in interactive mode.

Job Control

Advancements have also been made in the scheduler and how child tasks are managed.

When CNTL-C is pressed in interactive mode, all ongoing tasks are suspended and the user is presented with some choices:

  • continue – Carries on processing and queuing jobs
  • quit – Carries on with ongoing jobs but stops queuing new jobs
  • terminate – Terminates any ongoing jobs and exits immediately

Similarly, if an ongoing build fails in interactive mode, all ongoing tasks will be suspended while the user has the same choices, and an additional choice to debug the failing build in a shell.

Unfortunately continuing with a “repaired” build is not possible at this time in the same way as it is with JHBuild; however, one day it should be possible in some developer mode where the user accepts that anything built from that point on can only be used locally (any generated artifacts would be tainted, as they don’t really correspond to their deterministic cache keys; those artifacts should be rebuilt with a fix to the input bst file before they can be shared with peers).

New Element Plugins

For those who have not been following closely, BuildStream is a system for the modeling and running of build pipelines. While this is fully intended for software building and for decoupling the build problem from the distribution problem, from a more abstract perspective it can be said that BuildStream provides an environment for modeling pipelines which consist of elements that perform mutations on filesystem data.

The full list of Element and Source plugins currently implemented in BuildStream can be found on the front page of the documentation.

As a part of my efforts to fully reproduce and provide a migration path for Baserock’s declarative definitions, some interesting new plugins were required.

meson

The meson element is a BuildElement for building modules which use meson as their build system.

Thanks goes to Patrick Griffis for filing a patch and adding this to BuildStream.

compose

The compose plugin creates a composition of its own build dependencies. Which is to say that its direct dependencies are not transitive and depending on a compose element can only pull in the output artifact of the compose element itself and none of its dependencies (a brief explanation of build and runtime dependencies can be found here)

Basically this is just a way to collect the output of various dependencies and compress them into a single artifact, with some additional options.

For the purpose of categorizing the output of a set of dependencies, we have also introduced the split-rules public data which can be read off of the dependencies of a given element. The default split-rules are defined in BuildStream’s default project configuration, which can be overridden on a per project and also on a per element basis.

The compose element makes use of this public data in order to provide a more versatile composition, which is to say that it’s possible to create an artifact composition of all of the files which are captured by a given domain declared in your split-rules, for instance all of the files related to internationalization, or the debugging symbols.

Example:

kind: compose
description: Initramfs composition
depends:
- filename: gnu-toolchain.bst
  type: build
- filename: initramfs/initramfs-scripts.bst
  type: build

config:
  # Include only the minimum files for the runtime
  include:
  - runtime

The above example takes the gnu-toolchain.bst stack which basically includes a base runtime with busybox, and adds to this some scripts. In this case the initramfs-scripts.bst element just imports an init and shutdown script required for the simplest of initramfs variations. The output is integrated; which is to say that things like ldconfig have run and the output of those has been collected in the output artifact. Further, any documentation, localization, debugging symbols etc, have been excluded from the composition.

script

The script element is a simple but powerful element allowing one to stage more than one set of dependencies into the sandbox in different places.

One set of dependencies is used to stage the base runtime for the sandbox, and the other is used to stage the input which one intends to mutate in some way to produce output, to be collected in the regular /buildstream/install location.

Example:

kind: script
description: The compressed initramfs
depends:
- filename: initramfs/initramfs.bst
  type: build
- filename: foundation.bst
  type: build

config:
  base: foundation.bst
  input: initramfs/initramfs.bst

  commands:
  - mkdir -p %{install-root}/boot
  - (find . -print0 | cpio -0 -H newc -o) |
    gzip -c > %{install-root}/boot/initramfs.gz

This example element will take the foundation.bst stack element (which in this context, is just a base runtime with your regular shell tools available) and stage that at the root of the sandbox, providing the few tools and runtime we want to use. Then, still following the same initramfs example as above, the integrated composition element initramfs/initramfs.bst will be staged as input in the /buildstream/input directory of the build sandbox.

The script commands then simply use the provided base tools to create a gzipped cpio archive inside the /buildstream/install directory, which will be collected as the artifact produced by this script.

A bootable system

Another thing we’ve been doing since last we touched base is providing a migration path for Baserock users to use BuildStream.

This is a particularly interesting case for BuildStream because Baserock systems provide metadata to build a bootable system from the ground up, from a libc and compiler bootstrapping phase all the way up to the creation and deployment of a bootable image.

In this way we cover a lot of ground and can now demonstrate that bootstrapping, building and deploying a bootable image as a result is all possible using BuildStream.

The bootstrap

One of the more interesting parts is that the bootstrap remains almost unchanged, except for the key ingredient which is that we never allow any host tools to be present in the build sandbox.

The working theory is that whenever you bootstrap, you bootstrap from some tools. If you were ever able to obtain these tools in binary form installed on your computer, then it should also be possible to obtain them in the form of a chrootable sysroot (or “SDK”).

Anyone who has had a hand in maintaining a tree of build instructions which include a bootstrap phase from host tooling to first get off the ground (like buildroot or yocto) will have lived through the burden of vetting new distros as they roll out and patching builds so as to work “on the latest debian” or whatnot. This whole maintenance aspect is simply dropped from the equation by ensuring that host tools are not a variable in the equation but rather a constant.

Assembling the image

When it comes time to assemble an image to boot with, there are various options and it should not be such a big deal, right? Well, unfortunately it’s not quite that simple.

It turns out that even in 2017, the options we have for assembling a bootable file system image as a regular unprivileged user are still quite limited.

Short of building qemu and using some virtualization, I’ve found that the only straightforward method of installing a boot loader is with syslinux on a vfat filesystem. There are some tools around for manipulating ext2 filesystems in user space, but these are largely unneeded anyway, as static device nodes and assigning file ownership to arbitrary uids/gids are mostly unnecessary when using modern init systems. In any case, recent versions of e2fsprogs provide an option for populating the filesystem at creation time.

Partitioning an image for your file systems is also possible as a regular user, but populating those partitions is a game of splicing filesystem images into their respective partition locations.

I am hopeful however that with some virtualization performed entirely inside the build sandbox, we can achieve a much better outcome using libguestfs. I’m not altogether clear on how supermin and libguestfs come together but from what I understand, this technology will allow us to mount any linux supported filesystem in userspace, and quite possibly without even having (or using) the supporting filesystem drivers in your host kernel.

That said, for now we settle for the poor man’s basic tooling and live with the restriction of having our boot partition be a vfat partition. The image can be created using the script element described above.

Example:

kind: script
description: Create a deployment of the GNOME system
depends:
- filename: gnome/gnome-system.bst
  type: build
- filename: deploy-base.bst
  type: build

variables:
  # Size of the disk to create
  #
  # Should be able to calculate this based on the space
  # used, however it must be a multiple of (63 * 512) bytes
  # as mtools wants a size that is divisible by sectors (512 bytes)
  # per track (63).
  boot-size: 252000K

  rootfs-size: 4G
  swap-size: 1G
  sector-size: 512

config:
  base: deploy-base.bst
  input: gnome/gnome-system.bst

  commands:

  - |
    # Split up the boot directory and the other
    #
    # This should be changed so that the /boot directory
    # is created separately.

    cd /buildstream
    mkdir -p /buildstream/sda1
    mkdir -p /buildstream/sda2

    mv %{build-root}/boot/* /buildstream/sda1
    mv %{build-root}/* /buildstream/sda2

  - |
    # Generate an fstab
    cat > /buildstream/sda2/etc/fstab << EOF
    /dev/sda2 / ext4 defaults,rw,noatime 0 1
    /dev/sda1 /boot vfat defaults 0 2
    /dev/sda3 none swap defaults 0 0
    EOF

  - |
    # Create the syslinux config
    mkdir -p /buildstream/sda1/syslinux
    cat > /buildstream/sda1/syslinux/syslinux.cfg << EOF
    PROMPT 0
    TIMEOUT 5

    ALLOWOPTIONS 1
    SERIAL 0 115200

    DEFAULT boot
    LABEL boot

    KERNEL /vmlinuz
    INITRD /initramfs.gz

    APPEND root=/dev/sda2 rootfstype=ext4 init=/sbin/init
    EOF

  - |
    # Create the vfat image
    truncate -s %{boot-size} /buildstream/sda1.img
    mkdosfs /buildstream/sda1.img

  - |
    # Copy all that stuff into the image
    mcopy -D s -i /buildstream/sda1.img -s /buildstream/sda1/* ::/

  - |
    # Install the bootloader on the image, it will load the
    # config file from inside the vfat boot partition
    syslinux --directory /syslinux/ /buildstream/sda1.img

  - |
    # Now create the root filesys on sda2
    truncate -s %{rootfs-size} /buildstream/sda2.img
    mkfs.ext4 -F -i 8192 /buildstream/sda2.img \
              -L root -d /buildstream/sda2

  - |
    # Create swap
    truncate -s %{swap-size} /buildstream/sda3.img
    mkswap -L swap /buildstream/sda3.img

  - |

    ########################################
    #        Partition the disk            #
    ########################################

    # First get the size in bytes
    sda1size=$(stat --printf="%s" /buildstream/sda1.img)
    sda2size=$(stat --printf="%s" /buildstream/sda2.img)
    sda3size=$(stat --printf="%s" /buildstream/sda3.img)

    # Now convert to sectors
    sda1sec=$(( ${sda1size} / %{sector-size} ))
    sda2sec=$(( ${sda2size} / %{sector-size} ))
    sda3sec=$(( ${sda3size} / %{sector-size} ))

    # Now get the offsets in sectors, first sector reserved
    # for MBR partition table
    sda1offset=1
    sda2offset=$(( ${sda1offset} + ${sda1sec} ))
    sda3offset=$(( ${sda2offset} + ${sda2sec} ))

    # Get total disk size in sectors and bytes
    sdasectors=$(( ${sda3offset} + ${sda3sec} ))
    sdabytes=$(( ${sdasectors} * %{sector-size} ))

    # Create the main disk and do the partitioning
    truncate -s ${sdabytes} /buildstream/sda.img
    parted -s /buildstream/sda.img mklabel msdos
    parted -s /buildstream/sda.img unit s mkpart primary fat32 \
       ${sda1offset} $(( ${sda1offset} + ${sda1sec} - 1 ))
    parted -s /buildstream/sda.img unit s mkpart primary ext2 \
       ${sda2offset} $(( ${sda2offset} + ${sda2sec} - 1 ))
    parted -s /buildstream/sda.img unit s mkpart primary \
       linux-swap \
       ${sda3offset} $(( ${sda3offset} + ${sda3sec} - 1 ))

    # Make partition 1 the boot partition
    parted -s /buildstream/sda.img set 1 boot on

    # Now splice the existing filesystems directly into the image
    dd if=/buildstream/sda1.img of=/buildstream/sda.img \
      ibs=%{sector-size} obs=%{sector-size} conv=notrunc \
      count=${sda1sec} seek=${sda1offset} 

    dd if=/buildstream/sda2.img of=/buildstream/sda.img \
      ibs=%{sector-size} obs=%{sector-size} conv=notrunc \
      count=${sda2sec} seek=${sda2offset} 

    dd if=/buildstream/sda3.img of=/buildstream/sda.img \
      ibs=%{sector-size} obs=%{sector-size} conv=notrunc \
      count=${sda3sec} seek=${sda3offset} 

  - |
    # Move the image where it will be collected
    mv /buildstream/sda.img %{install-root}
    chmod 0644 %{install-root}/sda.img

As you can see, the script element is a bit too verbose for this type of task. Following the pattern we have in place for the various build elements, we will soon be creating a reusable element with some simpler parameters (filesystem types, image sizes, swap size, partition table type, etc.) for the purpose of whipping together bootable images.
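To give a rough idea of the direction (purely hypothetical: the element kind and option names below are invented for illustration and are not implemented), such an element might reduce the above to something like:

kind: image   # hypothetical element kind, for illustration only
description: Create a bootable disk image of the GNOME system
depends:
- filename: gnome/gnome-system.bst
  type: build

config:
  # made-up option names sketching the parameters mentioned above
  partition-table: msdos
  boot-filesystem: vfat
  boot-size: 252000K
  root-filesystem: ext4
  rootfs-size: 4G
  swap-size: 1G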

A booting demo

So for those who want to try this at home, we’ve prepared a complete system which can be built in the build-gnome branch of the buildstream-tests repository.

BuildStream now requires Python 3.4 instead of 3.5, so this should hopefully be repeatable on most stable distros; e.g. Debian jessie ships 3.4 (and also has the required ostree and bubblewrap available in the jessie-backports repository).
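On Debian jessie, for example, installing the prerequisites might look roughly like this (the exact package names here are my assumption, so check what your distro actually provides):

# Assuming jessie-backports is already enabled in your apt sources;
# package names may differ on other distros
sudo apt-get -t jessie-backports install ostree gir1.2-ostree-1.0 bubblewrap
sudo apt-get install python3-gi python3-pip git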

Here are some instructions to get you off the ground:

mkdir work
cd work

# Clone related repositories
git clone git@gitlab.com:BuildStream/buildstream.git
git clone git@gitlab.com:BuildStream/buildstream-tests.git

# Checkout build-gnome branch
cd buildstream-tests
git checkout build-gnome
cd ..

# Make sure you have ostree and bubblewrap provided by your distro
# first, you will also need pygobject for python 3.4

# Install BuildStream as local user, does not require root
# If this fails, it's because you lack some required dependency.
cd buildstream
pip install --user -e .
cd ..

# If you've gotten this far, then the following should also succeed
# after many hours of building.
cd buildstream-tests
bst build gnome/gnome-system-image.bst

# Once the above completes, there is an image which can be
# checked out from the artifact cache.
#
# The following command will create ~/VM/sda.img
#
bst checkout gnome/gnome-system-image.bst ~/VM/

# Now you can use your favorite VM to boot the image, e.g.:
qemu-system-x86_64 -m size=1024 ~/VM/sda.img

# GDM is currently disabled in this build, once the VM boots
# you can login as root (no password) and in that VM you can run:
systemctl start gdm

# And the above will bring up gdm and start the regular
# gnome-initial-setup tool.

With SSD storage and a powerful quad core CPU, this build completes in less than 5 hours (and pretty much makes full use of your machine’s resources all along the way). All told, the build takes around 40GB of disk space to build and store the results of around 500 modules. I would advise having at least 50GB of free space for this though, especially to account for some headroom in the final step.

Note: This is not an up to date GNOME system based on current modulesets yet, but rather a clone/conversion of the system I tried integrating last year using YBD. I will soon be starting on creating a more modular repository which builds only the components relevant to GNOME and follows the releases, for that I will need to open some dialog and sort out some of the logistics.

Note on modularity

The mentioned buildstream-tests repository is one huge repository with build metadata to build everything from the compiler up to a desktop environment and some applications.

This is not what we ultimately want because, first off, it’s obviously a huge mess to maintain and you don’t want your project to be littered with build metadata that you’re not going to use (which is what happens when forking projects like buildroot). Secondly, even when you are concerned with building an entire operating system from scratch, we have found that without modularity, changes introduced in the lower levels of the stack tend to be pushed onto the stacks which consume those modules. This introduces much friction in the development and integration process for such projects.

Instead, we will eventually be using recursive pipeline elements to allow modular BuildStream projects to depend on one another in such a way that consuming projects can always decide what version of a project they depend on will be used.

 

April 09, 2017

GMime 2.99.0 released

After a long hiatus, I am pleased to announce the release of GMime 2.99.0!

See below for a list of new features and bug fixes.


About GMime

GMime is a C library which may be used for the creation and parsing of messages using the Multipurpose Internet Mail Extension (MIME), as defined by numerous IETF specifications.

GMime features an extremely robust high-performance parser designed to be able to preserve byte-for-byte information, allowing developers to re-serialize the parsed messages back to a stream exactly as the parser found them. It also features integrated GnuPG and S/MIME v3.2 support.

Built on top of GObject (the object system used by the GNOME desktop), many developers should find its API design and memory management very familiar.


Noteworthy changes in version 2.99.0

  • Overhauled the GnuPG support to use GPGME under the hood rather than a custom wrapper.
  • Added S/MIME support, also thanks to GPGME.
  • Added International Domain Name support via GNU's libidn.
  • Improved the GMimeMessage APIs for accessing the common address headers. They now all return an InternetAddressList.
  • g_mime_init() no longer takes any flag arguments and the g_mime_set_user_charsets() API has also been dropped. Instead, GMimeParserOptions and GMimeFormatOptions have taken the place of these APIs to allow customization of various parser and formatting options in a much cleaner way. To facilitate this, many parsing functions and formatting functions have changed to now take these options arguments.
  • InternetAddress now has a 'charset' property that can be set to override GMime's auto-detection of the best charset to use when encoding names.
  • GMimeHeaderIter has been dropped in favor of a much simpler index-based API on GMimeHeaderList.
  • GMimeHeaderList no longer caches the raw message/mime headers in a stream. Instead, each GMimeHeader now has its own cache. This means that changing the GMimeHeaderList or any of its GMimeHeaders no longer invalidates the entire cache.
  • GMimeParser has been fixed to preserve (munged or otherwise) From-lines that sometimes appear at the start of the content of message/rfc822 parts.
  • GMimeParser now also scans for encapsulated PGP blocks within MIME parts as it is parsing them and sets a flag on each GMimePart that contains one of these blocks.
  • GMimePart now has APIs for dealing with said encapsulated PGP blocks.

Developers interested in migrating to the upcoming GMime 3.0 API (of which GMime 2.99.0 is a preview) should take a look at the PORTING document included with the source code as it contains a fairly comprehensive list of the API changes that they will need to be aware of.


Getting the Source Code

You can download official public release tarballs of GMime at https://download.gnome.org/sources/gmime/ or ftp://ftp.gnome.org/pub/GNOME/sources/gmime/.

If you would like to contribute to the GMime project, it is recommended that you grab the source code from the official GitHub repository at https://github.com/jstedfast/gmime. Cloning this repository can be done using the following command:

git clone https://github.com/jstedfast/gmime.git

Documentation

API reference documentation can be found at https://developer.gnome.org/gmime/2.99/.

Documentation for getting started can be found in the README.md.

Spring is here!

Spring is here, the time of cherry blossoms and warmer weather (well, for people in the northern hemisphere only!). So Aryeom drew a new header image for our blog (as usual, drawn in GIMP and licensed under Creative Commons BY-SA).

Very soon more news on GIMP and ZeMarmot!

A quick look at the Ikea Trådfri lighting platform

Ikea recently launched their Trådfri smart lighting platform in the US. The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it's surprisingly competent. Hardware-wise, the device is pretty minimal - it seems to be based on the Cypress[1] WICED IoT platform, with 100MBit ethernet and a Silicon Labs Zigbee chipset. It's running the Express Logic ThreadX RTOS, has no running services on any TCP ports and appears to listen on a single UDP port. As IoT devices go, it's pleasingly minimal.

That single port seems to be a COAP server running with DTLS and a pre-shared key that's printed on the bottom of the device. When you start the app for the first time it prompts you to scan a QR code that's just a machine-readable version of that key. The Android app has code for using the insecure COAP port rather than the encrypted one, but the device doesn't respond to queries there so it's presumably disabled in release builds. It's also local only, with no cloud support. You can program timers, but they run on the device. The only other service it seems to run is an mdns responder, which responds to the _coap._udp.local query to allow for discovery.
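If you want to verify the mDNS advertisement on your own network, a standard service browse will resolve it (a small sketch, assuming avahi-utils is installed and the gateway is on the same LAN):

# Resolve the gateway's advertised CoAP service and exit
avahi-browse -rt _coap._udp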

From a security perspective, this is pretty close to ideal. Having no remote APIs means that security is limited to what's exposed locally. The local traffic is all encrypted. You can only authenticate with the device if you have physical access to read the (decently long) key off the bottom. I haven't checked whether the DTLS server is actually well-implemented, but it doesn't seem to respond unless you authenticate first which probably covers off a lot of potential risks. The SoC has wireless support, but it seems to be disabled - there's no antenna on board and no mechanism for configuring it.

However, there's one minor issue. On boot the device grabs the current time from pool.ntp.org (fine) but also hits http://fw.ota.homesmart.ikea.net/feed/version_info.json . That file contains a bunch of links to firmware updates, all of which are also downloaded over http (and not https). The firmware images themselves appear to be signed, but downloading untrusted objects and then parsing them isn't ideal. Realistically, this is only a problem if someone already has enough control over your network to mess with your DNS, and being wired-only makes this pretty unlikely. I'd be surprised if it's ever used as a real avenue of attack.
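Out of curiosity, the firmware feed is easy to inspect yourself (assuming the endpoint mentioned above is still serving it):

# Fetch the plain-http firmware metadata the gateway checks on boot
curl -s http://fw.ota.homesmart.ikea.net/feed/version_info.json | python3 -m json.tool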

Overall: as far as design goes, this is one of the most secure IoT-style devices I've looked at. I haven't examined the COAP stack in detail to figure out whether it has any exploitable bugs, but the attack surface is pretty much as minimal as it could be while still retaining any functionality at all. I'm impressed.

[1] Formerly Broadcom


April 08, 2017

Insights into the GNOME 3.24 Release Video

What a month! 3.24 is out, the revamped newcomers guide is out and I’m still trying to catch my breath here. This blog post will go a bit behind the scenes of the 3.24 release video.

First, here’s a closer look at the process of making a release video. These videos are a big effort on my part, but they are made possible thanks to many others. Of course, this is just an approximate visualization of the time spent and how the processes are laid out. In reality much of it intertwines a lot more, as the video and its assets are created in several iterations.

The process


Time spent on the release video.


Visualization of the release video creation process

First, highlights from the new changes to applications and developer tools are chosen in the draft release notes. From this a manuscript draft is created and sent to the engagement list. Once the structure is approximately in place we can start recording footage. Much of the footage of the applications was this time provided by developers and application contributors. This meant I could spend extra time working on the animations themselves and I really enjoyed that part! A large majority of the time I was livestreaming my work on my twitch channel. Recording footage might sound like something trivial to do, but this actually normally takes up a large amount of time for me because:

  • The recordings require the latest unstable application version. This can be either super easy or very time consuming if the application doesn’t build, doesn’t run, or isn’t up to date in Flatpak, Rawhide, or JHBuild.
  • The application needs to be in a state which exposes what needs to be recorded. There are typically a few cool features which require special hardware (e.g. touchscreens or drawing tablets) or need to be populated with some sample data (content applications).

So to all the developers and maintainers helping me with the special cases, thank you very much! I hope you don’t mind if I ask of your assistance again sometime in the future.

Once the manuscript is in good shape, it’s ready to be sent to Karen and Mike who help with the final revision and voice-over. On the sideline I have been working with Simon (@TheBaronHimself) who has produced the music for the video. This has been going on since the manuscript was still being written and having music produced from scratch for the video really upped the quality! The music is designed to work together with the content in the video, take for example how the music is timed to sound different when we talk about new developer features.

Mid-March Simon sent a draft of the music and I had a draft of the video, which we then synchronized. This marks the editing freeze, which freezes the timing of Karen’s voice, this time 7 days before the release of GNOME 3.24. This is a new constraint that I put on the editing process in order to give translators a chance to translate the release video, so that as many subtitle translations as possible are available at release.

We managed to release the video a day after the release of GNOME 3.24. The slight delay was partly because timing the music proved quite difficult due to the editing freeze, but Simon and I now have some experience dealing with this, so we will come up with a better approach for the next video.

Source files

The manuscript is available here. I have also uploaded blender source files to this public git repository.

I’ll end this blog post by showcasing a few animations, some of which gave me new learning opportunities and some of which were among the fun things I worked on in this video:

a lock object with a constraint copying the rotation and noise from an empty with animated influence.


an array and bend modifier with f-curve offset.


many smaller animations that I had fun making to represent our teams in GNOME.

Thanks to the translation team, the design team, the engagement team, all the developers who helped me record footage, Karen and Mike for the voice-over, and Simon for producing the music. These videos would not be possible without help from all these people in the GNOME community. :)

This video was made using Blender, GIMP and Inkscape. It is satisfying to know that I can produce all of this using a free software pipeline.

April 07, 2017

Inclusive-Or: Hospitality in Bug Tracking

Lindsey Kuper asked:

I’m interested in hearing about [open source software] projects that have successfully adopted an "only insiders use the issue tracker" approach. For instance, a project might have a mailing list where users discuss bugs in an unstructured way, and project insiders distill those discussions into bug reports to be entered into the issue tracker. Where does this approach succeed, and where does it fail? How can projects that operate this way effectively communicate their expectations to non-insider users, especially those users who might be more accustomed to using issue trackers directly?
More recently, Jillian C. York wrote:

...sick of "just file a bug with us through github!" You realize that's offputting to your average users, right?

If you want actual, average users to submit bugs, you know what you have to do: You have to use email. Sorry, but it's true.

Oh, and that goes especially for high-risk users. Give them easy ways to talk to you. You know who you are, devs.

Both Kuper and York get at the same question: how do we open source maintainers get the bug reports we need, in a way that works for us and for our users?

My short answer is that open source projects should have centralized bug trackers that are as easy as possible to work in as an expert user, and that they should find automated ways to accept bug reports from less structured and less expert sources. I'll discuss some examples and then some general principles.

Dreamwidth: Dreamwidth takes support questions via a customer support interface. The volunteers and paid staff answering those questions sometimes find that a support request reveals a bug, and then file it in GitHub on the customer's behalf, then tell her when it's fixed. (Each support request has a private section that only Support can see, which makes it easier to track the connection between Support requests and GitHub issues, and Support regulars tend to have enough ambient awareness of both Support and GitHub traffic to speak up when relevant issues crop up or get closed.) Dreamwidth users and developers who are comfortable using the GitHub issue tracker are welcomed if they want to file bugs there directly instead.

Dreamwidth also has a non-GitHub interface for feature suggestions: the suggestions form is the preferred interface for people to suggest new features for Dreamwidth. Users post their suggestions into a queue and a maintainer chooses whether to turn that suggestion into a post for open discussion in the dw-suggestions community, or whether to bounce it straight into GitHub (e.g., for an uncontroversial request to whitelist a new site for media embedding or add a new site for easy cross-site user linking, or at the maintainer's prerogative). Once a maintainer has turned a suggestion into a post, other users use an interface familiar to them (Dreamwidth itself) to discuss whether they want the feature. Then, if they and the maintainer come to consensus and approve it, the maintainer adds a ticket for it to GitHub. That moderation step has been a bottleneck in the past, and the process of moving a suggestion into GitHub also hasn't yet been automated.

Since discussion about site changes needs to include users who aren't developers, Dreamwidth maintainers prefer that people use the suggestions form; experienced developers sometimes start conversations in GitHub, but the norm (at least the official norm) is to use dw-suggestions; I think the occasional GitHub comment suffices for redirecting these discussions.

Zulip: We use GitHub issues. The Zulip installations hosted by Kandra Labs (the for-profit company that stewards the open source project) also have a "Send feedback" button in one of the upper corners of the Zulip web user interface. Clicking this opens a private message conversation with feedback-at-zulip.com, which users used more heavily when the product was younger. (We also used to have a nice setup where we could actually send you replies in-Zulip, and may bring that back in the future.)

I often see Tim Abbott and other maintainers noticing problems that new users/customers are having and, while helping them (via the zulip-devel mailing list, via the Zuliping-about-Zulip chat at chat.zulip.org, or in person), opening GitHub issues about the issue, as the next step towards a long-term fix. But -- as with the Dreamwidth example -- it is also fine for people who are used to filing bug reports or feature requests directly to go ahead and file them in GitHub. And if Tim et alia know that the person they're helping has that skill and probably has the time to write up a quick issue, then the maintainers will likely say, "hey would you mind filing that in GitHub?"

We sometimes hold live office hours at chat.zulip.org. At yesterday's office hour, Tim set up a discussion topic named "warts" and said,

I think another good topic is to just have folks list the things that feel like they're some of our uglier/messier parts of the UI that should be getting attention. We can use this topic to collect them :).

Several people spoke up about little irritations, and we ended up filing and fixing multiple issues. One of Zulip's lead developers, Steve Howell, reflected: "As many bug reports as we get normally, asking for 'warts' seems to empower customers to report stuff that might not be considered bugs, or just empower them to speak up more." I'd also point out that some people feel more comfortable responding to an invitation in a synchronous conversation than initiating an asynchronous one -- plus, there's the power of personal invitation to consider.

As user uptake goes up, I hope we'll also have more of a presence on Twitter, IRC, and Stack Overflow in order to engage people who are asking questions there and help them there, and get proto-bug reports from those platforms to transform into GitHub issues. We already use our Twitter integration to help -- if someone mentions Zulip in a public Tweet, a bot tells us about it in our developers' livechat, so we can log into our Twitter account and reply to them.

MediaWiki and Wikimedia: Wikipedia editors and other contributors have a lot of places they communicate about the sites themselves, such as the technical-issues subforum of English Wikipedia's "Village Pump", and similar community-conversation pages within other Wikipedias, Wikivoyages, etc. Under my leadership, the team within Wikimedia Foundation's engineering department that liaised with the larger Wikimedia community grew more systematic about working with those Wikimedia spaces where users were saying things that were proto-bug reports. We got more systematic about listening for those complaints, filing them as bugs in the public bug tracker, and keeping in touch with those reporters as bugs progressed -- and building a kind of ambassador community to further that kind of information dissemination. (I don't know how well that worked out; I think we built a better social infrastructure for people who were already doing that kind of volunteer work ad hoc, but I don't know whether we succeeded in recruiting more people to do it, and I haven't kept a close eye on how that's gone in the years since I left.)

We also worked to make it easy for people to report bugs into the main bug tracker. The Bugzilla installation we had for most of the time that I was at Wikimedia had two bug reporting forms: a "simple" submission form that we pointed most people to, with far fewer fields, and an "advanced" form that Wikimedia-experienced developers used. They've moved to Phabricator now, and I don't know whether they've replicated that kind of two-lane approach.

A closed-source example: FogBugz. When I was at Fog Creek Software doing sales and customer support, we used FogBugz as our internal bug tracker (to manage TODOs for our products,* and as our customer relationship manager). Emails into the relevant email addresses landed in FogBugz, so it was easy for me to reply directly to help requests that I could fix myself, and easy for me to note "this customer support request demonstrates a bug we need to fix" and turn it into a bug report, or open a related issue for that bug report. If I recall correctly, I could even set the visibility of the issue so the customer could see it and its progress (unusual, since almost all our issue-tracking was private and visible only within the company).

An interface example: Debian. Debian lets you report bugs via email and via the command-line reportbug program. As the "how to use BTS" guide says,

some spam messages managed to send mails to -done addresses. Those are usually easily caught, and given that everything can get reverted easily it's not that troublesome. The package maintainers usually notice those and react to them, as do the BTS admins regularly.

The BTS admins also have the possibility to block some senders from working on the bug tracking system in case they deliberately do malicious things.

But being open and inviting everyone to work on bugs totally outweighs the troubles that sometimes pop up because of misuse of the control bot.

And that leads us to:

General guidelines: Dreamwidth, Zulip, MediaWiki, and Debian don't discourage people from filing bug reports in the official central bug tracker. Even someone quite new to a particular codebase/project can file a very helpful and clear bug report, after all, as long as they know the general skill of filing a good bug report. Rather, I think the philosophy is what you might find in hospitable activism in general: meet people where they are, and provide a means for them to conveniently start the conversation in a time, place, and manner that's more comfortable for them. For a lot of people, that means email, or the product itself.

Failure modes can include:

  • a disconnect among the different "places" such that the central bug tracker is a black hole and nothing gets reported back to the more accessible place or the original reporter
  • a feeling of elitism where only special important people are allowed to even comment in the main bug tracker
  • bottlenecks such that it seems like there's a non-bug-tracker way to report a question or suggestion but that process has creaked to a halt and is silently blocking momentum
  • bottlenecks in bug triage
  • brusque reaction at the stage where the bug report gets to the central bug tracker (e.g., "oh that's a duplicate; CLOSE" without explanation or thanks), which jars the user (who's expecting more explicit friendliness) and which the user perceives as hostile

Whether or not you choose to increase the number of interfaces you enable for bug reporting, it's worth improving the user experience for people reporting bugs into your main bug tracker. Tedious, lots-of-fields issue tracker templates and UIs decrease throughput, even for skilled bug reporters who simply aren't used to the particular codebase/project they're currently trying to file an issue about. So we should make that easier. You can provide an easy web form, as Wikimedia did via the simplified Bugzilla form, or an email or in-application route, as Debian does.

And FLOSS projects oughta do what the Accumulo folks did for Kuper, too, saying, "I can file that bug for you." We can be inclusive-or rather than exclusive-or about it, you know? That's how I figure it.


* Those products were CityDesk, Copilot, and FogBugz -- this was before Kiln, Stack Overflow, Trello, and Glitch.

Thanks to Lindsey Kuper and Jillian C. York for sparking this post, and thanks to azurelunatic for making sure I got Dreamwidth details right.

The new contribution workflow for GNOME

Hello community,

I have big, good news to share with you. You might know we have been working for years on materializing what we wanted the future of contribution to be; we did multiple iterations and we worked full time on our developer experience… and finally, I’m glad to announce, we achieved it: we have a new way to contribute to GNOME!

One image says more than 1000 words: the whole process of contributing to GNOME is as easy as you will see, and it’s all documented in the new newcomers wiki.


No specific distribution required. No specific version required. No dependency hell. Reproducible: if it builds for me, it will build for you. All with a UI and integrated, no terminal required. Less than five minutes of downloading plus building and you are contributing.

Can you imagine how this changes the GNOME contribution story? We went from requiring either the latest Fedora or Ubuntu, fighting dependencies and random issues, and building more than 80 modules just to contribute to a single app. It was a pain.

As an example, Nautilus with the previous tool and workflow took around 6 hours the first time if no issues were present. Now it’s 5 minutes, with no possible build issues (forgive exceptions to the rule 🙂 ).

I think we just opened a new world for contributors.

The work behind it

Of course, a change as big as this didn’t come overnight. This is possible because GNOME and sponsors put the time and resources into it, with rock stars like Alex Larsson creating Flatpak and Christian Hergert creating Builder, both working for years nonstop on these technologies with no short-term benefit.

Finally the benefit is here: the future we imagined and shaped 5 years ago is coming together, and it’s shining.

Thanks a lot to the people involved, especially Bastian Ilso for his guidance, design, and writing of the new wiki guide.

Hope you enjoy all the work we did. I’m looking forward to your feedback and to fixing the issues you may find (contact us on IRC in #newcomers). And soon, to seeing your first contribution to GNOME done 🙂

 

PS: Please follow the newcomers wiki to get it working. A lot of the work to make this happen landed in Flatpak 0.9.1, while Ubuntu 16.04 ships 0.8.4 for now, so we recommend using a PPA to keep it updated. I tested thoroughly on Ubuntu 16.04 and Fedora 25, and it works out of the box following the wiki, as long as Flatpak is up to date. Thanks all for the feedback so far! 🙂
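For reference, on Ubuntu 16.04 getting a newer Flatpak might look roughly like this (the PPA name below is an assumption on my part; the newcomers wiki is the authoritative source):

# Add a PPA that ships a recent Flatpak, then install it
sudo add-apt-repository ppa:alexlarsson/flatpak
sudo apt update
sudo apt install flatpak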

PS2: I just realized I made a small error when switching to the new wiki, and the instructions for Ubuntu 16.04 and the PPA got lost. It’s fixed now; try again and tell us how it goes! 🙂

PS3: Cool video of Jono Bacon showing what Endless does with the same technology: https://twitter.com/jonobacon/status/817059475437879305


gspell maintenance

The gspell bug tracker is perfect again: only feature requests are left (marked as enhancements).

I’ve fixed two bugs recently, the second one was not that easy to fix:

  • One crash (a failed assertion) probably due to a bug in an underlying library.
  • A responsiveness problem when editing long lines. It turned out that the spell-checking code for GtkTextView was very slow (200 ms to re-check a long line). So I’ve written a new implementation which is 20x faster: at 10 ms it’s now responsive.

And I regularly do other various maintenance tasks in gspell, as can be seen in the Git repository.

gspell is now used by at least 6 applications (see the list on the wiki page), and with both GtkTextView and GtkEntry support I’m sure a lot more applications will use it in the future.

If you like the work I’m doing, the gspell fundraising is still open. Your donations encourage me to continue to take care of gspell, to make it a rock-solid library and well-maintained in the long term. Thanks!

April 06, 2017

GNOME Paint - simple drawing app for GNOME

TL;DR
I've started working on a simple drawing application for GNOME. Current state: just started (see the current screenshot [1]), but progressing. Help needed (especially UX people).

Motivation
A couple of weeks ago I needed to make a very simple modification to an image: some cutting, moving part of the image from left to right, drawing a few lines. It turned out that I couldn't easily find software that I could use. I tried the following:
  • gnome-paint [2] - the name (GNOME-XXX) convinced me to try this tool first. My first attempt at modifying the image (AFAIR I just wanted to resize the canvas) - crash. OK, that happens. Fortunately, we live in the open source world, so I can fix it. I downloaded the sources, and... bang! The project has been unmaintained for a while (last commit more than 6 years ago), GTK+ 2, etc. - I gave up. I didn't have that much time; I just needed to make simple modifications to the image. So let's try another tool:
  • gpaint [3] - the same story: unmaintained, and it crashed after a few clicks. I kept searching and found:
  • GNU Paint [4] - this app looks awful (especially if you are used to beautiful GNOME apps) but at least it seems functional. Unfortunately, it didn't work for me either. After a few minutes of using it, the app just stopped responding to input (yes, that means I couldn't save my work; I had to take a screenshot...). And finally, I installed:
  • KolourPaint [5], along with tons of KDE stuff... seriously, I couldn't believe that I couldn't easily find a simple image editor for GNOME...
Hey, but what about GIMP?
Sorry to say it, but it's not a SIMPLE image editor. Every time I tried to do something that would be simple in MS Paint, I had to search for a tutorial. Every single time.

You didn't do your research well enough; I know tool XXX which is based on GTK+
That's true, I didn't spend that much time on research. However, after a few failures I was just tired of it. I just wanted to make a simple modification, so I finally ended up with an app that I used to use some time ago, which was stable, functional, and simple. Now I'm aware of a few more apps (Pinta, mtpaint, etc.).

GNOME Paint
I've decided to start a new project - GNOME Paint. You can find some of the reasons in the "Motivation" paragraph; here are the others:
  1. There is no GNOME-style drawing editor. It would be nice to have one. If the app is good enough and people like it, it might become part of the GNOME core apps (but that's a very long way off).
  2. I finally want to dive deep into the GTK+ framework. I have used it for several projects (mostly gtkmm), but without a deep understanding of the framework, and working on a real project is IMO a better way of learning a framework than just reading the manual.
  3. I'm going to use it a lot, and it's also fun using software that you made on your own.
Current state
See the screenshot below. I have a high-level design and some basic data structures, but the minimal functionality is still missing, so it's not even close to a first release. Also, the UI needs to be redone (I haven't paid much attention to it so far). Hopefully I'll be able to do a simple demo of the first release during the lightning talks at GUADEC, but no promises.

Help needed
Any help (most importantly from UX and GNOME HIG experts) is very welcome!


Links

You get what you pay for so start paying for your media

Warning! This is not a directly Linux/Tech related blog post (it is not the only thing I care about in this world :).

One thing that has been on my mind for a while is the state of journalism. The quality of journalism seems to have been declining over the last decade, and I think it is clear that the new internet-driven expectation that news content is free for the consumer is a big part of the explanation. We all know that newspapers and TV news teams have seen their staff cut as advertising revenue has not been strong enough to keep staffing up. And in my opinion advertising is in itself a horrible way to finance something like news and information, as it drives a lot of unwanted behaviour, both in terms of avoiding critical journalism that might drive away advertisers and in terms of an intensive push for ‘clicks’ per news article that often comes at the expense of accuracy and makes news very scandal-driven.

So I have come to believe over the last few years that if we want to see quality journalism and a healthy democracy, we need to move away from free news content and accept that we get what we pay for, which means starting to pay for our news again. This is true both for mainstream media and for topical media like tech media.

So as a start, I began paying for some of my most used news sites last year. I am now a paying subscriber to The Economist, which is one of the best sources of quality news in my opinion, and on the tech side I am a paying subscriber to Phoronix (I am also a paying subscriber to LWN through work). Anyway, I feel strongly enough about this to write this blog post and hope that other people reading it agree with my thinking and start paying for the content they enjoy, be that through subscriptions, Patreon, or similar. And maybe we can be part of a process to change the expectation and understanding of the value of well-funded independent media. Let’s help make news something that is made to inform us as readers, and not something that is made to help someone sell something to us.
