January 19, 2019

My Name is Handy, Lib Handy

Libhandy 0.0.7 just got released! I haven't blogged about this mobile- and adaptive-oriented GTK widget library since the release of version 0.0.4 three months ago, so let's catch up on what has been added since then.

List Rows

A common pattern in GNOME applications is lists, which are typically implemented via GtkListBox. More specific patterns arose, where rows have a title at the start, an optional subtitle below it, actions at the end and an icon or some other widget like a radio button as a prefix. These rows can also be expanded to reveal nested rows or anything else that fits the need.

So far, every application using these patterns implemented the rows by hand, for each and every row. This made them a bit cumbersome to use and led to inconsistencies in sizing, even inside a single application. To make these patterns easier to use, we implemented HdyActionRow, HdyComboRow and HdyExpanderRow.

The row widgets in action

HdyActionRow

The action row is a simple and flexible row: it lets you give it a title, a subtitle, an icon name, action widgets at its end, prefix widgets at its start, and other widgets below it. It takes care of the base layout of the row while giving you control over what to do with it.
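
For instance, a minimal sketch of building such a row in C could look like this; the property names and the hdy_action_row_add_action() helper follow my reading of the 0.0 series API, so double-check them against the documentation of your Libhandy version:

#define HANDY_USE_UNSTABLE_API
#include <gtk/gtk.h>
#include <handy.h>

static GtkWidget *
build_list (void)
{
    GtkWidget *list = gtk_list_box_new ();

    /* Construct via properties to set the title, subtitle and icon. */
    GtkWidget *row = g_object_new (HDY_TYPE_ACTION_ROW,
                                   "title", "Fullscreen",
                                   "subtitle", "Use the whole screen",
                                   "icon-name", "view-fullscreen-symbolic",
                                   NULL);

    /* Pack an action widget at the end of the row. */
    hdy_action_row_add_action (HDY_ACTION_ROW (row), gtk_switch_new ());

    gtk_container_add (GTK_CONTAINER (list), row);

    return list;
}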

HdyComboRow

The combo row lets the user pick a single value from a list model. It is quite convenient to use, as you can even set it up for an enumeration GType.
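
As a sketch, setting a combo row up for an enumeration could look like the following; hdy_combo_row_set_for_enum() and the hdy_enum_value_row_name() helper are my recollection of the 0.0 API, so treat them as assumptions to verify against the documentation:

#define HANDY_USE_UNSTABLE_API
#include <gtk/gtk.h>
#include <handy.h>

static GtkWidget *
build_license_row (void)
{
    HdyComboRow *row = HDY_COMBO_ROW (hdy_combo_row_new ());

    /* Offer the nicks of an enumeration GType as the row's choices. */
    hdy_combo_row_set_for_enum (row, GTK_TYPE_LICENSE,
                                hdy_enum_value_row_name, NULL, NULL);

    return GTK_WIDGET (row);
}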

HdyExpanderRow

The expander row allows you to reveal a widget below the row, like a nested list of options. It optionally lets you add a switch that controls whether the row can be expanded to access the nested widget.

Adaptive Dialog

HdyDialog is a dialog which behaves like a regular GtkDialog under normal conditions, but which automatically adapts its size to that of its parent window and replaces its window decorations with a back button if that parent window is small, e.g. if it is used on a phone. This means that HdyDialog will act like a regular dialog on form factors like a desktop, a laptop or a tablet, but it will act like another view of the main window if it is used on a phone or in a really narrow window. HdyDialog has been written by Zander Brown, thanks a lot!
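
Using it should require little more than swapping the constructor; here is a minimal sketch, assuming a hdy_dialog_new() constructor that takes the parent window:

#define HANDY_USE_UNSTABLE_API
#include <gtk/gtk.h>
#include <handy.h>

static void
show_details_dialog (GtkWindow *parent)
{
    /* Behaves like a GtkDialog, but collapses into a sub-view of the
     * parent window when that window is narrow (e.g. on a phone). */
    GtkWidget *dialog = GTK_WIDGET (hdy_dialog_new (parent));

    gtk_window_set_title (GTK_WINDOW (dialog), "Details");
    gtk_widget_show (dialog);
}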

Adaptive Search Bar

HdySearchBar is a reimplementation of GtkSearchBar that allows the search entry to be expanded to take up all the available space. This allows for an expanded HdyColumn between the search entry and the search bar, automatically adapting the width allocated to the search entry to the one allocated to the bar.
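
A sketch of that combination follows; the hdy_column_set_maximum_width() setter is an assumption on my part, so verify it against the documentation:

#define HANDY_USE_UNSTABLE_API
#include <gtk/gtk.h>
#include <handy.h>

static GtkWidget *
build_search_bar (void)
{
    GtkWidget *bar = GTK_WIDGET (hdy_search_bar_new ());
    GtkWidget *column = GTK_WIDGET (hdy_column_new ());
    GtkWidget *entry = gtk_search_entry_new ();

    /* Let the entry expand with the bar, but cap its width. */
    hdy_column_set_maximum_width (HDY_COLUMN (column), 500);

    gtk_container_add (GTK_CONTAINER (column), entry);
    gtk_container_add (GTK_CONTAINER (bar), column);
    hdy_search_bar_connect_entry (HDY_SEARCH_BAR (bar), GTK_ENTRY (entry));

    return bar;
}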

GtkSearchBar from GTK 4 already handles this correctly, so HdySearchBar will not be ported to GTK 4.

Internationalization

Libhandy now supports internationalization: there are no end-user-facing strings, but developer-facing strings like property descriptions can now be localized.

Initialization

The hdy_init() function has been added; it will initialize the internationalization, the types, and the resources, ensuring Libhandy will work in any context.
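
A minimal sketch of its use, assuming the gtk_init()-style signature of the 0.0 series:

#define HANDY_USE_UNSTABLE_API
#include <gtk/gtk.h>
#include <handy.h>

int
main (int argc, char *argv[])
{
    gtk_init (&argc, &argv);

    /* Set up Libhandy's translations, types and resources. */
    hdy_init (&argc, &argv);

    /* ... build the UI and run the main loop ... */
    return 0;
}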

Annotation of Symbols Introduction

We started annotating the API with the version in which each symbol was introduced. The documentation can now tell you what is available in your current Libhandy version, and which version of Libhandy you should require to use a specific feature.

glade_catalog and introspection Options

The glade_catalog and introspection options have been turned from booleans into features, which means we broke the build system's interface: true and false are not valid values anymore and should be replaced by enabled, disabled or auto. Their default value is auto, which means that if you don't care about the availability of these features, you don't have to care about these options anymore.

Making Libhandy Static

The static boolean option has been added to allow Libhandy to be built as a static library. Note that the introspection and the Glade catalog can't be built when building Libhandy as a static library.

Bundle Libhandy in a Flatpak Manifest

To bundle the master version of Libhandy in your Flatpak manifest, simply add the following module:

{
    "name" : "libhandy",
    "buildsystem" : "meson",
    "builddir" : true,
    "config-opts": [
        "-Dexamples=false",
        "-Dtests=false"
    ],
    "sources" : [
        {
            "type" : "git",
            "url" : "https://source.puri.sm/Librem5/libhandy.git"
        }
    ]
}

Bundle Libhandy as a Meson Subproject

To use Libhandy 0.0.7 as a Meson subproject, first add Libhandy as a git submodule:

git submodule add https://source.puri.sm/Librem5/libhandy.git subprojects/libhandy
cd subprojects/libhandy
git checkout v0.0.7 # Or any version of your choice.
cd ../..
git add subprojects/libhandy

Then add this to your Meson build system (adapt the package sub-directory name to your needs):

libhandy_dep = dependency('libhandy-0.0', version: '>= 0.0.7', required: false)
if not libhandy_dep.found()
  libhandy = subproject(
    'libhandy',
    install: false,
    default_options: [
      'examples=false',
      'package_subdir=my-project-name',
      'tests=false',
    ]
  )
  libhandy_dep = libhandy.get_variable('libhandy_dep')
endif

If you don't require introspection and you don't care about localization, you can alternatively build it as a static library:

libhandy_dep = dependency('libhandy-0.0', version: '>= 0.0.7', required: false)
if not libhandy_dep.found()
  libhandy = subproject(
    'libhandy',
    install: false,
    default_options: [
      'examples=false',
      'static=true',
      'tests=false',
    ]
  )
  libhandy_dep = libhandy.get_variable('libhandy_dep')
endif

Librem 5 DevKits

As a side note: the Librem 5 devkits shipped at the very end of 2018; here are photos of mine! I'm eager to play with Libhandy on it.

Back view of the Librem 5 devkit
Front view of the Librem 5 devkit
Additional content coming with the Librem 5 devkit

Starting on a new map rendering library

Currently in Maps, we use the libchamplain library to display the bitmap map tiles (based on OpenStreetMap data and aerial photography) that we get from our tile provider, currently MapBox. This library is based on Clutter and is used via the GTK+ embed support within libchamplain, which in turn makes use of the Clutter GTK embed support. Since this will not be supported when moving along to GTK+ 4.x, and the Clutter library is not maintained anymore (besides the copy of it that is included in Mutter, the GNOME Shell window manager/Wayland compositor), Maps will eventually have to find a replacement. There are also some wonky bugs, especially with regards to the mixing of event handling on the Clutter side vs. the GTK+ side.

So to at least get the ball rolling a bit, I recently decided to see how hard it would be to take the code from libchamplain, keep the grotty deep-down internals dealing with tile downloading and caching and such, and refocus the top-level parts onto new GTK+ 4 technologies such as the Snapshot, GSK (scene graph), and render node APIs.

Picture in the public domain, from Wikipedia
I decided to call the new library “libshumate” in honor of Jessamine Shumate, who was an artist, historian, and cartographer.

The code currently lives in this personal repo: https://gitlab.gnome.org/mlundblad/libshumate
So far it's not so exciting, as I've only done some cleanups: based off the Meson build system port for libchamplain, I removed support for the GNU Autotools build system, and removed support for the unmaintained Memphis renderer library and for the GTK+ Champlain widget, as the plan is to rework the library to use GTK+ facilities directly. I've gone through all the files and renamed the API to use the new name. Rather than using something like sed, I went through all source and header files in GNOME Builder and used search and replace; this way I got a quick glance at the internals 😎.

The next step will probably be to change the “top” class into a GTK+ widget and first try to just display the initially downloaded tiles using GSK, leaving out all the other functionality at first (handling input, the overlay layers and so on).

Let's see how it goes…

January 17, 2019

Builder 3.32 Sightings

We just landed the largest refactor to Builder since its inception. Somewhere around 100,000 lines of code were touched, which is substantial for a single development cycle. I wrote a few tools to help us do that work, because that's really the only way to do such a large refactor.

Not only does the refactor make things easier for us to maintain but it will make things easier for contributors to write new plugins. In a future blog post I’ll cover some of the new design that makes it possible.

Let’s take a look at some of the changes in Builder for 3.32 as users will see them.

First we have the greeter. It looks virtually the same as before. However, it no longer shares its windowing with the workspace that is opened. Taking this approach allowed us to simplify a great deal of Builder's code-base and allows for a new feature you'll see later.

The UI might change before 3.32, but that depends on our available time and some freezes.

Builder now gives some feedback about what files were removed when cleaning up old projects.

Builder gained support for more command-line options which can prove useful in simplifying your application's setup procedure. For example, you can run gnome-builder --clone https://gitlab.gnome.org/GNOME/gnome-builder.git to be taken directly to the clone dialog for a given URL.

The clone activity provides various messaging in case you need to debug some issues during the transfer. I may hide this behind a revealer by default, I haven’t decided yet.

Creating a new project allows specifying an application-id, which is good form for desktop applications.

We also moved the “Continue” button out of the header bar and placed it alongside content since a number of users had difficulty there.

The “omni-bar” (center of header bar) has gained support for moving through notifications when multiple are present. It can also display buttons and operational progress for rich notifications.

Completion hasn’t changed much since last cycle. Still there, still works.

Notifications that support progress can also be viewed from our progress popover similar to Nautilus and Epiphany. Getting that circle-pause-button aligned correctly was far more troublesome than you’d imagine.

The command-bar has been extracted from the bottom of the screen into a more prominent position. I do expect some iteration on design over the next cycle. I’ve also considered merging it into the global search, but I’m still undecided.

Also on display is the new project-less mode. If you open Builder for a specific file via Nautilus or gnome-builder foo.c you’ll get this mode. It doesn’t have access to the foundry, however. (The foundry contains build management and other project-based features).

The refactoring not only allowed for project-less mode but also basic multi-monitor support. You can now open a new workspace window and place it on another monitor. This can be helpful for headers, documentation, or other references.

The project tree has support for unit tests and build targets in addition to files.

Build Preferences has been rebuilt to allow plugins to extend the view. That means we’ll be able to add features like toggle buttons for meson_options.txt or toggling various clang/gcc sanitizers from the Meson plugin.

The debugger has gone through a number of improvements for resilience with modern gdb.

When Builder is full-screen, the header bar slides in more reliably now thanks to a fix I merged in gtk-3-24.

As previewed earlier in the cycle, we have rudimentary glade integration.

Also displayed here, you can select a Build Target from the project tree and run it using a registered IdeRunHandler.

Files with diagnostics registered can have that information displayed in the project tree.

The document preferences have been simplified and extracted from the side panel.

The terminal now can highlight filename:line:column patterns and allow you to ctrl+click to open just like URLs.


In a future post, we’ll cover some of what went into the refactoring. I’d like to discuss how the source tree is organized into a series of static libraries and how internal plugins are used to bridge subsystems to avoid layering violations. We also have a number of simplified interfaces for plugin authors and are beginning to have a story around ABI promises to allow for external plugins.

If you just can’t wait, you can play around with it now (and report bugs).

flatpak install https://gitlab.gnome.org/GNOME/gnome-apps-nightly/raw/master/gnome-builder.flatpakref

Until next time, Happy Hacking!

January 15, 2019

GDA and GObject Introspection: Remember 1

This is just to help remember:

If you have used the G_DECLARE_* macros in a C class header in order to modernize your old C code to recent GLib class definition practices; you are using GObject Introspection to parse it and generate GIR files; and you get this error:

"Fatal: Gda: Namespace conflict for 'your_type_get_type'"

Then remove your “*_get_type()” function declaration from your header file.
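
In other words, with hypothetical names, the header should end up looking like this; the commented-out declaration is the redundant one, since G_DECLARE_FINAL_TYPE already declares it:

#include <glib-object.h>

G_BEGIN_DECLS

#define MY_TYPE_OBJECT (my_object_get_type ())
G_DECLARE_FINAL_TYPE (MyObject, my_object, MY, OBJECT, GObject)

/* Remove this: G_DECLARE_FINAL_TYPE already declares my_object_get_type(),
 * and the duplicate declaration is what makes g-ir-scanner report the
 * namespace conflict. */
/* GType my_object_get_type (void); */

G_END_DECLS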

The above was the result of modernizing GDA to recent practices, and could be helpful for anyone else.

January 14, 2019

This is in English

I just posted a blog item in Spanish, by way of showing support for Clarissa, my intern on the Outreachy program. But I'm sure my Spanish is pretty rusty these days, so I wanted to share what I was trying to communicate, by writing it again in English.
I usually speak and write in English, but for this article, I am writing it in Spanish.

A few weeks ago, I read a blog article by my intern Clarissa in Outreachy. In her article, Clarissa wrote about her challenges in the program. One challenge for her was writing in English. Her native language is Portuguese.

Clarissa's writing is so good that I often forget that English is not her native language. So I wanted to share that experience by writing a blog article of my own in another language. I can speak some Spanish; I learned it in high school and lived for part of a summer in Mexico (about a month, part of a study-abroad language-immersion program for advanced students). But I don't write in Spanish very well.

Writing in Spanish is very difficult for me. I know many of the words, but some I need to translate via Google Translate. I'm sure my writing seems stilted and awkward. That's because I'm writing in a language that I don't usually write in.

I support Clarissa in her internship. Writing in another language is really difficult. I can do it, but it takes about twice the time for me to write what I want to, because I'm not very strong in the Spanish language. From this, I can really appreciate the challenges that Clarissa goes through in writing for Outreachy.

This is in Spanish

Yo hablo y escribo inglés. Pero para este artículo, yo escribo en español.

Hace unas semanas, leí un artículo del blog de mi pasante Clarissa en Outreachy. En su artículo, Clarissa escribió sobre sus desafíos en el programa. Un desafío fue escribir en inglés. Su lengua natural es el portugués.

La escritura de Clarissa es buena porque a olvido que el inglés no es su lengua natural. Así que quería compartir la experiencia escribiendo un artículo de mi blog en otro lengua. Puedo hablar algo de español; lo aprendí en la escuela secundaria y durante viví un verano en México. Pero no escribo muy bien el español.

Escribir en español es muy difícil. Conozco muchas de las palabras, pero algunas tengo que traducirlas en Google Translate. Estoy seguro de que mi escritura parece incómoda. Eso es porque estoy escribiendo en una lengua que no suelo escribir.

Apoyo a Clarissa en su pasantía. Escribir en otro idioma es muy difícil. Puedo hacerlo, pero me lleva mucho tiempo escribir lo que quiero porque no domino el idioma español. Aprecio más los desafíos que Clarissa tiene por escrito para Outreachy.

Theme changes in GTK 3

Adwaita has been the default GTK+ theme for quite a while now (on all platforms). It has served us well, but Adwaita hasn’t seen major updates in some time, and there is a desire to give it a refresh.

Updating Adwaita is a challenge, since most GTK applications are using the stable 3.x series, and some of them include Adwaita-compatible theming for their own custom widgets. Given the stable nature of this release series, we don’t want to cause theme compatibility issues for applications. At the same time, 3.x is the main GTK version in use today, and we want to ensure that GTK applications don’t feel stale or old fashioned.

A trial

A number of approaches to this problem have been considered and discussed. Out of these, a tentative plan has been put forward to trial a limited set of theme changes, with the possibility of including them in a future GTK 3 release.

Our hope is that, due to the limited nature of the theme changes, they shouldn’t cause issues for applications. However, we don’t want to put our faith in hope alone. Therefore, the next three weeks are being designated as a testing and consultation period, and if things go well, we hope to merge the theme into the GTK 3.24.4 release.

It should be emphasised that these changes are confined to Adwaita itself. GTK’s CSS selectors and classes have not been changed since GTK 3.22, and the changes in Adwaita won’t impact other GTK themes.

The updated Adwaita theme is being made available as a separate tarball in parallel with the GTK 3.24.3 release, and can be downloaded here. GTK application developers are invited to try 3.24.3 along with the new version of Adwaita, and report any issues that they encounter. The GTK team and Adwaita authors will also be conducting their own tests. Details of how to test the new theme in various ways are described here.

We are hoping to strike a balance between GTK’s stability promises on the one hand, and the desire to provide up-to-date applications on the other. It is a delicate balance to get right and we are keen to engage with GTK users as part of this process!

Theme changes

The rest of this post summarises the changes that have been made to the theme. This will hopefully demonstrate the limited extent of these changes. It will also help developers know what to look for when testing.

Colors

Many of the Adwaita colors have been very slightly tweaked. The new colors are more vivid than the previous versions, and so give Adwaita more energy and vibrancy. The new colors also form part of a more extensive palette, which is being used for application icons. These colours can also be used in custom application styling.

The color changes are subtle, so any compatibility issues between the new and the old versions should not be serious. Blue is still blue (just a slightly different shade!) Red is still red. Visually, the dark and light versions of the theme remain largely the same.

Adwaita’s dark variant, showing the slight color changes between old (left) and new (right).

Note that the red of the button has been toned down a bit in the dark theme.

Header bars and buttons

Most widgets have not been specifically changed in the updated version of Adwaita. However, two places where there are widget-specific changes are header bars and buttons. In both cases, an effort has been made to be lighter and more elegant.

Buttons have had their solid borders replaced with shadows. Their background is also flatter and their corners are more rounded. Their shape has also been changed very slightly.

Header bars have been updated to complement the button changes. This has primarily been done by darkening their background, in order to give buttons sufficient contrast. The contrast between header bars’ focused and unfocused states has also been increased. This makes it easier for users to identify the focused window.

At first glance, these changes are some of the most significant, but they are achieved with some quite minor code changes.

The header bar in GNOME’s Calendar app (old version on top, new version on the bottom):

Switches

Aside from header bars and buttons, the only other widget to be changed is switches. When GTK first introduced switches, they were a fairly new concept on the desktop. For this reason, they included explicit “ON” and “OFF” labels, in order to communicate how the switches operated. Since then, switch widgets have become ubiquitous, and users have become familiar with switches that don’t contain labels.

The latest Adwaita changes bring the theme into line with other platforms and make switches more compact and modern in appearance, by removing the labels and introducing a more rounded shape.

Elsewhere, no change

Aside from the changes described above, very little has changed in Adwaita. The vast majority of widgets remain the same, albeit with very slightly altered colours. Generally, UI layouts shouldn’t alter and users should feel comfortable with the changes.

Spot the difference (the old version of Adwaita is on the left and the new version is on the right):

Conclusion

Please try the new theme. We hope you like it!

And we appreciate your feedback—in particular if you are a GTK application developer. You can provide it on IRC (in the #gtk+ channel on GimpNet), via the gtk-devel-list mailing list, or by filing an issue in GitLab.

GNOME Security Internship - Update 3

Here you can find the introduction, update 1 and update 2.

Notification if you replug your main keyboard

As of now we allow a single keyboard even if the protection is active, because we don't want to lock out the users. But Saltarelli left a comment pointing out that an attacker would have been able to plug a hardware keylogger between the keyboard and the PC without the user noticing.

To prevent this, we now display a notification if the main keyboard gets unplugged and plugged back in.

USB new keyboard notification

Smart authorization with touchscreen

If your device has a touchscreen and your physical keyboard breaks, you should still be able to use the device, because you'll have a working on-screen keyboard. Because of this I added an exception to the automatic authorization of keyboards when a touchscreen is also available. Thanks to Marcus for this hint.

GNOME Shell protection status

Now GNOME Shell shows an icon in the status bar only if the protection is really active. In order to test this reliably, we check both the GSettings protection value and whether the USBGuard D-Bus service is really up and running.

As shown in the screenshot below, we are currently using a generic “external device” icon. Before the final release we should change it, hopefully with some external help :)

USB protection status icon

GNOME Control Center

GNOME Control Center previously showed just an “on”/”off” label next to the “forbid new USB devices” entry. Now a more informative label has been added, so we can directly show the protection level in use.

On top of that, we also check for the USBGuard D-Bus availability. If it is not available, we show the current protection level as “off” and we prevent users from interacting with the USB protection dialog.

g-c-c USB current status

Limit keyboards capabilities

As I said briefly in the last update, the goal here is to prevent new keyboards from using dangerous keys (e.g. Ctrl, Alt, etc.).

At the beginning I tried to experiment with scancodes. One possible approach was to append to the hwdb an always-limit rule like this:

evdev:name:*:dmi:bvn*:bvr*:bd*:svn*:pn*    # Matches every keyboard
    KEYBOARD_KEY_700e0=blocked             # Block left ctrl
    [...]

Then, when we wanted to grant full access to a single keyboard, we appended a device-specific rule (matching its vendor, product and version ID and input-modalias) mapping back every previously blocked key.

While at first this seemed a feasible option, after a few emails and a call with Peter Hutterer and Benjamin Tissoires we decided to discard it in favour of grabbing the device with EVIOCGRAB/libevdev_grab().

For example, one problem with scancode mapping using the hwdb was that the matches were not predictable when multiple rules applied to a single device (first the always-block rule and then the permission grant).

EVIOCGRAB and Mutter

The route we are taking is to implement the key filtering directly in mutter (my early work on it). As soon as we have a new device, we take a grab on it with EVIOCGRAB; this way we will be the only ones who can see the device's key events. Then, when a key gets pressed, we check if it is a dangerous key. If it is, we drop it.
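
For reference, the grab itself boils down to a single ioctl on the evdev node; here is a minimal sketch (the device path is just an example, real code would discover devices properly):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int
main (void)
{
    /* Example device node; real code would find it via udev/libinput. */
    int fd = open ("/dev/input/event3", O_RDONLY);

    if (fd < 0) {
        perror ("open");
        return 1;
    }

    /* Take the exclusive grab: only this file descriptor now receives
     * the device's events, so we can filter them before anyone else. */
    if (ioctl (fd, EVIOCGRAB, 1) < 0) {
        perror ("EVIOCGRAB");
        close (fd);
        return 1;
    }

    /* ... read struct input_event records and drop dangerous keys ... */

    ioctl (fd, EVIOCGRAB, 0); /* Release the grab. */
    close (fd);
    return 0;
}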

With this approach, for example, we can still get notified when a dangerous key is pressed (as opposed to the scancodes approach). In this way we can display a notification saying that the device is limited, so the user knows that nothing happened because a key has been blocked, and not because the keyboard is broken.

How to handle this in the UI is still under discussion and evaluation. Maybe we can start with showing a list of connected keyboards in GNOME Control Center. Next to every entry in this list we could put a colored icon to show the current device status, that is, either limited or full access. Then users would be able to toggle this icon, changing that specific device's permission.

What to expect next and comments

In the next days I’ll try to refine the “limit keyboards” functionality and I’ll try to come up with a way to expose this feature to the end users. Also related to the smart authorization, we should also check for other input devices like keyboards on the legacy PS/2 port.

Feedback is welcome!

January 13, 2019

GNOME Outreachy mentorship

The Outreachy program is a three-month internship to work in FOSS. There are two periods for Outreachy, the first one from December to March and the other one from May to August. It's similar to the Google Summer of Code, but in this case the interns don't need to be students.

I proposed some ideas for interns to work on GNOME, with me as a mentor. I wrote three proposals this time:

  • Extend Fractal media viewer with video support and explore video conference
  • Create Gtranslator initial integration with Damned Lies
  • Books : Improve the epub support in gnome-books

There were people interested in all three projects, but for the Books app we didn't get any real contribution, so there were no real applicants.

I had two good proposals, for Fractal and for Gtranslator, so I approved both, and the Outreachy people approved these two interns. So we get two new devs working on GNOME for three months as interns.

This is something great: paid developers working on my proposals is a good thing. But it implies that I need to do the mentoring work for these two interns during the three-month period, so it's more work for me :/

But I think this is really important work to bring more people to free software. I have less time for hacking, but I think it's good, because the fresh blood can do the hacking, and if one of the interns continues collaborating with GNOME after Outreachy, that will be more important for the GNOME project than some new features in one app.

GNOME Translation Editor

Teja is the intern working on gtranslator. She is working on the Damned Lies integration.

Damned Lies is a web application for GNOME translators. This app provides an updated .po file for each GNOME module and language, and translators can download and update that file using the web interface. The web app is able to commit the uploaded version from translators to the original repository.

The idea is to provide a simple integration with this platform in the GNOME Translation Editor app, using the web JSON API to open .po files from the web directly, without the need to download the file and then open it.

The current API of DL is really simple, so we can't implement a real integration without adding more functionality to this API. This project therefore requires some work on the DL app too.

In the future we can improve the integration by adding the possibility to upload the new .po file to DL after saving, so translators don't need to go to the web interface and can do the whole translation flow using only the Translation Editor.

Fractal

Maira is the intern working on Fractal. They are working on the initial video preview widget.

Fractal is an instant messaging app that works over matrix.org. Currently we support different types of messages, like text, images, audio and files. But for video we're using the same widget as we're using for files, so you can download or open the file, but we have no preview or inline player.

The main idea of this project is to provide a simple video player using GStreamer to play videos inside the Fractal app.

This is not an easy task, because we're using Rust in Fractal and we need to deal with bindings and language stuff, but I think it's doable.

During the internship, Maira is also working on fixing some bugs in the audio player, because it uses GStreamer too; during code review, Maira detected some problems and they are fixing them.

January 11, 2019

My 2018 in Review: Making headlines year

The internationalization of my career was a highlight of 2018, starting in the U.S.A. with an internship at ORNL in February 2018 and then in the U.K. with a Master's in HPC (High-Performance Computing) beginning in September 2018. Some online articles can be found chronologically covering my active participation and efforts during the year 2018.

1. Organizing an HPC event

Usually, July is the month when many interns go to ORNL to have real-world experiences in their fields. Some of them are students from schools, colleges, and universities, and I had enough time to interact with most of them and to realize that some of them needed more preparation in topics related to HPC, such as Linux commands, HPC concepts, programming, and deep learning. This is the web page configured to do it:

After the successful event, a note was published in the Sci Ed Programs Summer 2018 Newsletter, Issue 11. I was so happy to help train students from many countries.

2. ORNL Photo Story Contest

A photography contest was announced for posting photographs related to the students' internships, and my picture was published in the Sci Ed Programs Summer 2018 Newsletter. This magazine also pointed out local events in Knoxville and useful ORNL tips.

Sci Ed Programs Summer 2018 Newsletter, Issue 13 announced the winner of the contest, and surprisingly my group got second place with a picture we took in the Smoky Mountains.

https://wakelet.com/wake/0970235f-b3ed-40bf-9fb2-9bd3f2e4960d

3. ORNL 75 years 

The official video was publicly released, and at 0:33 it captures the work I did as a volunteer on the core day of the 75th anniversary of the creation of Oak Ridge National Laboratory in TN.

https://www.youtube.com/watch?v=OEOHk4wQeTkfg

4. OLCF 

Five of the 26 OLCF students were selected to share their experiences and accomplishments during the internship period. I am so happy with the recognition received for my work.

https://www.olcf.ornl.gov/2018/08/07/summer-interns-gain-hpc-skills-professional-development-at-the-olcf/

5. EPCC

I decided to prepare myself further in HPC (High-Performance Computing) and applied to the School of Informatics in Edinburgh, where they have a Master's in High Performance Computing. I traveled to Scotland to learn from the world-class staff of EPCC, who have more than 20 years of experience in teaching parallel programming worldwide.

https://www.epcc.ed.ac.uk/sites/default/files/EPCC%20News%2084.pdf

6. ORISE

In December my profile was published after the internship done at ORNL, where they highlighted the passion at work I demonstrated while interacting with leadership-class computing systems for open science in the CADES area on an OpenShift project.

https://orise.orau.gov/ornl/experiences/recent-graduates/inca.html

Finally, my participation was printed to inspire other students to be involved in STEM:

Lucid

The start of a new year often brings change. Our family has increased in size, which is very exciting. I'm also moving on from Endless and have a new job managing product at Lucid. I'm sad to be leaving my friends at Endless after a couple of delightful and very satisfying years, but I'm also very pleased to be working with Jonty and Jono again. I still remain as emotionally invested in the GNOME and Flatpak communities as ever - I just won't be paid to contribute, which is no bad thing for an open source project.

Fractal Hackfest in Seville

Last month I was in Seville for the second Fractal Hackfest. It was a bit of a different experience for me than the other hackfests I have been to, as I'm a core member of the project now, and I also knew most of the other people already.

My main focus for this hackfest was to push forward the work on the Fractal backend. The backend should handle persistent storage and the preparation of data needed in the UI. Right now many things which should conceptually be in the backend are in the frontend, and can therefore cause UI freezes. Over the past months I have been working hard on refactoring the code so we can just drop in the backend without making many changes to the UI code. So the core of the refactoring I did was to make sure that data flows in one direction and that we don't keep different copies of the data around.

Backend

For storage we decided to use SQLite instead of LMDB because we have many relations between the data. The data will be structured into 3 tables: USERS, ROOMS, MESSAGES. This gives us a lot of space to expand and allows us to reference other data with ease.

The backend will use a GLib-based API to communicate with the frontend, and on the other hand it will have a Rust-only interface to communicate with fractal-matrix-api.

Community

Lunch at a Spanish pizza place

We also had really awesome community bonding events, and some local newcomers joined the hackfest. Thanks to Daniel for organizing the event, and also to all the locals who helped him.

January 10, 2019

Librsvg is almost rustified now

As of a few days ago, librsvg's library implementation is almost 100% Rust code. Paolo Borelli's and Carlos Martín Nieto's latest commits made it possible.

What does "almost 100% Rust code" mean here?

  • The C code no longer has struct fields that refer to the library's real work. The only field in RsvgHandlePrivate is an opaque pointer to a Rust-side structure. All the rest of the library's data lives in Rust structs.

  • The public API is implemented in C, but it is just stubs that immediately call into Rust functions. For example:

gboolean
rsvg_handle_render_cairo_sub (RsvgHandle * handle, cairo_t * cr, const char *id)
{
    g_return_val_if_fail (RSVG_IS_HANDLE (handle), FALSE);
    g_return_val_if_fail (cr != NULL, FALSE);

    return rsvg_handle_rust_render_cairo_sub (handle, cr, id);
}
  • The GObject boilerplate and supporting code is still in C: rsvg_handle_class_init and set_property and friends.

  • All the high-level tests are still done in C.

  • The gdk-pixbuf loader for SVG files is done in C.

Someone posted a chart on Reddit about the rustification of librsvg, comparing lines of code in each language vs. time.

Rustifying the remaining C code

There is only a handful of very small functions from the public API still implemented in C, and I am converting them one by one to Rust. These are just helper functions built on top of other public API that does the real work.

Converting the gdk-pixbuf loader to Rust seems to be a matter of writing a little glue code for the loadable module; the actual loading is just a couple of calls to librsvg's API.

Rsvg-rs in rsvg?

Converting the tests to Rust... ideally this would use the rsvg-rs bindings; that is what I already use for rsvg-bench, a benchmarking program for librsvg.

I have an unfinished branch to merge the rsvg-rs repository into librsvg's own repository. This is because...

  1. Librsvg builds its library, librsvg.so
  2. Gobject-introspection runs on librsvg.so and the source code, and produces librsvg.gir
  3. Rsvg-rs's build system calls gir on librsvg.gir to generate the Rust binding's code.

As you can imagine, doing all of this with Autotools is... rather convoluted. It gives me a lot of anxiety to think that there is also an unfinished branch to port the build system to Meson, where probably doing the .so→.gir→rs chain would be easier, but who knows. Help in this area is much appreciated!

An alternative?

Rustified tests could, of course, call the C API of librsvg by hand, in unsafe code. This may not be idiomatic, but sounds like it could be done relatively quickly.

Future work

There are two options to get rid of all the C code in the library, and just leave C header files for public consumption:

  1. Do the GObject implementation in Rust, using Sebastian Dröge's work from GStreamer to do this easily.

  2. Work on making gnome-class powerful enough to implement the librsvg API directly, and in an ABI-compatible fashion to what there is right now.

The second case will probably build upon the first one, since one of my plans for gnome-class is to make it generate code that uses Sebastian's, instead of generating all the GObject boilerplate by hand.

Issue handling automation for GNOME

I remember discussing some time ago with someone from GNOME how important it is to write a good issue report; the conclusion came along the lines of “I have 10 seconds to read per issue, if the issue is not well done it's most likely I won't have time to read it”. It's true most of us are focused on writing actual code; after all, it's what most of us enjoy and/or what we are paid for. So bug handling always takes a back seat in our priorities.

On the other hand, the management of issues is necessary for a healthy project, especially if you are using the tracker for planning, prioritization, gathering feedback, and interaction with other developers or teams. In general, a set of open issues that is representative, up to date, and properly reported helps the project progress and build a community around it.

bot

Handling issues automatically

There are a set of tasks that are quite repetitive, such as closing issues that were left with the “need information” label. Or old feature requests. Or crashes that were reported years ago with an old version of the software…

This is something I really wanted to get done, so I've been working in the past weeks on a setup where repetitive tasks are automated by a GitLab CI “bot”, and this is what I'm gonna share with you today!

The tool that performs the tasks is used by GitLab CE itself for their own issue triage automation, and it's called gitlab-triage. This tool takes a set of rules and processes them with a CI job, which you can then set to run periodically in schedules.

So let’s take a look how to make a simple set up.

Basic set up

First, you need to set up a CI job that will run the tool. For that, in your CI file add:

triage:
  image: ruby:2.4
  stage: triage
  script:
    - gem install gitlab-triage
    - gitlab-triage --token $TRIAGE_BOT_TOKEN --project-id $CI_PROJECT_PATH --host-url https://gitlab.gnome.org
  only:
    - schedules

And add a “triage” stage to the file's stages. This will use a Ruby Docker image, install the gitlab-triage program and then run it on schedules. Note the variable $TRIAGE_BOT_TOKEN; this is the API token of the account you want to use to perform the actions. You can use your account, a new account, or the bot I created for GNOME. Feel free to ask me for the token for your use. Also, make sure the project has a schedule that the triage can run on.

Now, what should it run, though? That's where the policies file enters the game.

The policies

gitlab-triage will process a file called .triage-policies.yml at the root of your project. This file defines the rules and what the bot should do. You can make conditions based on labels, dates, etc. Feel free to take a look at the gitlab-triage documentation; it's quite extensive and helpful.

For now, let's write our first rule. And we are going with the least controversial one: closing issues that were marked as needing information.

Close issues that need information and weren’t updated

When a bug needs information to be properly triaged, we mark the issue with the “need information” label. After some time, if nobody provides the information, the bug should be closed. At GNOME Bugzilla we were closing these bugs after 6 weeks, with a stock answer.

How can we write this so it’s done by the gitlab-triage bot? We just create a new rule in the policy file, in this way:

resource_rules:
  issues:
    rules:
      - name: Close issues that need information and weren't updated
        conditions:
          date:
            attribute: updated_at
            condition: older_than
            interval_type: weeks
            interval: 6
          state: opened
          labels:
            - 2. Needs Information
        actions:
          status: close
          labels:
            - 15. Auto Updated
          comment: |
            Closing this issue as no further information or feedback has been provided.

            Please feel free to reopen this issue if you can provide the information or feedback.

            Thanks for your help!

            ---

            This is an automatic message. If you have suggestions to improve this automatic action feel free to add a comment on https://gitlab.gnome.org/GNOME/nautilus/issues/715

Quite simple. We set up the conditions that an issue must match in order to be processed under “conditions”, then we set the actions we want to apply to those issues in the “actions” section.

In this case, if an issue was last updated more than 6 weeks ago, is in the opened state, and has the label “2. Needs Information”, the bot will close the issue, set the label “15. Auto Updated” and comment the stock response. Note that we set a label to record that the issue was auto-updated by a bot, so in case something goes wrong we can query those issues and perform other actions on them.

You can see an example result here; it looks like this:

bot result

And here is an example of a query for Nautilus auto-updated issues, which looks like this:

auto updated result

Nice, right?

Close old feature proposals

Feature proposals are probably the issues with the highest ratio of being ignored. With the resources we usually have, it's unlikely we can add more maintainership “cost” to our projects, or that we didn't already plan carefully what features to add.

With this at hand, we want a rule that closes old feature proposals that haven't been marked as part of the project planning. The rule looks like this:

- name: Close old feature proposals without planning labels or milestones
  conditions:
    date:
      attribute: created_at
      condition: older_than
      interval_type: months
      interval: 12
    labels:
      - 1. Feature
    forbidden_labels:
      - 2. Deliverable
      - 2. Stretch
      - 1. Epic
    milestone:
      - No Milestone
    state: opened
    upvotes:
      attribute: upvotes
      condition: less_than
      threshold: 10
  actions:
    labels:
      - 15. Auto Updated
    status: close
    comment: |
      Hi,

      First of all, thank you for raising an issue to help improving Nautilus. In order to maintain order in the issue tracker we are closing old, unscheduled feature proposals.

      Unfortunately, no Merge Request has been provided for this, and/or the project contributors are not planning this feature in the foreseeable future.

      This issue will be closed as it meets the following criteria:
      * Created more than 12 months ago
      * Labeled as ~"1. Feature"
      * Not associated with a milestone or with ~"2. Deliverable" or ~"2. Stretch" project planning labels.
      
      Thanks for your help!

      ---

      This is an automatic message. If you have suggestions to improve this automatic action feel free to add a comment on https://gitlab.gnome.org/GNOME/nautilus/issues/715

It's similar to the previous rule; we just added a condition to not process issues that have the project planning labels “2. Deliverable”, “2. Stretch” or “1. Epic”. The project planning labels come from the Project Planning for GNOME post.

Note the voting threshold condition of 10 votes; this is just an internal way to make sure we don't automatically close a highly voted feature proposal, where we would rather make a manual comment to avoid miscommunication.

You can see an example result here; it looks like this:

feature proposal result

Bring attention to untriaged issues

This is all nice, but what about issues that weren't even triaged in the first place? For those, we can make the bot create a summary for us. This helps greatly to bring to our attention, on a regular basis, those issues that need to be taken care of.

We use the “summarize” action; the rule looks like this:

- name: Mark stale unlabelled issues for triage
  conditions:
    date:
      attribute: created_at
      condition: older_than
      interval_type: months
      interval: 2
    # We want to handle those that doesn't have these labels, including those with other labels.
    forbidden_labels:
      - 1. Bug
      - 1. Crash
      - 1. Epic
      - 1. Feature
    state: opened
  actions:
    labels:
      - 15. Untriaged
    summarize:
      title: Issues that need triaging
      item: |
        - {{web_url}} - {{title}} - {{labels}}
      summary: |
        The following issues were created two months ago and they are unlabeled:

        {{items}}

        /cc @Teams/BugSquad

This will create an issue with a list of issues that are lacking one of the labels listed in “forbidden_labels”. Note that we could have simply used the “No Label” value as a condition in a “labels” section; however, we wanted to bring to our attention those issues that have other labels but that we didn't mark as either a bug or a feature, because we rely on these labels for the previous rules.

Also note that the created issue pings a group called “Teams/BugSquad”, which doesn't exist yet. If this sounds useful for us, I would like to create this group so bug triagers can be part of it and get pinged to handle these issues on a regular basis.

You can see an example result here, it looks like:

summary result

Close old issues

This is a controversial one. Who hasn't received a “Fedora has reached EOL” mass mail? :) I will leave the explanation of why I think we should try this for another post, since here I want to focus on providing just the tooling and snippets for those who want to try it out.

For closing old bugs we just look at the updated date and close those that haven't been touched in 18 months.

- name: Close stale issues with no milestone or planning labels
  conditions:
    date:
      attribute: updated_at
      condition: older_than
      interval_type: months
      interval: 18
    milestone:
      - No Milestone
    forbidden_labels:
      - 2. Deliverable
      - 2. Stretch
      - 1. Epic
      # Features are handled in a different rule
      - 1. Feature
    state: opened
  actions:
    status: close
    labels:
      - 15. Auto Updated
    comment: |
      Hi,

      Thank you for raising an issue to help improve Nautilus. We're sorry this particular issue has gone unnoticed for quite some time.

      This issue will be closed, as it meets the following criteria:
      * No activity in the past 18 months (3 releases).
      * Unscheduled. Not associated with a milestone or with ~"2. Deliverable" or ~"2. Stretch" project planning labels.

      We'd like to ask you to help us keep our issue tracker organized  by determining whether this issue should be reopened.

      If this issue is reporting a bug, let us know if this issue is still present in a newer version and if you can reproduce it in the [nightly version](https://wiki.gnome.org/Apps/Nightly).

      Thanks for your help!

      ---

      This is an automatic message. If you have suggestions to improve this automatic action feel free to add a comment on https://gitlab.gnome.org/GNOME/nautilus/issues/715

Let me know how it works for you

So far it has worked quite well for Nautilus: in our first run it helped us triage around 50 issues, and the bot closed 15 issues that were either waiting too long for information or were old feature requests.

These rules can be tweaked, and in general they are an approach I think we should test. There are a few things that are a bit bold, especially closing issues automatically, so any feedback is appreciated.

I'm looking forward to hearing how it works for you, and to seeing whether there are other modifications you make or some advice you would propose.

Enjoy!

January 09, 2019

Phoenix joins the LVFS

Just like AMI, Phoenix is a huge firmware vendor, providing the firmware for millions of machines. If you're using a ThinkPad right now, you're most probably running Phoenix code in your mainboard firmware. Phoenix have been working with Lenovo and their ODMs on LVFS support for a while, fixing all the niggles that were stopping the capsule from working with the loader used by Linux. Phoenix can help customers build deliverables for the LVFS that use UX capsule support to make flashing beautiful, although it's up to the OEM whether that's used or not.

It might seem slightly odd for me to be working with the firmware suppliers, rather than just OEMs, but I’m actually just doing both in parallel. From my point of view, both of the biggest firmware suppliers now understand the LVFS, and provide standards-compliant capsules by default. This should hopefully mean smaller Linux-specific OEMs like Tuxedo and Star Labs might be able to get signed UEFI capsules, rather than just getting a ROM file and an unsigned loader.

We’re still waiting for the last remaining huge OEM, but fingers crossed that should be any day now.

January 08, 2019

Epiphany automation mode

Last week I finally found some time to add the automation mode to Epiphany, which allows running automated tests using WebDriver. It's important to note that the automation mode is not expected to be used by users or applications to control the browser remotely, but only by WebDriver automated tests. For that reason, the automation mode is incompatible with a primary user profile. There are a few other things affected by the automation mode:

  • There's no persistence. A private profile is created in a temporary directory and only ephemeral web contexts are used.
  • URL entry is not editable, since users are not expected to interact with the browser.
  • An info bar is shown to notify the user that the browser is being controlled by automation.
  • The window decoration is orange to make it even clearer that the browser is running in automation mode.

So, how can I write tests to be run in Epiphany? First, you need to install a recent enough Selenium. For now, only the Python API is supported. Selenium doesn't have an Epiphany driver, but the WebKitGTK driver can be used with any WebKitGTK+-based browser, by providing the browser information as part of the session capabilities.

from selenium import webdriver

options = webdriver.WebKitGTKOptions()
options.binary_location = 'epiphany'
options.add_argument('--automation-mode')
options.set_capability('browserName', 'Epiphany')
options.set_capability('version', '3.31.4')

ephy = webdriver.WebKitGTK(options=options, desired_capabilities={})
ephy.get('http://www.webkitgtk.org')
ephy.quit()

This is a very simple example that just opens Epiphany in automation mode, loads http://www.webkitgtk.org and closes Epiphany. A few comments about the example:

  • Version 3.31.4 will be the first one including the automation mode.
  • The parameter desired_capabilities shouldn’t be needed, but there’s a bug in selenium that has been fixed very recently.
  • WebKitGTKOptions.set_capability was added in Selenium 3.14; if you have an older version you can use the following snippet instead:
from selenium import webdriver

options = webdriver.WebKitGTKOptions()
options.binary_location = 'epiphany'
options.add_argument('--automation-mode')
capabilities = options.to_capabilities()
capabilities['browserName'] = 'Epiphany'
capabilities['version'] = '3.31.4'

ephy = webdriver.WebKitGTK(desired_capabilities=capabilities)
ephy.get('http://www.webkitgtk.org')
ephy.quit()

To simplify the driver instantiation you can create your own Epiphany driver derived from the WebKitGTK one:

from selenium import webdriver

class Epiphany(webdriver.WebKitGTK):
    def __init__(self):
        options = webdriver.WebKitGTKOptions()
        options.binary_location = 'epiphany'
        options.add_argument('--automation-mode')
        options.set_capability('browserName', 'Epiphany')
        options.set_capability('version', '3.31.4')

        webdriver.WebKitGTK.__init__(self, options=options, desired_capabilities={})

ephy = Epiphany()
ephy.get('http://www.webkitgtk.org')
ephy.quit()

The same works for Selenium < 3.14:

from selenium import webdriver

class Epiphany(webdriver.WebKitGTK):
    def __init__(self):
        options = webdriver.WebKitGTKOptions()
        options.binary_location = 'epiphany'
        options.add_argument('--automation-mode')
        capabilities = options.to_capabilities()
        capabilities['browserName'] = 'Epiphany'
        capabilities['version'] = '3.31.4'

        webdriver.WebKitGTK.__init__(self, desired_capabilities=capabilities)

ephy = Epiphany()
ephy.get('http://www.webkitgtk.org')
ephy.quit()

Marketing in Vendor Neutral FLOSS Projects #4

This continues, and concludes, a series of items on Vendor Neutral FLOSS projects and how they do marketing, which you can read here.

TDF / LibreOffice Branding

If we want to grow our community, and to drive this with marketing – we need to position our brands to make this easy and ideally unconscious. Currently we have two brands, taking the descriptions from their websites:

  • LibreOffice is Free and Open Source Software. Development is open to new talent and new ideas, and our software is tested and used daily by a large and devoted user community (link).
    • ie. it is pretty clear: 'LibreOffice' is software.
  • The Document Foundation - It is an independent self-governing meritocratic entity, created by a large group of Free Software advocates, in the form of a charitable Foundation under German law (gemeinnützige rechtsfähige Stiftung des bürgerlichen Rechts). (link).
    • ie. it is clear this is a Stiftung – and by association / default comes to also mean the handful of employees who comprise the paid team there with some oversight from the board.

Unfortunately – it seems we have two brands, and neither of these means “the community”, or “the people who do most of the hard work”. These are the people we need to be encouraging, recruiting, building up, and talking about. The degree to which TDF's paid staff represent ‘the community’ is unclear. The board is elected to represent the community, and oversees TDF, but is also not itself the community (it is the board). When TDF says “our software”, how can we ensure that everyone feels included in that ‘our’?

It seems clear that we need to solve this dis-connection with some formulation, strap-line, brand or form of words that we use to highlight and emphasize our contributor’s input – and use this repeatedly.

LibreOffice vs. Commercial branding

Branding is really important, as we have seen: shipping identical software, at the same price, in the Mac app store with just the LibreOffice vs. Collabora Office brand changed shows that the LibreOffice brand is simply far better known and sought after, gathering the overwhelming majority of interest. This however brings a problem: if development work is funded by leads generated from brands, then TDF promoting e.g. LibreOffice Online under its own brand can easily and radically impair leads, investment and thus code-flows into LibreOffice Online, without any offsetting advantage. The picture below compares two branding approaches for the 95%+ of commits that Collabora has put into LibreOffice Online. The 0.05% is the proportion of visitors to LibreOffice who discover that they should fund development by buying professional services (from anyone), as we shall see below.

Which way to brand something such that re-investment and growth is possible?

TDF marketing in practice

How does LibreOffice get marketed from this perspective? How do companies get leads from TDF so that they can sell their products, support and services to some fraction of them, thus allowing re-investment back into LibreOffice? Answer: very poorly. Recently we've done a better job of telling people about LibreOffice; a recent release announcement says:

"LibreOffice 6.1’s new features have been developed by a large community of code contributors: 72% of commits are from developers employed by companies sitting in the Advisory Board like Collabora, Red Hat and CIB and by other contributors such as SIL and Pardus, and 28% are from individual volunteers."

and also encourages people to use an LTS version – which is not itself provided by TDF – which is a major improvement:

"For any enterprise class deployment, TDF maintains the more mature LibreOffice 6.0, which should be sourced from a company providing a Long Term Supported version of the suite (they are all members of TDF Advisory Board, and are listed here: http://www.documentfoundation.org/governance/advisory-board/)."

However the website still has a large number of issues in this area; investing nominal fees into Advisory Board membership is a marginal contribution vs. the substantial investments into the software & community. A better approach is the single page that educates users about the availability of professional services – the get-help/professional-support/ page – which highlights certified, competent migrators, trainers and developers. So how does this hero list of contributors to LibreOffice's success fare when people visit our site? Let's see by checking out the metrics on page visits:

Relative pageviews, bounce rates, exit rates for libreoffice

It is interesting to see that those interested in professional support – only around 4,700 this year (one third of 13,000) – exited either to close the session or to visit a supported version provider. The bounce rate suggests that the majority of people arrive on the professional support page from elsewhere, and not from TDF’s own properties. This matches what vendors see when analyzing what arrives from TDF. Compared with the total visits as of 2018-09-07:

Visit statistics to 2018-09-07 for libreoffice

the number of people exiting to find professional service from that page is 0.05% of our 9.5 million visitors so far.
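
For the record, the arithmetic behind that figure is simple; a quick check with the rounded numbers quoted above:

exits_to_support = 4_700          # roughly one third of ~13,000 pageviews
total_visits = 9_500_000
print(f"{exits_to_support / total_visits:.2%}")   # prints 0.05%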

The contrast between the "Economic and code contribution flows in today's ecosystem" and even the helpful description in the 6.1 release marketing acknowledging 72% of commits, compared with the 0.05% actual click-through rate is extraordinarily stark. It seems clear that our users are not being educated as to the importance of supporting the certified ecosystem - certainly in the area of code, but almost certainly also in other certified areas such as training & migration.

Visualizing 0.05%

This is rather a tricky task – it rapidly turns into something like visualizing the geometry and distances of the solar system. Two charts are shown – a pie chart with the corporate contribution to the code in commits – and a crop to the center of another, showing the flow of people clicking through the page to find professional services which go to improve LibreOffice (in blue), and others in red:

Commits by affiliation
Leads coming to professional support, or zero point zero five percent

It is of course unclear what percentage of our visitors are enterprises and thus should be encouraged to seek professional help; however, 0.05% seems an implausibly low fraction, by perhaps two orders of magnitude.

Marketing – Product expectations

Another side-effect of majoring on LibreOffice as a free – always and forever and for everyone free, no catches – product is that it creates mistaken expectations in our users about how the relationship works. Here is a simple example mail from 2018-04-19:

“Please help me to installLibreOffice at Lubuntu17
 my donation number via PayPal is 1TV07632F2376242R”

Apparently the donor believes there is some connection between his donation and installation support, despite our donate page being quite explicit that this is not so. This is buttressed by rather regular e-mails of the form “I made my donation, but still can't download it” – apparently people love to miss the “LibreOffice is Free Software and is made available free of charge. Your donation, which is purely optional, supports our worldwide community.” text there.

This example is relatively friendly. Some chunk of user interactions are much less friendly – criticizing the product, attacking the project for not fixing their particular issue on their timeline, or for not investing in their particular problem. A peripheral, but amusing, pathology is users from time to time augmenting the urgency of a request by generously offering a $50 donation to TDF to cover the (often) multiple person-weeks of (pet) feature work needed.

By setting more realistic expectations around support and enterprise suitability, and particularly by encouraging people on our main properties to contribute – it is possible to build a consumer, community brand, rather than a pure product brand. This may have a positive impact on reducing the feeling of entitlement that some of our users have.

Similarly, enterprises deploy the wrong software without support, fail to keep it up-to-date, and then believe that we are responsible for helping them. A recent mail to the security list highlights this; names removed to protect the misled:

Subject: Security issues within Libre Office
My Company, XXXXX, uses the latest ( I think ) version of Libre Office.
We also use the Qualys tool for security Compliance. It has found a vulnerability
with Libre Office. Are you familiar with this, And how do I remediate your application?

... signature reads ...

FORTUNE Magazine World's Most Admired Companies® 2014|2015|2016|2017|2018

A kind reply, funded by Red Hat’s formidable security investment:

It might be that its already fixed in the latest stable release, or its
a false positive, or even something that remains to be fixed, but we'd
need more information to judge.

And then we find out:

The version we have is:
C:\Program Files (x86)\LibreOffice 4\program\soffice.exe Version is 4.2.0.4
Have you found similar vulnerabilites? Is there a newer version that we can
download and test against the above reported vulnerabilities.

They use a version that is four years old today – and this from a significant company, saving plenty of money and apparently investing nothing, instead consuming time from those who do. Far from an isolated example; some of them are ruder, with a more explicit sense of entitlement.

Our marketing – setting expectations

The software industry is typically driven by hype, where software startup marketing takes a rather ‘visionary’ approach, something like:

Text: "Look at this awesome product (demo), come and buy it !"
Sub-text: “so once you’ve bought it we can fund actually delivering the product.”

This could be called the Sagrada Familia model; as long as people know this is what they’re buying, it has a certain logic. Arguably TDF’s current marketing leans towards:

Text: “Look at this awesome product, come get it for free !”
Sub-text: “we’ll work out how to get people to contribute to fully
deliver on our promise of awesomeness sometime later”

A more helpful marketing approach would almost certainly be:

Text: “Join our awesome project and contributors to improve our great product”
Sub-text: “community should be fun, and we need to grow it, we’re here to promote you if you contribute.”

The experience of selling a supported LibreOffice

Against this – the experience of selling a supported version of LibreOffice is hard. LibreOffice has a powerful brand, and it is associated with everything being free as in beer. Some small subset of our community appear to believe that building product brands and businesses around LibreOffice is non-ideal, and that we should focus on providing ever more services free to enterprises. The perception that the ‘genuine’ version is LibreOffice from TDF is a real one, and it is stoked by the lack of systematic acknowledgment of the great benefits provided by the ecosystem.

Contributors are sometimes deeply emotionally attached to the project and the LibreOffice brand, and feel that promoting an alternative brand, even if doing so helps fund the work, is some sort of betrayal – or lack of neutrality. This sometimes extends to being eager to duplicate functionality, packaging, documentation etc. simply to re-brand it as LibreOffice.

This too is profoundly unfortunate. Others believe that FLOSS is fundamentally identified with a zero per-seat cost – perhaps plus some consultancy (installation, or some migration support) – and that having no SLA, and letting others fund long-term product investment, is the only sensible approach: maximising their apparent saving. Discussions with such parties are quite interesting – often oscillating between variants of “I should pay nothing per seat because it's FLOSS” and “The product is not yet quite good enough for us – you must fix it for free before we [don't] buy your product”.

It would be good to have TDF’s explicit support for selling branded support services and versions around LibreOffice to make this more socially obvious to those who are not members of our community.

Conclusions

The commercial ecosystem around LibreOffice is an unnecessarily tough environment to operate in. Companies contribute a large proportion of the work, and yet get very little acknowledgement – which in turn makes it hard for them to invest. This also creates an unnecessary tension with companies’ marketing – which has to focus on building their own brands. Companies should not have to fear the arrival of the LibreOffice brand to squash their work, claim credit for it, and present it as created by someone else – effectively depriving them of leads. This is unsustainable.

The LibreOffice project should give a new focus to promoting and celebrating all participants in its community – including ecosystem companies. This is far from a problem unique to companies. It is routinely the case that individual community members feel under-appreciated – they would like more recognition of their work, and promotion of their own personal public brands as valued contributors. This is something that TDF should re-balance its marketing resource into, in preference to product marketing.

The LibreOffice project should explicitly create space for enterprise distributions by pointing out the weaknesses of LibreOffice for enterprises on its hottest marketing properties. This would have the positive effect of encouraging companies to acknowledge and build the LibreOffice brand, safe in the knowledge that anyone visiting LibreOffice will get an accurate and balanced picture of their skills and contribution.

We badly need to increase diverse investment into our ecosystem by building an environment where deep investment into LibreOffice is a sound economic choice: economics ultimately drives ecosystem behavior. By creating the right environment – often not by acting, but by clearly and deliberately not acting in a space – we can build a virtuous circle of investment that produces ever better software that meets TDF’s mission.

Balancing what we market - Product vs. Community

It has been said that in life "You can either achieve things, or get the credit". Unfortunately, in the world of FLOSS, in order to sustainably achieve things you need to get the credit (and the associated leads, and hence sales). During the early creation of the project it was strategically necessary to under-emphasize corporate involvement, particularly SUSE’s heavy lifting – but those days are long past. Similarly, we need to build a brand or formulation that stands for all volunteer contributors to LibreOffice, and acknowledge them generously.

This is the state of play today in the LibreOffice world, but the good news is that this is just the background for a series of positive discussions and suggested actions to re-balance TDF's marketing. No significant change is proposed to our development process, timeline, branching etc. I believe most of these are common sense, and should be supported by the majority of the outside community, as well as those involved with the project – who have a more intuitive feel for the balance here. Some suggestions may be relevant to other vendor-neutral non-profits, but most are specific to LibreOffice. My plan is to post those suggestions to our public Marketing list and to have an open discussion there about how best to balance this. Potentially I'll add a summary here later.

And to whom do we market?

Thanks for reading – perhaps this may help some other communities improve their ecosystems too. For those interested, the source for my drawings is available.

Postscript to marketers

This paper has focused heavily on ways to improve our marketing and messaging. If you read this far – thank you! It should in no way be read as a personal critique of the people doing our marketing. Our existing positioning, approach and direction have accumulated over many years, and everyone in leadership in the project is responsible for where we are now. If you work in marketing – thank you for all you do, and the many positive messages that get out. Hopefully with some adjustments we can grow the project in exciting new directions, at some large multiple of its current trajectory, for the benefit of all.

This way – I hope we can meet the dream of gaining wider acceptance in the enterprise market.

2019-01-08 Tuesday

  • Poked at vendor neutral marketing bits and updated some pictures; mail chew, pleased to get a Purchase Order six months after signing a contract, nice.

January 07, 2019

Wikimedia in Google Code-in 2018

Newcomer and Mentor sticker designs

Newcomer and Mentor stickers designed by GCI 2017 participant Ashley Zhang, CC BY-SA 4.0.

Google Code-in (GCI) is an annual seven-week-long contest for 14–17-year-old students exploring free and open source software projects. Organizations, such as the Wikimedia community, offer small tasks in the areas of code, documentation, outreach, research, and design. Students who complete tasks receive a digital certificate and a shirt from Google. The top students in every participating organization win a visit to Google’s headquarters. Students can directly experience how large online projects are organized, collaborate with humans across the planet, and their accepted work is made available to millions of users worldwide.

For the sixth time, Wikimedia was one of 27 participating organizations which offered tasks mentored by community members.

In late 2018, 199 students worked on 765 Wikimedia tasks with the help of 39 mentors. To list only some students’ achievements and show the variety of projects, areas, and programming languages in the Wikimedia community:

…and many many more.

Some students have also written about their experience. Google also posted a summary with statistics.

We would like to congratulate our winners Nathan and Shreyas Minocha, our finalists arcaynia, Jan Rosa, takidelfin and Zoran Dori, and all contributors on their many contributions! We hope to see you around! We would also like to thank all our mentors for their commitment to being available even on weekends and holidays, for coming up with task ideas, working together, quickly reviewing contributions, and for providing feedback on what we could improve next time.
Thanks to everybody on IRC, Gerrit, Phabricator, mailing lists, GitHub, and Telegram for their friendliness, patience, support and help.

Wikimedia always welcomes contributions to improve free and open knowledge. Find out how you can contribute.

Easier syntax highlighting with Mallard

Mallard allows you to declare a content type for code blocks, which can then be used for syntax highlighting. We ship a copy of the excellent highlight.js with yelp-xsl, which in turn is used by tools like Yelp, yelp-build, and Pintail. And we colorize what highlight.js gives us using the same color templates used in the rest of the stack.

With Mallard 1.0, the way to declare the content type of a code block is using a mime attribute, which takes a MIME type.

<code mime="text/x-csrc">

This seemed like a good idea at the time. MIME types are the most universal standard for identifying stuff like this, for better or for worse. In practice, it’s been pretty cumbersome. Almost none of the kinds of things you’d put in a code block have registered MIME types. You just have to look up and remember arbitrary long strings. Sad face.

In Mallard 1.1, we’re introducing the type attribute and deprecating the mime attribute. The type attribute can take a simple, short string identifier, just like other documentation formats use.

<code type="c">

The strings are shorter than ad hoc MIME types, and generally easier to remember, or even to guess. For most programming languages, both the file extension and the lowercase name of the language probably work. If they don’t, tell us. The type attribute can also take a space-separated list. So you can list a more specific type alongside a generic type you know will match. This is useful for XML vocabularies.

<code type="xml mallard">

This is also particularly useful when you’re also doing things besides syntax highlighting with code blocks of a certain type. The type attribute (and mime before it) is useful for syntax highlighting, but that’s not its only purpose. You could use it to extract code samples for linting, for example. On projectmallard.org, we automatically extract the RELAX NG schemas from the specifications using (currently) the mime attribute. So it can be useful to list multiple types.
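
As a rough illustration of that kind of tooling, here is a minimal sketch in Python (not the code projectmallard.org actually uses; the file name is made up) that pulls code blocks with a matching type token out of a Mallard page:

import xml.etree.ElementTree as ET

MALLARD_NS = "{http://projectmallard.org/1.0/}"

def code_blocks(page_path, wanted):
    # Yield the text of every <code> block whose type (or legacy mime)
    # attribute contains the wanted token, honoring space-separated lists.
    for code in ET.parse(page_path).iter(MALLARD_NS + "code"):
        tokens = (code.get("type") or code.get("mime") or "").split()
        if wanted in tokens:
            yield "".join(code.itertext())

for sample in code_blocks("index.page", "mallard"):
    print(sample)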

One other really nice thing about using the type attribute is that it takes advantage of a syntax shorthand in Ducktype. Ducktype is a compact syntax designed to do everything Mallard can do. Attributes like type, style, and xref are so common that Ducktype gives you a shorthand way of writing them. For the type attribute, you just put the value there on its own. So in Ducktype, it’s as easy as this:

[code c]

Or this:

[code xml mallard]

We could always use help with testing and with updating the documentation. See my post to mallard-list for more information.

The worst ANSI renderer, except for all the others

Chafa (github) started out as a small piece of supporting code for an obscure personal project I may announce at some indefinite point in the future. Then I decided to release it as a tongue-in-cheek thing for the VT100 anniversary last year, and, well… it gathered a bit of steam.

Chafa 1.0

Since I’m not one to leave well enough alone, I packaged it up over the holidays for the 1.0 release. It brings a pile of improvements, e.g. new symbol ranges like ASCII and Braille, better image preprocessing and a new --fill option for halftone/dithering. It’s also, like, real fast now, and the build is much less brittle.

Big thanks to everyone who contributed to this release: Adam Borowski, Felix Yan, Lajos Papp, Mo Zhou, Ricardo Arguello, Robert-André Mauchin, @dcb314 and @medusacle.

You’ll find packages out there for Arch, Debian, Fedora, Gentoo and Ubuntu now; check your repositories. Extra big thanks to the package maintainers.

As the post title implies, I think Chafa is now the least bad tool in this tiny but tradition-rich niche. So what’s so not-quite-terrible about it?

Only the best for your aixterm

If you’ve been around text terminals for a while, you’ll know what this means:

16 colors

Up until fairly recently, the most colorful terminal applications would operate within the confines of some variant of the above palette, and many still do. It’s a nice and distinct (not to mention cheerful) set of colors, but it’s not a great match for most image material you’ll come across, which makes legible Chafa output a challenge. And that is precisely why I had to try (here with a nice woodblock print):

ANSI art variations

The top left image is the best reproduction you can get with Chafa using a modern 24-bit color terminal emulator (in this case, GNOME Terminal) at 60 characters wide. At top right is the 16-color mapping you get without applying any tricks; this is pretty close to the mark given the muted moonlight colors of the input image, but it doesn’t make good use of our palette, nor does it convey the essence of the scene very well. Blue sky, rolling hills, green grass. A shady waterfront pavilion. Given our limitations, the output will look nothing like the original anyway, so we’re better off trying to capture the gist of it.

We do this simply by cranking up the contrast and saturation to levels where our cheerful old palette can do a decent job (bottom left). Chafa no longer relies on ImageMagick for this, so it’s available in the library API, and integer-only math makes it performant enough to run in real time on animations and video (more on this in a later post, perhaps).

It gets even better if you do color assignment in DIN99d space (bottom right), but that’s way slow, so you have to explicitly enable it.
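
For the curious, the variations above combine options along these lines; a hedged example, with flag spellings as I recall them from Chafa 1.0 (check chafa --help) and a made-up image path:

import subprocess

# 16-color output with color assignment done in DIN99d space, as described
# above; verify both flags against your installed Chafa version.
subprocess.run(["chafa", "--colors=16", "--color-space=din99d", "pavilion.png"],
               check=True)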

No ANSI? No problem

Braille and ASCII art

You can generate varied output without any color codes at all. The above examples also demonstrate use of the --fg option, which, along with --bg, exists to tell Chafa what your terminal’s default colors look like so it can target those, but is equally useful for tweaking monochrome output thresholds.

So if for some reason your terminal can’t handle ANSI or you still have one of those warm & fuzzy monochrome tubes sitting around, we’ve got you covered. Or break out your civil rights-era dot-matrix printer and make some precision banners to hang in your lab!

January 06, 2019

Talking at HITCon 2018 in Taipei, Taiwan

I was invited to give a talk at Hacks in Taiwan Conference, or HITCon. Since I missed the GNOME Asia Summit and COSCUP just before, I was quite happy to go to Taiwan still.

The country is incredibly civilised and friendly. I felt much more reminded of Japan than of China. It’s a very safe and easy place to travel. The public transportation system is fast and efficient. The food is cheap and you’ll rarely be surprised by what you get. The accommodation is a bit pricey, but we haven’t been disappointed. But the fact that Taiwan is among the 20 countries least reliant on tourism – you might also say that they have not yet developed tourism into a GDP-dominating factor – shows. Many Web sites are in Chinese only. The language barrier is clearly noticeable, albeit fun to overcome. Certain processes, like booking a train ticket, are designed for residents only, leaving tourists no option but to go to a counter rather than using electronic bookings. So while it’s a safe and fun country to travel, it’s not as easy as it could or should be.

The conference was fairly big; I reckon there were at least 500 attendees. The tracks were a bit confusing, as there were info panels showing the schedule, a leaflet with the programme, and a Web site indicating what was going on, but all of those contradicted each other. So I couldn’t know whether a talk was in English, Chinese, or a wild mix of the two. It shouldn’t have mattered much, because, amazingly enough, they had live translation into either language. But I wasn’t convinced by their system, because they had one poor person translating a whole talk alone, and after ten minutes or so I noticed the guy losing his concentration.

Anyway, one interesting talk I saw was given by Trend Micro’s Fyodor, about fraud in the banking and telephony sectors. He said that telcos and banks are quite similar, and in fact a phone is often required in order to perform a banking operation. And in certain African countries, telcos like Vodafone are pretty much a bank. He showed examples of how these sectors are being attacked by groups with malicious intent. He mentioned, among others, the Lazarus group.

Another interesting talk was about Korean browser plugins which are required by banks and other companies. It was quite disastrous. From what I understood, the banks require you to install their software, which listens on all interfaces. The bank’s Web site then contacts that banking software, which in turn cryptographically signs a request or something. That software, however, is full of bugs – so bad that you can exploit them remotely. To make matters worse, the software installs itself as a privileged program, so your whole machine is at risk. I was very surprised to learn that banks take such an approach. But then again, banks currently require us to install their proprietary apps on proprietary phone operating systems, and at least on my phone those apps crash regularly :(

My own talk was about making operating systems more secure and making more secure operating systems. With my GNOME hat on, I mentioned how I think the user needs to be led through a cruel world with omnipresent temptation to misbehave. I have given similar presentations a few times and have developed a few questions and jokes to get the audience back at a few difficult moments during the presentation. But that didn’t work so well due to the language barrier. Anyway, it was great fun and I still got some interesting discussions out of it afterwards.

Big kudos to the organisers who have been running this event for many many years now. Their experience can certainly be seen in the quality of the venue, the catering, and the selection of speakers. I hope to be able to return in the next few years.

January 05, 2019

How Tracker is tested in 2019

I became interested in the Tracker project in 2011. I was looking at media file scanning and was happy to discover an active project that was focused on the same thing. I wanted to contribute, but I found it very hard to test my changes; and since Tracker runs as a daemon I really didn’t want to introduce any crazy regressions.

In those days Tracker already had a set of tests written in Python that tested the Tracker daemons as a whole, but they were a bit unfinished and unreliable. I focused some spare-time effort on improving those. Surprisingly enough, it’s taken eight years to get to the point where I’m happy with how they work.

The two biggest improvements parallel changes in many other GNOME projects. Last year Tracker stopped using GNU Autotools in favour of Meson, after a long incubation period. I probably don’t need to go into detail of how much better this is for developers. Also, we set up GitLab CI to automatically run the test suite, where previously developers and maintainers were required to run the test suite manually before merging anything. Together, these changes have made it about 100000% easier to review patches for Tracker, so if you were considering contributing code to the project I can safely say that there has never been a better time!

The Tracker project is now divided into two parts, the ‘core’ (tracker.git) and the ‘miners’ (tracker-miners.git). The core project contains the database and the application interface libraries, while the miners project contains the daemons that scan your filesystem and extract metadata from your interesting files.

Let’s look at what happens automatically when you submit a merge request on GNOME GitLab for the tracker-miners project:

  1. The .gitlab-ci.yml file specifies a Docker image to be used for running tests. The Docker images are built automatically from this project and are based on Fedora.
  2. The script in .gitlab-ci.yml clones the ‘master’ version of Tracker core.
  3. The tracker and tracker-miners projects are configured and built, using Meson. There is a special build option in tracker-miners that makes it include Tracker core as a Meson subproject, instead of building against the system-provided version. (It still depends on a few files from the host at the time of writing.)
  4. The script starts a private D-Bus session using dbus-run-session, sets a fixed en_US.UTF8 locale, and runs the test suite for tracker-miners using meson test (see the sketch after this list).
  5. Meson runs the tests that are defined in meson.build files. It tries to run them in parallel with one test per CPU core.
  6. The libtracker-miners-common tests exercise some utility code, which is duplicated from libtracker-common in Tracker core.
  7. The libtracker-extract tests exercise libtracker-extract, which is a private library with helper code for accessing file metadata. It mainly focuses on standard metadata formats like XMP and EXIF.
  8. The functional-300-miner-basic-ops and functional-301-resource-removal tests check the operation of the tracker-miner-fs daemon, mostly by copying files in and out of a specific path and then waiting for the corresponding changes to the Tracker database to take effect.
  9. The functional-310-fts-basic test tries some full-text search operations on a text file. There are a couple of other FTS tests too.
  10. The functional/extract/* tests effectively run tracker extract on a set of real media files, and test that the expected metadata is extracted. The tests are defined by JSON files such as this one.
  11. The functional-500-writeback tests exercise the tracker-writeback daemon (which allows updating things like MP3 tags following changes in the Tracker database). These tests are not particularly thorough. The writeback feature of Tracker is not widely used, to my knowledge.
  12. Finally, the functional-600-* tests simulate the behaviour of some MeeGo phone applications. Yes, that’s how old this code is 🙂
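
For a feel of what steps 3 and 4 amount to, here is a hedged sketch; directory names and exact commands are illustrative, not copied from the real .gitlab-ci.yml:

import os
import subprocess

# Build with Meson, then run the suite inside a private D-Bus session with a
# fixed locale, roughly as the CI script does.
env = dict(os.environ, LANG="en_US.UTF-8", LC_ALL="en_US.UTF-8")
subprocess.run(["meson", "setup", "build"], check=True, env=env)
subprocess.run(["ninja", "-C", "build"], check=True, env=env)
subprocess.run(["dbus-run-session", "--", "meson", "test", "-C", "build"],
               check=True, env=env)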

There is plenty of room for more testing of course, but this list is very comprehensive when compared to the total lack of automated testing that the project had just a year ago!

GNOME Calculator ... a library in the making

This post is about the reasons, as I see them, for splitting GNOME Calculator in two and rewriting part of it (without absolute knowledge about the project, so if you disagree with anything and/or have any ideas, just let me know), and about raising awareness of the process (possibly late, but not too late).

The problem

GNOME Calculator is a handy little application. Long story short, it is a calculator application for GNOME, as you all know. Written (and rewritten) for GNOME, it includes the lexer, parser, and evaluation of expressions, plus a GTK+-based user interface to access the features of the calculation engine. This "engine" currently lives inside the project tree's lib folder, and is used as a static library by the application. The library seemed fairly well split from the user interface, but it turned out there is a dependency on GtkSourceView, because the mathematical equation subclasses Gtk.SourceBuffer (from the gtksourceview library) for easier handling, which is itself a subclass of Gtk.TextBuffer (from gtk+). So gtk+ is a transitive dependency of the calculator library. Moreover, the library also has a direct dependency on gtk+, due to holding a reference to a Gtk.TextTag for "marking" the answer part of the equation, to be able to reuse it and/or find it programmatically in the text view, or visually by marking it with bold characters.

With all this stated, you can see that if you would like to build a simple console application for evaluating expressions using this library, you would have to pull in gtk+ and gtksourceview as dependencies, which might not be the best thing to do.
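
To make the coupling concrete, here is a hedged Python sketch (the real code is Vala, and the tag setup is simplified) of why even a console consumer of the engine drags in both toolkits:

import gi
gi.require_version("GtkSource", "3.0")
from gi.repository import GtkSource

# The equation buffer inherits from GtkSource.Buffer (gtksourceview), which
# inherits from Gtk.TextBuffer (gtk+), so importing the engine pulls in both
# libraries even for a program with no UI at all.
class MathEquation(GtkSource.Buffer):
    def __init__(self):
        super().__init__()
        # The direct gtk+ dependency: a Gtk.TextTag marking the answer.
        self.ans_tag = self.create_tag("answer", weight=700)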

The solution


It would be great to have the calculation engine without a dependency on any display toolkit, so that it can be reused by other projects. The library source could live inside the gnome-calculator source tree, but it could also live separately. I would go with the first option for now (it looks easier to me), unless you have a good reason to move it to a separate git repo. And if you have one, please share your reason.

The library license

Now the harder question is: if we want the library to be used by other projects, what license should it use? Currently, as it lives inside the gnome-calculator tree, it is licensed under the GPL(v3), but that has certain restrictions, and some people argue that the LGPL is usually a better fit for libraries. I'm always afraid of licensing questions, and I'm always asking for help in these matters, so as the person asking for the library split was also asking for a re-licensing of the library, I got a bit worried. But fortunately it will not be the first relicensing in history, so there is quite some information on how to do it. And it involves contacting ALL authors to check if they are OK with the re-licensing. And there are quite a few authors, ranging from easy-to-find people with email addresses, to people without email addresses, to companies without contact (e.g. Sun Microsystems – acquired by Oracle quite some time ago).

The process


Daniel Espinoza Ortiz has been working hard for quite some time to make this happen, with a couple of people jumping in with ideas and feedback on his MR or the related bug report.
Daniel has already contacted all the authors he could (with some level of success, and some responses accepting the relicensing), and he is working hard on re-implementing the parts of the code authored by people/entities who could not be contacted or who didn't respond.

Feedback is important

I can see the obvious benefit of these changes for the project and am willing to continue (meaning: to review and accept the related MR) with the re-licensing and library split, the benefits outweighing the disadvantages for me. But I might have missed something you know, or have already experienced while doing something similar, so it would be great if you could help Daniel and us finish this transition in any of the following ways, after checking the discussion in the MR or in the issue:
  • if you are an author/contributor of any of the files in gnome-calculator/lib folder and you didn't receive anything related to re-licensing, please state if you accept or refuse the re-licensing from GPL to LGPL
  • if you have any suggestions on the process (either re-licensing or splitting out a backend library of a project), what to look out for, just share in a quick comment here or in the MR or the issue
  • if you have any suggestions on the MR, just join the discussion



January 04, 2019

2019 – New directions

It has been a while since you’ve heard from me. My keyboard’s Q, W and E buttons broke, but that will not prvnt m from making som nois on this blog anyay!

Pencil and paper

Since August 2018 I have been attending a classical drawing course full time. The craft has given me a great foundation for understanding composition, value and color judgement and I feel much stronger in my ability to plan, judge and execute visual artwork. My hands are also much more willing to do what my mind imagines and my thought process has been thoroughly challenged and turned upside down by my teacher.

Click to find BastianIlso's studies on Pixelfed

You may click the picture above if you are curious to see some studies. All in all, I’m far from done with classical drawing and I still have much to learn. I have every intention of returning to the school and continuing my studies in the future!

Career

Back in August I posted about finishing my master’s degree and looking for new opportunities. Since then I have been offered a position as a research assistant at Aalborg University’s Interaction Lab, which after much consideration I have accepted. Since January 1st 2019 this has been my full-time occupation, and it will continue to be until 2020. The department is nice and I’m happy to be surrounded by many bright heads I can discuss with and learn from every day. I will mainly be occupied with HCI research around mobile interaction, virtual reality and health technology. Which brings me to a sadder point..

GNOME Release Videos Needs New Hands!

It’s hard for me to let go, but reason tells me that it is time to pass on the torch of release video production for the time being. 10 videos is a nice round number and a good place for me to step down. None of them was ever a one-person project, and I deeply thank everyone for their contributions, small and big! I’m far from convinced that I have hit the right magic release video flavor yet, but the videos require a large concentration of time that I no longer have on my hands to give. That said, get in touch if you are interested in being the next video production person! I will gladly supervise, pass on the necessary details and give feedback throughout the process. I’m unfortunately hard to get hold of on IRC/Matrix these days, but quite easy to get hold of on Telegram and e-mail.

FOSDEM

This is not a goodbye post, let’s just make that clear. I’m going to FOSDEM to take care of the GNOME stand and I’m bringing lots of socks! I’m eager to meet all of you fellow GNOMEies again. I have arranged an apartment which I will be sharing with Tobias, Florian and Julian and I’m looking forward to it!

So all in all, lots of things. I’m in the middle of moving out of my student dormitory so there’s still stuff to do. Let’s see what else 2019 brings! Happy new year!

January 02, 2019

F29 release parties in Brno and Prague

I’ve been organizing Fedora release parties in Brno since Fedora 15 (2011), and with the great release of Fedora 29 I couldn’t make an exception. With the help of Květa Mrštíková, Lenka Čvančarová, and all the speakers, I organized F29 release parties in Brno (Nov 26) and in Prague (Dec 4).

The Brno one was hosted in the Red Hat offices in Brno and all the speakers were redhatters. I kicked off the event with a talk on what's new in F29 Workstation. Then Michal Konečný continued with his experience using Silverblue (the OSTree-based Workstation). František Zatloukal talked on his passion – gaming on Fedora. After the recent release of Proton by Valve, there was a lot to talk about. The last talk was delivered by Lukáš Růžička and it was about maybe the biggest feature in Fedora 29 – modularity.

Michal Konečný talking on Silverblue.

The party was attended by 50+ visitors, both from Red Hat and the local community (mainly students). Besides food for their minds (talks) there were also refreshments and all kinds of Fedora swag.

A break between talks.

Fedora parties in Prague are usually smaller, simply because Red Hat doesn’t have a large office there and visitors come from the local community. A smaller number of people and a very cozy venue, provided by Etnetera, create a very informal atmosphere that generates interesting discussions. I must say I enjoyed this release party perhaps the most of all I’ve ever organized.

Christmasy atmosphere for the F29 party in Prague.

I started with a talk on Workstation, and since there was no talk on Silverblue I also talked about my experience with it. My talk blended into interesting discussions about related topics and took over an hour. But I really enjoyed it, because it didn’t feel like talking to a silent crowd: some attendees contributed interesting points and pushed me to clarify some of the things I talked about. We also had a talk on modularity, this time by Adam Šamalik, and František Zatloukal came with me from Brno to talk on gaming on Fedora. I was really looking forward to the talk by Ondřej Koch from the National Library of Technology, where they deployed Fedora Workstation on ~200 PCs. Unfortunately he didn’t show up. Then one of the attendees stepped up and gave a talk on how he created a ZFS-based backup solution for the really small municipality of his home village.

Adam talking on modularity.
And people listening.
Fedora swag in Prague.

The number of attendees was around 25, and again, besides talks, we also prepared some food refreshments (courtesy of Red Hat) and a small keg of beer (courtesy of Etnetera). Lenka also surprised everyone with a cake for the 15-year anniversary of the Fedora Project.

Lenka cutting the cake.

I’d like to thank everyone who helped with the events, and Red Hat and Etnetera for providing the venues and refreshments.

January 01, 2019

Ready for 2019

This blog has not had many posts in 2018 but the “new year’s post” is almost mandatory, so here it goes.

Family

December was similar to last year: spending most of the time in Portugal with the family and coming back to Berlin right before New Year’s Eve.
After 3 years in Berlin, I think I am finally reconciled with the oddities/particularities of the city, my German is improving, and I do enjoy living here more than last year. We do miss our friends in Geneva and other places, and of course it’s more and more difficult to leave our family in Portugal after seeing how much our son and daughter enjoy being with their grandparents and cousins.

A picture of Lagos in Portugal, showing a landscape with big rocks and a cliff by the sea.

Lagos, Portugal, where I spent part of my vacation

Still on the personal side, the biggest event this year was my daughter’s surgery. She needed throat surgery to remove part of her tonsils, in order to breathe and sleep better, among other things that would improve as a result (having more energy, eating and growing more, etc.). It was a “simple” throat surgery, but not without risks (we spent 5 days in the hospital for the post-surgery recovery and observation). I tried to explain it to her as if it were a special sleep-over at the hospital where the folks there would help her breathe and sleep better, so much so that she was disappointed when we had to reschedule the surgery 3 times (2 times because she was sick, 1 time because the hospital organization is not the best and they lost our appointment!). She faced the event like the brave girl she is, and she was always patient too. In the end, the results were amazing and could be seen almost immediately: she now has much more energy, sleeps well, eats and speaks better… like night and day!
I hope my son doesn’t have the same condition, but judging by how much energy, strength, and overall physical agility he has, I’d say he’s fine 🙂

I cannot emphasize enough, though, that even if the surgery scheduling was a mess, the surgery and post-surgery care could not have been better. From the doctors to the nurses and other assistants, everybody was really nice, patient, and professional. Having any surgery on your child is always something very delicate and challenging to deal with, and the staff did make me feel like my daughter was in the best hands possible. Of course, this care is the same anyone would get with normal/public coverage in Germany, and thus it’s even more remarkable. Even though in the EU we sometimes take universal health coverage for granted, it’s good to remind ourselves how precious it is.

Work

2018 also meant some changes in my daily work at Endless as I joined a new team to help deliver a new project with a different type of users. This project is called Hack, and aims to deliver a desktop computer experience that integrates elements to teach programming and computing concepts to users from age 8 and up. This also meant that I traveled twice to work with the rest of the team from the San Francisco office, and it’s always nice to hang out with my colleagues directly.
We have already shipped the first computers around Christmas in a great effort from everyone involved (in this team and others), and I am proud of what we’ve achieved! There is still a lot of work to be done, so be sure to follow the news about the project.

Between this new project and being a father of two, I didn’t really have much time/energy left for side projects, but I still managed to give a presentation about ostree and Flatpak at CERN, and at the Linux Technologies Berlin meetup, which I really enjoyed. I hope to have the opportunity to give more presentations this new year too.

I guess that’s already a good enough summary for the small attention span we all have in this decade, so I will leave it here.

Have a great 2019, everyone!

December 31, 2018

GNOME Security Internship - Update 2

The introduction and the first update can be found here and here.

Allow one keyboard even when screen is locked

Let’s hypothesize that you choose to protect your PC from new USB devices when the lock screen is active. USBGuard does its job, and every USB device plugged in while the screen is locked gets blocked. The key word here is every.

What if your keyboard breaks? You go to your garage searching for an old working keyboard. After 15 minutes of searching you find it and return to your PC. The screen is now locked, because the automatic timeout has passed. You plug in the replacement keyboard, but it doesn’t work, because the screen is locked.

You could reboot, but that way you’d lose all your unsaved work.

In order to gracefully handle this situation we added an exception to the block rule. Now if you plug in a keyboard we authorize it if, and only if, it is the only keyboard currently available.

And what if an attacker disconnects my keyboard, plugs in his infected device and then reconnects my keyboard? In this scenario the attacker’s device will be authorized (if it advertises itself as a keyboard), but the session will still be locked. So he needs to reconnect your keyboard too, because he has to wait for you to return and unlock the PC. But when he reconnects your keyboard it will not be authorized, because it is no longer the only available keyboard in the system. Also, we don’t permanently store a list of whitelisted devices, so when you replug a device it is treated as new, as if we had never seen it before.

In order to check whether the plugged device is a keyboard, we check if the USB interface class is “03:00:01” or “03:01:01”, as described in the USB specs.
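
A minimal sketch of both checks; the device objects and their fields are invented for illustration, the real logic lives in the GNOME components described below:

KEYBOARD_INTERFACES = {"03:00:01", "03:01:01"}   # HID class, keyboard protocol

def is_keyboard(device):
    # A device can expose several interfaces; one keyboard interface is enough.
    return any(iface in KEYBOARD_INTERFACES for iface in device.interfaces)

def authorize_while_locked(new_device, connected_devices):
    # Allow a newly plugged keyboard only when no other keyboard is present.
    return is_keyboard(new_device) and not any(
        is_keyboard(d) for d in connected_devices if d is not new_device)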

Are there currently some problems with this? Unfortunately yes, because not everything is black and white. This method works very well until your “gaming” programmable mouse also advertises itself as a keyboard. In that scenario we do not authorize your replacement keyboard, because we see that there is already a working one.

In the end this solution definitely improves the situation, though in the future it could probably be revised and tweaked a bit more.

Initial work regarding a “smart” always block

This is something still in an early stage. In a first version, if the screen is not locked and you plug in a keyboard, the screen will be locked.

This works well in theory, but not in practice, because devices with physical keys are not the only ones that present as keyboards. For example, hardware 2FA devices (e.g. a YubiKey) are also keyboards, and locking the screen every time you plug one of those in is not a pleasant experience. So for a first implementation this solution is OK, but it definitely needs to be improved.

One way to do this could be by mapping scancodes to keycodes, limiting particular devices’ capabilities (e.g. preventing them from using risky keys like “alt” or “ctrl”). Anyway, this will require more research about what we can do and what’s the best way to do it.
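
As a toy sketch of that direction (nothing like this is implemented yet; key names follow the Linux input-event conventions):

RISKY_KEYCODES = {"KEY_LEFTCTRL", "KEY_RIGHTCTRL", "KEY_LEFTALT", "KEY_RIGHTALT"}

def sanitize_keymap(keymap):
    # keymap: {scancode: keycode name}. Dropping the risky entries means a
    # rogue "keyboard" could still type text but could not trigger shortcuts.
    return {sc: kc for sc, kc in keymap.items() if kc not in RISKY_KEYCODES}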

How do we notice USBGuard configuration changes?

Previously this part was handled by GNOME-Control-Center (g-c-c), meaning that it had the job of keeping what we have in gsettings and the internal state of USBGuard in sync. But this was a problem if, for example, the user manually changed the USBGuard configuration while g-c-c was closed: we would have noticed the change only the next time g-c-c was opened.

Refactoring!

In order to solve this, quite a substantial refactoring happened. Now we have:

  • GNOME-Control-Center: part of its logic migrated to GNOME-Settings-Daemon. Now it just syncs with the gsettings schemas and no longer talks directly to USBGuard.

  • gsettings-desktop-schemas: as before, we store the desired USB protection level here, under org.gnome.desktop.privacy (see the sketch after this list).

  • GNOME-Shell: previously g-s had the job of authorizing new USB devices, mainly because it was the only component that was always running. Now g-s is just used to display an indicator icon when the USB protection is active.

  • GNOME-Settings-Daemon: this is the new entry; now we have a daemon that is always running. It has two jobs: it keeps the USBGuard configuration in sync with the schemas in gsettings, and it is the one that authorizes new USB devices.
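
As an illustration of the gsettings side, reading the stored level might look like this; the key name is my assumption, so check the installed schema:

from gi.repository import Gio

# Hedged illustration; "usb-protection-level" is an assumed key name under
# the org.gnome.desktop.privacy schema mentioned above.
settings = Gio.Settings.new("org.gnome.desktop.privacy")
print(settings.get_string("usb-protection-level"))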

USB protection status icon

Enable and disable USB protection

From GNOME-Control-Center we give the user the ability to set the protection level to:

  • “Never”: meaning that you don’t want any sort of protection. You want every new device to be authorized.

  • “When lockscreen is active”: meaning that you want the protection only when the session is locked.

  • “Always”: meaning that you always want the protection.

With “never” we set USBGuard’s InsertedDevicePolicy to apply-policy and we prepend allow id *:* to the rule file.
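
A hedged sketch of that “never” case; the file path is assumed, and the real implementation drives USBGuard itself rather than editing the file like this:

RULES_FILE = "/etc/usbguard/rules.conf"   # assumed default location

# Make sure a catch-all allow rule comes first, so every device is accepted.
with open(RULES_FILE) as f:
    rules = f.read()
if not rules.startswith("allow id *:*"):
    with open(RULES_FILE, "w") as f:
        f.write("allow id *:*\n" + rules)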

Good, but what if I want to configure USBGuard manually and don’t want GNOME to mess with my config anymore – how do I do it?

To handle this scenario we added a more explicit on/off switch to g-c-c. “On” actually means “I want GNOME to handle USBGuard”. On the other hand “off” means “I want to handle USBGuard by myself”.

g-c-c switch

What to expect next and comments

In the coming days I’ll try to improve the always-on protection. I also need to add a more robust way to check whether we have a working USBGuard on the system. I’ll also do some extensive testing, checking the behaviour in different environments and scenarios.

Feedback is welcome!

Happy new year!

December 28, 2018

The dream of LILA and ZeMarmot

Imagine a movie studio, with many good artists and technicians working on cool movies or series… and releasing them under Libre Licenses for anyone to see, share and reuse. These movies could be watched freely on the web, or screened in cinemas or on TV, wherever.

Imagine now that this studio fully uses Free Software (and Open Hardware, when available!), and while it produces movies, it debugs and fixes the upstream software it uses, and also improves it as needed (new features, better interaction and design…). This would include end-user software (such as GIMP, Blender, Inkscape…), as well as the desktop (currently GNOME), the operating system (GNU/Linux) and more.

Now you know my dream with the non-profit association LILA and the ZeMarmot project (our first movie project, an animation film). This is what I am aiming for. Actually this is what I have been hoping for since the start, but maybe it was not clear enough, so I decided to spell it out.

If you like this dream, I would like to encourage you to help us make it real by donating either through Patreon, Tipeee, Liberapay, or other means (such as direct donation by wire transfer, Paypal, etc.).

If you want to read more first, I am going to add details below!

My current job

I hinted that there was some cool stuff going on lately on a personal level, and here it is: I have been hired by CNRS for a year to develop things related to GIMP and G’Mic.

This is the first time in years I have a sustainable income, and it is for continuing to work on GIMP (something I had already done for 6 years before this job!). How cool is that?
For the full story, I was first approached by the G’Mic team to work on a Photoshop plug-in, which I politely declined. I have nothing against Photoshop, but this is not my dream job. Then the project got reworked so that I would continue working on GIMP instead. In particular we identified 2 main projects:

  1. The extension management in GIMP I already talked about (back then, I had no idea I would be hired for it), since it will help G’Mic spread a lot. I will also use the opportunity to improve plug-in support in GIMP.
  2. Implementing their smart colorization algorithm within GIMP. This was actually my own idea when they proposed to work with me, as it fits very well with my own plans, and would finally make their algorithm “useful” for real work (the interaction within G’Mic is a bit too painful!). I will talk a bit more about this soon in a dedicated post, but here is a teaser:

Where does ZeMarmot project stand?

ZeMarmot is my pet project (together with Aryeom’s), I love it, I cherish it, and as I said, this is where I see a future (not necessarily just ZeMarmot, but what it will lead to). Even though I now have another temporary source of income, I really want to stress that if you like what I do in GIMP, you should really fund ZeMarmot to ensure that I can continue once this contract ends.

This year with CNRS is an opportunity to give the project the chance to bloom. Because let’s be honest, it has not bloomed yet! Every year it is the same story, where we are asking for your help. And when I see all the other foundations and non-profits having started to ask for help a month earlier, I know we are very bad at it. We are technical people here (developers, animators…) who suck at marketing and ask at the last second.

Right now, we are crowdfunded barely above 1000 € a month, which is not even enough to pay someone full time at the legal minimum wage in the country we live in. Therefore in 2018, LILA was able to hire Aryeom (direction/animation work) and myself (development) 6 days a month each, on average. It’s not much, right? Yet this is what we have been living off.
We need much more funding. To be clear, the minimum wage (full time) in France costs about 2100€, and we estimate that we’d need 5000€ per person to fund a real salary for such jobs (the same estimation as the Blender Foundation makes), though that is still below average market value. So LILA is a factor of 4 away from being able to afford 2 salaries at minimum wage, and a factor of 10 away from being able to pay reasonable salaries (hence also hiring more people). How sad is that?

What is LILA exactly?

LILA is officially registered in France as a non-profit association. It also has an activity number classifying it as a movie production company, which makes it a very rare non-profit organization, allowed to hire people to produce movies – which it has now done for nearly 3 years.

The goal of this production is not to enrich any shareholders (there are no such things here). We want to create our art, and spread it, then go to the next project, because we love this. This is why ZeMarmot is to be released under a Creative Commons by-sa license, which will allow you to download the movie, share it with your friends and family, even sell or modify it. No kidding! We will even upload every source image with layers and all!

Still, LILA intends to pay an appropriate salary to every person working. Because we don’t believe that Libre Art means “crappy work” or “amateur”. Is it for fun? Yes. But it’s also professional.

So if it had crazy funding, LILA would not give us decadent salaries, but would hire more people to help us make the art/entertainment world a better place! That’s also what it means to be a non-profit.

And Free Software in all that?

There is this other aspect of our studio: we use Free Software! Not only that, we also develop Free Software! And when I say we develop Free Software, I don’t mean we release some weird internal script used by 2 people in the world once in a while. Mostly, we are part of the GIMP team. In the last few years, we have accounted for about a fourth of the commits to GIMP (you can just check the commits, in particular by myself, “Jehan”, as well as the ones by Aryeom and Lionel N.). I have also pushed for years to improve the release policy, to be able to get new features out more often (which finally happened with GIMP 2.10.0!). I believe we are providing positive and meaningful contributions.

GIMP is our main project these days, but over the years I have had a few patches in many important pieces of software! And we regularly report bugs when we don’t have time to fix them ourselves… We are early graphics tablet adopters, so we are in contact with some developers from Wacom or Red Hat (and we are sorry to them, because we know I can be annoying with some bugs sometimes! 😛). And so on. The only thing preventing me from doing more is time. I know I just need more hands, which will only happen if we have enough funding to start hiring other Free Software developers.

And let it be known, this is not a temporary thing because we don’t want to pay for some proprietary license or whatever. No, we just believe in Free Software. We believe this is the right thing to do, because everyone should have access to the best software. But also, we are making much better software this way. I said we do about 1/4 of GIMP commits; this still means that 3/4 are made by other people, and that is even forgetting GEGL (GIMP's graphics engine), which is awesome too. Basically we would not be able to do this well just by ourselves. We really enjoy working with some of the sharpest minds I have had the chance to work with in software. Not only that, the other GIMP developers are really cool and agreeable as well. What more could you ask for? This is what Free Software is.
So yeah: using and contributing to Free Software is actually in our non-profit studio's bylaws, our “contract as a non-profit”, and we won’t drop this.

2018 in review

Just a very quick review of things I brought in GIMP in 2018:

  • 633 commits, hence nearly 2 commits a day on average, in the master branch of GIMP (and more on in-progress feature branches) + patches on various projects we use (GEGL, glib, GTK+, libwebp, AppStream…)
  • Helping MyPaint so that they can soon release a new libmypaint v1 (hopefully early 2019), and creating the data package mypaint-brushes (now an upstream MyPaint package!).
  • Creating and maintaining the GIMP flatpak on Flathub (according to what we were told, the most downloaded software on Flathub!)
  • Automatic image backup on a crash of GIMP
  • Debug tools for automatic gathering of debug data (stack traces, platform info…)
  • HiDPI basic support on GIMP 2.10 (and more work on HiDPI on future GIMP 3)
  • Work-in-progress for extension management in GIMP
  • Maintenance of some data (icons, brushes, appdata, etc.) in GIMP
  • Tablet and input debugging
  • Mentored an FSF intern (improved JPEG 2000 support)
  • Fixed most cases of plug-in DLL hell on Windows (it used to be the cause of a huge number of bug reports!)
  • Reviewed and improved many features (auto-straighten in Measure tool, libheif and libwebp support, screenshot plug-in, vertical text in text tool, and so much more that I can’t list them all!)
  • Smart colorization option in bucket fill

And everything I have probably forgotten about. I also help a lot with maintaining the website and writing news on gimp.org (63 commits this year). And this all doesn’t count the non-GIMP related patches I do sometimes (for instance on Korean input) or the many reports we write and help to fix (notably since we were probably the first to install Linux on a Wacom MobileStudio, or at least to talk about it, several bugs were fixed because we reported them and helped debug, even down to the kernel, or Wayland).

And then there is what Aryeom did in 2018, which is a lot of work on ZeMarmot of course (working on animation is a loooot of work; maybe we could do some blog post about it sometime), so we are getting closer to the pilot release (by the way, we recently created an Instagram account where Aryeom posts some images and short videos of her work in progress!). And also some side projects to be able to sustain herself (remember that LILA could officially pay her only an average of 6 days a month!), such as an internal board game for a big French non-profit (“Petits Frères des Pauvres“) helping penniless people, a marketing video for the PeerTube Free Software project, and pin designs for the Free Software Foundation. She also taught some university courses on digital painting and retouching with GIMP.

Note that she would also prefer to work only on ZeMarmot full time, but once again… we need your help for this!

The Future

This is how I see our hopeful future: in a few years, we will have enough to pay several artists (director, animators, background artists, music…) and developers. LILA will therefore be a small but finally a bit more productive studio.

And of course it also means more developers, hence more control over our Free Software pipeline. I have so many dreams: finally a better non-linear video editor (be it by contributing to the Blender VSE, Kdenlive, or whichever other editor we decide to use), stable, powerful yet not convoluted? Finally a dedicated Free Software compositing and effects tool for professionals (2018 was a bit sad there, right)? Finally more communication between all the tools so that we can just edit our XCF in GIMP and see the changes live in Blender? So many hopes! So many things I wish we had the funds to do!

Help the dream come true

So what can you do? Well, you can help the studio increase its funds to first finally be able to just survive. The fact I got hired by CNRS is very cool, but at the same time so sad, because it means that our project was not self-sufficient enough. Somehow I had to accept the CNRS contract to save our project. Let it not be the end!

What if we reached 5000€ a month in 2019? This would be a huge milestone for us, and proof that our dream is viable.

Will you help us create a non-profit Libre Animation Studio? Professional 2D graphics with Free Software is just there, at our door. It only takes a little help from everyone interested to get it through the entrance! 🙂

» Fund in Patreon (USD $) «
» Fund in Tipeee (EUR €) «
» Fund in Liberapay «
» Other donation methods (including wire transfer or Paypal) «

Have fun during the end-of-year holidays, everyone, and a happy new year!

December 26, 2018

I fell in love with GAction

A few weeks ago, while working on Fractal, I rediscovered something I had completely forgotten about: GAction. I don’t remember how I came across it, but it was definitely love at second sight, because for the next few days I didn’t do much other than refactor Fractal and replace code with Actions wherever I could.

Of course, if you like something a lot you have to be careful to use it only where it’s appropriate. For me the rule of thumb was basically: if it’s a response to user input, like clicking on a button, then it should be a GAction.

Now, how do you use GAction? Since GAction is only an interface, we need a class implementing it. The simplest class we can use is of course GSimpleAction, as the name already implies. Since I’m writing most of my code in Rust these days, all examples will be in Rust.

// Imports assumed from the gio crate of the gtk-rs bindings
use gio::prelude::*;
use gio::{SimpleAction, SimpleActionGroup};

// We have to create a SimpleActionGroup
let actions = SimpleActionGroup::new();
// Currently we can't use any Variant type we want, we are limited to the most basic ones like strings, numbers, and bools
let reply = SimpleAction::new("reply", glib::VariantTy::new("s").ok());
// And then we have to add it to the ActionGroup
actions.add_action(&reply);
// The last thing we have to do is to connect to the activate signal
reply.connect_activate(move |_action, data| {
    // Do whatever you want in here, e.g. open a file dialog;
    // `data` is the Variant the action was activated with
});

The above code shows how an action is created. Let’s see how you can use it.

The first thing we need to do is add the created ActionGroup to a widget. The action will then be available to all children of that widget, so it’s best to add it to a container, like a Box, which can have children.

// Create a Box (note: `box` is a reserved keyword in Rust,
// so the variable needs a different name)
let container = gtk::Box::new(gtk::Orientation::Horizontal, 0);
// Add the action group we created before to the widget
container.insert_action_group("something", &actions);

Now the actions are available to all children of the box with the prefix “something”. In our example, “something.reply” would be the name of the action. Given a GtkButton, you can set the action name and the target value of the widget, because it implements GtkActionable. The code basically looks like this:

// Create a new GTK button
let button: gtk::Button = gtk::Button::new();
// We need to use a Variant to store the string we want to set as the target value
let data = glib::Variant::from("some string");
// Set the action name
button.set_action_name("something.reply");
// We also need to set the target value
// If we don't do this, GLib will complain that we didn't call the action with the correct target and won't execute the action
button.set_action_target_value(&data);
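
For quick testing, you can also trigger the action from code through the ActionGroup API (a small sketch; note that activate_action takes the bare action name, without the group prefix):

// Activate "reply" directly on the group, with the same target value
actions.activate_action("reply", Some(&data));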

You could also set these properties via the .ui file. Replacing click event handlers with actions like this can really streamline your code base.

I showed the basic usage of GAction in the above examples, but you should definitely have a look at the GAction documentation to see all the potential it has. Also, GPropertyAction is awesome because it allows you to control GObject properties directly via Actions. Sadly, the Rust bindings for it are not in a stable release yet, but they were added to master a couple of months ago.
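
To give an idea of why GPropertyAction is so nice, here is a minimal sketch of what using it could look like once those bindings are available (the widget and names here are purely illustrative):

use gio::prelude::*;

// A stateful action backed directly by a GObject property: activating the
// action toggles the property, with no hand-written state handling at all.
// Illustrative sketch, assuming the PropertyAction bindings from master.
let revealer = gtk::Revealer::new();
let show = gio::PropertyAction::new("show-details", &revealer, "reveal-child");
let actions = gio::SimpleActionGroup::new();
actions.add_action(&show);
// Any GtkActionable pointing at "something.show-details" (assuming the group
// is inserted with the prefix "something") now toggles the revealer.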

December 23, 2018

Seville Fractal Hackfest Report

Last week I was in Seville, Spain for our second Fractal Hackfest this year. This time it was organized by Daniel García Moreno, and held on the University of Seville campus. It was a small event with mostly core developers, focused on driving forward the backend changes and refactoring needed to make our future plans (end-to-end encryption and the app split) possible. We also had some local newcomers join (shoutout to Alejandro Domínguez).

After-hours hacking at the apartment

Backend

The main reason why we wanted to have another hackfest was to drive forward the long-term effort of making the application more modular, in order to add persistent storage and allow for the app split. The work done over the past few months (such as Julian’s room history refactor) has gotten us much closer to that, but there were still some missing pieces and open questions.

During the hackfest we discussed some of these things, and decided, among other things, to split the backend out into a separate crate and move to SQLite for persistent storage. Daniel and Julian already started working on these things, see their blog posts for more details.

GNOME Newcomer Experience

The other focus for the hackfest was discussing an improved GNOME Newcomer experience. Finding the right rooms to join is currently quite confusing, as there is no central directory of all GNOME rooms. The main way people discover rooms seems to be word of mouth or googling for them, which is clearly not great.

Since one of the main goals for Fractal is to provide a more modern alternative to IRC for GNOME developers, we’ve been thinking about how to make it seamless to discover and join GNOME rooms for some time.

In theory Matrix has communities, which would address use cases like this one. However, since the spec for this is not really settled yet, and implementing it would be a lot of work, we’d like to do something simpler for now.

Looking at some relevant art we found Builder’s integration of Newcomer apps on the project screen quite interesting, because it’s very accessible but doesn’t get in the way.

Newcomer apps in Builder

The idea we came up with is combining the newcomer UX with the room directory into a new discovery view, and moving it to a more prominent location in the sidebar. There are still some open questions about how exactly we’d implement this, but it’s an exciting direction to be working towards.

Mockup for the new first run experience with easy access to both the room directory and GNOME rooms

Hacking, Housekeeping, and Best Practices

We did a lot of overdue housekeeping and organizational work, like moving to GNOME/ on GitLab (thanks Carlos!), getting a new policy for code reviews and QA in place, issue triage, fixing bugs, and starting the process of documenting best practices we’d like contributors to make use of.

Call with Johannes about end-to-end encryption. Unfortunately we had to use a university computer running Windows to use the projector :(

New Release!

Lastly, we made a push to fix some of the last outstanding blocker bugs to get a new stable release out the door, which is long overdue (the last release was in September, which is an eternity by our standards). A ton of exciting stuff has landed on master since then: a more legible layout for the message view (using Libhandy’s HdyColumn), smoother message loading, a reorganized header bar with in-window primary menu, and support for large emoji. I’m very excited to finally get these things in the hands of users.

The new release is now almost ready, and will be on Flathub very soon. Since there were a lot of big changes under the hood, there will probably be some exciting new bugs as well, so please file issues on GitLab :)

Thanks to everyone who attended the hackfest, Daniel for organizing, and the University of Seville for hosting us. Thanks also to my employer Purism for sponsoring my travel and accommodation, and the GNOME Foundation, SUGUS, and Plan4D for sponsoring some lunches and dinners. See you next time o/

Fractal hackfest in Seville

Work

7 months after our previous hackfest, we had a new one. Dani organised this one and got us a nice room at the University of Seville.

Sponsored by the GNOME Foundation

Thanks to a sponsorship from the GNOME Foundation, I was able to go and share an apartment with Tobias and Julian. Sadly other existing contributors were not able to attend, but we had the pleasant surprise of having two local students participate. Alejandro even got a couple of commits merged into master during the event. We also got to spend some time with a few local Free Software people.

We did a lot of testing to get a good feel for the current state, and rounded off a few sharp edges with quick fixes. I also spent some time figuring out what still remained to be done before we could release. I proposed some improvements to our review process and the way we share development best practices, and brought home quite a few todo items.

On the last day we had a call with Benpa and Matthew that turned into an episode of Matrix Live. We also moved to the GNOME group on gitlab and yesterday we released 4.0 after a few days of testing and some last minute fixes. It should hit Flathub in a moment.

Not just work

My first impression getting from the airport to the city center was that the landscape reminded me a lot of my trips to California. Same palm trees, large streets… even the weather was reminiscent of what I experienced a few years ago in the month of October in San Francisco. We could go out with only a hoodie on and be fine. The first step out of the plane when I got back to Alsace was quite unpleasant, and so were, to be honest, the next three hours it took me to finally be home. The sight of snow out of my window the next morning kind of felt like nature forcing me to catch up on what I missed during my trip. Good thing I have memories of last week to keep me warm for a while!

December 19, 2018

GUADEC 2018 - Product Management In Open Source

This year at GUADEC in Almería I was lucky enough to give a talk entitled “Product Management in Open Source”. I’ll give a text synopsis of the talk below but if you prefer you can watch the whole thing as delivered at the Internet Archive or have a look at the slides, which are entirely mysterious when viewed alone:

The talk begins like so: I’m Nick Richards. I’ve been a GNOME user for 20 years, and a contributor and Foundation Member for 10 years (off and on). These days, the Free Software project I’m most passionate about is Flathub.

These days I’m a Product Manager at Endless. Endless OS ships a customised, forked version of GNOME shell and a plain version of the rest of the GNOME platform. It’s currently based on 3.26 but with plenty of activity going on upstream.

What is a Product Manager? Christian Knows!

“Product managers are responsible for taking information and feedback from everywhere and converting that into a coherent project direction and forward looking vision. They analyze common issues, bug velocity, and features to ensure reasonable milestones that keeps the product functional as it transforms into it’s more ideal state. Most importantly, this role is leadership.”

It’s often a role you grow into; Product Managers rarely start there. In the past I’ve been a User Advocate, Interaction Designer, User Tester, Localiser and messed about with Development and Operations. Having a diverse background helps you empathise with all the people you have to interact with although most Product Managers are deep experts in one area.

What does a product manager actually do?

  • Integrate all the things. You should know everything about your project, know a bit about everyone’s job and be able to help wherever there’s stress.
  • Maintain Standards everywhere. You don’t have to approve everything and become a bottleneck, but you should be able to keep on top of things and be aware if dubious code is about to be released.

A lot of this often boils down to saying no in an encouraging way. Doesn’t this sound like a free software maintainer? So why should you care? You do this already. Christian, again:

“For the past 3 years I’ve been working very hard because I fulfill a number of these roles for Builder. It’s exhausting and unsustainable. It contributes to burnout and hostile communication by putting too much responsibility on too few people’s shoulders.”

You might want to get Product Management involved so you can spend more time doing the things you joined the project to work on.

Which projects need Product Management? Not everything does, but if you’re working on these kinds of things it can help:

  • Cross-cutting initiatives (GNOME Goals) like the App Developer Experience or Privacy. The kind of areas where you need a strategic vision and co-ordination.
  • “Big” projects like the Shell, Builder, anywhere where really getting close to the details makes a difference or where a large team of regular contributors helps out.

Your project may not yet have the ambition to be “Big”, which is OK. I love constrained scope too and the growth of small ‘single serving’ apps is one of the most interesting recent developments on the Free Desktop.

Product doesn’t work alone; Product is part of a software team. Here’s a quick rundown of some of the roles that you might find on a software team: Developers, Interaction Designers, Quality Assurance, User Support, Graphic Designers, Security Engineers, User and Developer Advocates, User Researchers, Tech Writers, Build Engineers, Operations people, Release Managers, Standards Committee Members, Public Relations, Hackfest Organisers, Localisers, Internationalisers, and more… Very few projects will have all of these, but expand your mind: what could your project do if it had people working in some of these roles?

Product without other people is lonely and pointless. Solving complex problems requires a diverse team and often doesn’t admit simple solutions. A perennial problem for open source is attracting people with skillsets you don’t know about. You can’t just sit around and wait for them to turn up and start working; you need to invite them in by name. Maybe also start by taking their tools seriously. Yes, that probably means GNOME should have a Discourse forum. There’s been some really encouraging recent progress on modernising our infrastructure, but GitLab is a great first step, not the end. The tricky, tricky bit once you’ve got these people involved is making sure they don’t go away. This doesn’t mean ‘do everything they say’ but it does need some open-mindedness. The promise of Free Software is that by having control over the software they run, it will better fit people’s needs. Reducing the barrier between producers and consumers has always been a critical part of this, and lowering it further by welcoming the skills of those who can help can only make our software stronger.

How do you decide what to do? The best way to decide is by focusing on a mix of internal drive and user satisfaction. I never want to underestimate internal drive and product feel: it’s what gives your software its unique flavour and can help create fanatically devoted and delighted users and contributors. But if you want a large number of users, then listening to feedback can be helpful.

Some good sources of feedback are: in person, GitLab (or Bugzilla), IRC, Telegram, press reviews, blogs, forums, user testing. Just passively listening on Bugzilla is kind of unhelpful; you need to look at your user support channels to feel the pain, and then translate that into a good engineering solution.

Endless has metrics which tell us some anonymised things about how people use our software (this is controversial), but they are very useful when used responsibly; for example, they can tell me that Nautilus was the GNOME app that Endless users opened most in the last month. But it’s not clear we can just roll this out. GNOME has strong privacy guarantees and intermediating forces (like Endless!). It’s not normally helpful to have people between yourself and the user if you want to react to their problems and provide solutions. Where possible, disintermediate (this means you should be using Flatpak for your apps).

User research, going out into the field to talk to users and stakeholders, is also really important. It’s great to talk to people who were motivated enough to come to GUADEC, but you get a whole different view by meeting people in their own context. This is something that Endless has been fanatical about, and it has really influenced our product development process; it’s also an area GNOME, with its distributed community, can do well in.

At the end of the day, great products never listen in just one way, and a key role for Product Management is to weigh the feedback from each channel and merge it together into a coherent product direction. That’s the real trick. Good luck!

Summing up:

  • Everyone is making a product - you may not need dedicated management right now but using the techniques can help.

  • If you want new people you have to do something to make it happen.

  • Get a diversity of feedback - but be discerning with how you listen to it.

Flatpak commandline design

Flatpak is made to run desktop apps, and there are apps like KDE Discover or GNOME Software that let you manage the Flatpak installations on your system.

Of course, we still need a way to handle Flatpak from the commandline. The flatpak commandline tool in 1.0 is powerful without being overwhelming (like git) and way friendlier than some other tools (for example, ostree).

But we can always do better. For Flatpak 1.2, we’ve gone back to the drawing board and done some designs for the commandline user experience (yes, that needs design too).

Powerful: Columns

Many Flatpak commands produce information in tabular form. You can list remotes, apps, documents, etc. And all of these have a bunch of information that can be displayed. It can be overwhelming when you only need a particular piece of information (and can overflow the available space in a terminal).

To help with that, we’ve introduced a --columns option that lets you select which information you want to see.

You can explore the available columns using --columns=help. In Flatpak 1.2, all the list-producing commands will support the --columns option: list, search, remotes, remote-ls, ps, history.

One nice side-effect of having --columns is that we can add more information to the output of these commands without overflowing the display, by adding optional columns that are not included in the default output.
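
As a sketch of what this looks like in practice (the column names here are only an example; --columns=help shows what is actually available):

flatpak list --columns=application,version,origin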

Concise: Errors

It happens all the time: I want to type search, but my c key is sticky, and it comes out as searh. Flatpak used to react to an unknown command by dumping out its --help output, which is long and a bit overwhelming.

Now we try to be more concise and more helpful at the same time, by making a guess at what was meant, and just pointing out the --help option.

Friendly: Search

In the same vein, the reverse-DNS style application ID that Flatpak relies on has been criticized as unwieldy and hard to handle. Nobody wants to type

flatpak install flathub org.gnome.Meld

and commandline completion does not help too much if you have no idea what the application ID might be.

Thankfully, that is no longer necessary. With Flatpak 1.2, you can type

flatpak install meld

and Flatpak will ask you a few questions to confirm which remote to use and what exact application you meant. Much friendlier. This search also works for the uninstall command, and may be added to more commands in the future.

Informative: Descriptions

Flatpak repos contain appstream data describing the apps in detail. That is what e.g. GNOME Software uses on its detail page for an application.

So far, the Flatpak commandline has not used appstream data at all. But that is changing. In 1.2, a number of commands will show useful information from appstream data, such as the description shown here by the list and info commands:

If you pay close attention you may notice that column names can be abbreviated with --columns.

Fun: Prompts

Here’s the commandline version of theming. We now set a custom prompt to let you know what context you are in when using a shell in a Flatpak sandbox.

You can customize the prompt using

flatpak override --env=PS1="..."

The application ID for the sandbox is available in the FLATPAK_ID environment variable.
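
For example, to put the application ID into the prompt of one particular app’s sandbox, something like this should work (illustrative; the single quotes make sure the variable is expanded by the shell inside the sandbox, not by your host shell):

flatpak override --env=PS1='[$FLATPAK_ID] \w$ ' org.gnome.Builder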

The beast: Progress

Updates and installs can take a long time – there are possibly big downloads and there may be dependencies that need to be updated as well, etc. For a good user experience it is important to provide some feedback on the progress and the expected remaining time.

The OSTree library which Flatpak uses for downloads provides progress information, but it is very detailed and hard to digest. To make this even more challenging, there is not that much space in a terminal window to display all the relevant information.

For 1.2, we’ve come up with a combination of a table that gets updated to display overall status and a progress line for the current operation:

A similar, but simpler layout is used for uninstalls.

Coming Soon

All of these improvements will appear in Flatpak 1.2, hopefully very soon after the New Year. Something to look forward to.  📦📦📦

December 18, 2018

Fedora Handbook 2018 Released

I finally finished the 2018 edition of the Fedora Handbook (aka Fedora Workstation Beginner’s Guide). Just a recap of what the handbook is about: it’s a printed handbook that should give enough information to get a user from “knowing nothing about Fedora” to first steps in the system. It’s used as a giveaway at conferences and other events.

The original handbook was written in Czech in 2015, and the English version released last year introduced only cosmetic changes, so even though the handbook contains pretty generic info and is not specific to any Fedora release, there were quite a lot of changes needed.

I’d especially like to thank Petr Bokoč who suggested a lot of improvements, implemented some of them, and did the proofreading.

The 2017 edition was only translated into Czech and Spanish. I’d like to see more translations this time. If you’d like to translate it into your language, just go ahead: fork the repo, create a directory named after your language code in the “2018” branch, copy the English content into it, and start translating. Once you’re done, open a PR. Please stay on the stable 2018 branch and don’t translate master. Master is subject to change and shouldn’t be translated.

We provide a script to automatically generate DocBook, HTML, and PDF files. But ideally the outcome should be a quality PDF that is then printed, and that’s not an automatic process. I’m taking care of the English and Czech versions, but other languages would need volunteers to do the typesetting.

We also have a new cover created by Tereza Hlaváčková under the supervision of Máirín Duffy.


December 16, 2018

Fractal December'18 Hackfest (part 2)

Friday the 14th was the last day of the second Fractal Hackfest. I didn’t spend much time writing real code; Thursday was mainly another hacking day and I was able to continue with the fractal-backend creation, but there’s a lot of work to do there.

But the hackfest was really productive: we talked about big issues, project management, some design ideas, new functionality, the application refactor, etc.

GNOME newcomers experience

We talked about how to improve the GNOME newcomers experience and how to improve the main view of Fractal. I think Tobias will talk more about this; he was working on some cool designs for it, and I think we can start to implement these new views soon.

Best practices

We’ve been developing Fractal in a fast way, without spending a lot of time thinking about code quality, maintainability and that kind of thing. Recently we set up the rustfmt linter in the CI pipeline and we have some tests, but the merge request process wasn’t defined and, for example, I was pushing directly to master.

To improve the quality of Fractal we’ve started a new wiki page with a list of best practices to follow. There we’ll add some guidance on how we should write code and the processes to follow to improve Fractal.

We’ve decided that we should be more strict with merge request code review, and direct pushes to master are no longer allowed; all changes will go through the review process. We should wait at least two days before merging something and have at least two people approve the change.

This will slow down the MR process, but it will improve code quality and reduce regressions. Any help is welcome: if you’re able to test an MR, you can leave a comment and other reviewers will have more confidence in the change.

We’re also working on code quality using the cargo clippy tool. There’s a merge request waiting for review, so we’ll have better Rust source code soon.

Fractal is now in the GNOME group

The Fractal project was in the World group on the GNOME GitLab. Fractal is a GNOME application, the most active developers are GNOME developers, and we try to follow the GNOME Human Interface Guidelines.

Fractal is one of the first new applications born during the GitLab migration, so we went through a new process. At first Fractal was in my personal GitLab namespace under /danigm/fractal, then we moved to the World group under /World/fractal, and finally we’re in the main GNOME group at /GNOME/fractal.

New release 4.0.0

The last release was 3.30.0; more than three months have passed since then and we have a lot of changes, so we want to provide a new stable release.

We discussed a bit which version numbering we should follow, and we decided to use our own version numbering system, because we want to release as often as possible, whenever we have important changes.

We’re working on the 4.0.0 release, stabilizing and fixing important bugs before the release, and maybe we can have the new release out during the next week.

So I spent the last day looking for bugs and preparing the new release.

Matrix Live

We had a meeting with the people from Matrix.org to talk about Fractal and the hackfest. You can view the full interview on YouTube:

Friday sponsored lunch

We had a sponsored lunch on Friday the 14th; the local group Plan4D invited us to a great lunch in the city center.

After that, I went back home and left the other people in Seville doing some tourism; I had to go back to Málaga.

The hackfest was great and we got a lot of things done. This was the second Fractal hackfest in 2018, and we met at GUADEC too. We’ve had two GSoC students, and now we have another intern thanks to the Outreachy program. There are a lot of people contributing to Fractal, and that’s great.

GNOME is a great community and I think that Matrix.org and Rust are helping this project a lot. The Matrix.org people are supporting us; indeed, Matthew came to the first hackfest. And I think that the success of Fractal and the community behind it owes a lot to the Rust language and to the people working on the Rust + GNOME integration; Gtk-rs is a great project.

I want to apologize for the network problems during the hackfest. We worked all days thanks to Julian’s network sharing, with eduroam, because we weren’t able to get guest access at the university.

The university has strict internet connection filtering, so only a professor can request a guest connection for events, and we made the request too late.

December 15, 2018

What Debian Does For Me

I woke up early this morning, and those of you who live above the 45th parallel north or so are used to the “I'm wide awake but it's still dark as night” feeling in winter. I usually don't turn on the lights; I wander into my office and just bring my computer out of hibernate. That takes a bit, as my 100% Free-Software-only computer is old and slow, so I usually go make coffee while that happens.

As I came back into my office this morning I was a bit struck by both displays showing the huge Debian screen lock image, and it got me thinking of how Debian has been my companion for so many years. I spoke about this a bit at DebConf 15, and wrote about a similar concept years before. I realize that it's been almost nine years that I've been thinking rather deeply about my personal relationship with Debian and why it matters.

This morning, I was inspired to post this because, echoing back to my thoughts in my DebConf 15 talk, I can't actually do the work I do without Debian. I thought this morning about a few simple things that Debian gets done for me that are essential:

  • Licensing assurance. I really can trust that Debian will not put something in main that fails to respect my software freedom. Given my lifelong work on Free Software licensing, yes, I can vet a codebase to search for hidden proprietary software among the Free, but it's so convenient to have another group of people gladly do that job for me and other users.
  • Curated and configured software, with a connection to the expert. Some days it seems none of the new generation of developers are fans of software packaging anymore. Anytime you want to run something new these days, someone is trying to convince you to download some docker image or something like that. It's not that I don't see the value in that, but what I usually want is the software I just read about installed on my machine as quickly as possible. Debian's repository is huge, and the setup of Debian as a project allows each package maintainer to work in relative independence to make the software of their interest run correctly as part of the whole. For the user, that means when I hear about some interesting software, Debian immediately connects me, via apt, with the individual expert who knows about that software and my operating system / distribution both. Apt, Debian's Bug Tracker, etc. are actually a rudimentary but very usable form of social networking that allows me to find the person who did the job to get this software actually working on my system. That's a professional community that's amazing.
  • Stability. It's rather amusing: all the Debian developers I know run testing on their laptops and stable only on their servers. I run stable on my laptop. I have a hectic schedule and always lots of work to do that, sadly, does not usually include “making my personal infrastructure setup do new things”. While I enjoy that sort of work, it's a rabbit hole that I rarely have the luxury to enter. Running Debian stable on my laptop means I am (almost) never surprised by any behavior of my equipment. In the last nine years, if my computer does something weird, it's basically always a hardware problem.

Sure, maybe you can get the last two mostly with other distributions, but I don't think you can get the first one anywhere better. Anyway, I've gotta get to work for the day, but those of you out there who make Debian happen: perhaps you'll see a bit of a thank you from me today. While I've thanked you all before, I think that no one does it enough.

December 14, 2018

Firmware Attestation

When fwupd writes firmware to devices, it often writes it, then does a verify pass. This is to read back the firmware to check that it was written correctly. For some devices we can do one better, and read the firmware hash and compare it against a previously cached value, or match it against the version published by the LVFS. This means we can detect some unintentional corruption or malicious firmware running on devices, on the assumption that the bad firmware isn’t just faking the requested checksum. Still, better than nothing.

Any processor better than the most basic PIC or Arduino (e.g. even a tiny $5 ARM core) is capable of doing public/private key firmware signing. This would use standard crypto with X.509 keys or GPG to ensure the device only runs signed firmware. This protects against both accidental bitflips and also naughty behaviour, and is the unofficial industry-recommended practice for firmware updates. Older generations of the Logitech Unifying hardware were unsigned, and this made the MouseJack hack almost trivial to deploy on an unmodified dongle. Newer Unifying hardware requires a firmware image signed by Logitech, which makes deploying unofficial or modified firmware almost impossible.

There is a snag with UEFI capsule updates, which is how you probably applied your last “BIOS” firmware update. Although the firmware capsule is signed by the OEM or ODM, we can’t reliably read the SPI EEPROM from userspace. It’s fair to say flashrom does work on some older hardware but it also likes disabling keyboard controllers and making the machine reboot when probing hardware. We can get a hash of the firmware, or rather, a hash derived from the firmware image with other firmware-related things added for good measure. This is helpfully stored in the TPM chip, which most modern laptops have installed.

Although the SecureBoot process cares about the higher PCR values to check all manner of userspace things, we only care about index zero of this register, the so-called PCR0. If you change your firmware, for any reason, the PCR0 will change. There is one PCR0 checksum (or a number slightly higher than one, for reasons) on all hardware of a given SKU. If you somehow turn off the requirement for the hardware signing key on your machine (e.g. due to a newly found security issue), or your firmware is flashed using another method than UpdateCapsule (e.g. DediProg), then you can basically flash anything. This would be unlikely, but really bad.

If we include the PCR0 in the vendor-supplied firmware.metainfo.xml file, or set it in the admin console of the LVFS then we can verify that the firmware we’re running right now is the firmware the ODM or OEM uploaded. This means you can have firmware 100% verified, where you’re sure that the firmware version that was uploaded by the vendor is running on your machine right now. This is good.
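
To make the idea concrete, the check boils down to something like this sketch (purely illustrative; how the measured value is read from the TPM is platform-specific, and fwupd would do all of this internally):

// Compare the vendor-published PCR0 (from firmware.metainfo.xml or the LVFS)
// with the PCR0 measured on this machine; both values are placeholders here.
fn firmware_attested(vendor_pcr0: &str, measured_pcr0: &str) -> bool {
    // Hex digests, compared case-insensitively
    vendor_pcr0.eq_ignore_ascii_case(measured_pcr0)
}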

As an incentive for vendors to support signing, there will soon be an easy-to-understand shield system on the LVFS. A wooden shield means the firmware was uploaded to the LVFS by the OEM or an authorized ODM on behalf of the OEM. A plain metal shield means the above, plus the firmware is signed using strong encryption. A crested shield means the vendor is trusted, the firmware is signed, and we can do secure attestation and be sure the firmware hasn’t been tampered with.

Obviously some protocols can’t get either the last, or the last two shield types (e.g. ColorHug, even symmetric crypto isn’t good) but that’s okay. It’s still more secure than flashing a random binary from an FTP site, which is what most people were doing before. Not upstream yet, and not quite finished, so comments welcome.

And I’m home

It’s almost the end of the year, so it’s time for a recap of the previous episodes, I guess.

The tl;dr of 2018: started pretty much the same; massive dip in the middle; and, finally, got better at the very end.

The first couple of months of the year were pretty good; had a good time at the GTK hackfest and FOSDEM, and went to the Recipes hackfest in Yogyakarta in February.

In March, my wife Marta was diagnosed with breast cancer; Marta had already had (different types of) cancer twice in her life, and had been in full remission for a couple of years, which meant she was able to cope with the mechanics of the process, but it was still a solid blow. Since she had already gone through a round of radiotherapy 20 years ago (which likely had a hand in the cancer appearing now), her only option was surgery to remove the whole breast tissue and the associated lymph nodes. Not fun, but the surgery went well, and she didn’t even need chemotherapy, so all in all it could have been way, way worse.

While Marta and I were dealing with that, I suddenly found myself out of a job, after working five years at Endless.

To be fair, this left me with enough time to help out Marta while she was recovering, which is why I didn’t come to GUADEC. After Marta was back on her feet, and was able to raise her right arm above her head, I took my first vacation in, I think, about four years. I relaxed, read a bunch of books, played some video games, built many, many, many Gundam plastic models, recharged my batteries, and ended up finally having time to spend on a project that I had pushed back for a while, one that requires writing and producing 15 to 20 minutes of audio every week, after perusing thousands of email archives and old web pages on the Wayback Machine. Side note: donate to the Wayback Machine, if you can. They provide a fundamental service for everybody using the Web, and especially for people like me who want to trace the history of things that happen on the Web.

Of course I couldn’t stay home playing video games, recording podcasts, and building gunplas forever, and so I had to figure out where to go to work next, as I do enjoy being able to have a roof above my head, as well as buying food and stuff. By a crazy random happenstance, the GNOME Foundation announced that, thanks to a generous anonymous donation, it would start hiring staff, and that one of the open positions was for a GTK developer. I decided to apply, as, let’s be honest, it’s basically the dream job for me. I’ve been contributing to GNOME components for about 15 years, and to GTK for 12; and while I’ve been paid to contribute to some GNOME-related projects over the years, it was always as part of non-GNOME related work.

The hiring process was really thorough, but in the end I managed to land the most amazing job I could possibly hope for.

If you’re wondering what I’ll be working on, here’s a rough list:

  • improving performance, especially on less powerful devices
  • identifying and landing new features
  • identifying and fixing pain points for current consumers of GTK

On top of that, I’ll try to do my best to increase the awareness of the work being done on both the GTK 3.x stable branch, and the 4.x development branch, so expect more content appearing on the development blog.

The overall idea is to ensure that GTK gets more exposure and mindshare over the next 5 years as the main toolkit for Linux and Unix-like operating systems, as well as better functionality for application developers who want to make sure their projects work on other platforms.

Finally, we want to make sure that more people feel confident enough to contribute to the core application development platform; if you have your pet feature or your pet bug inside GTK, and you want guidance, feel free to reach out to me.


Hopefully, the next year will not look like this one, and will be a bit better. Of course, if we in the UK don’t all die in the fiery chaos that is the Brexit circus…

The tools of libfprint

libfprint, the fingerprint reader driver library, is nearing a 1.0 release.

Since the last time I reported on the status of the library, we've made some headway modernising the library, using a variety of different tools. Let's go through them and how they were used.

Callcatcher

When libfprint was in its infancy, Daniel Drake found that the NBIS fingerprint processing library matched what was required to provide fingerprint matching algorithms, and imported it into libfprint. Since then, the code in this copy-pasted library stayed the same. When updating it to the latest available version (from 2015 rather than 2007), as well as splitting off a patch to make it easier to update the library again in the future, I used Callcatcher to cull the unused functions.

Callcatcher is not a "production-level" tool (too many false positives, lack of support for many common architectures, etc.), but coupled with manual checking, it allowed us to greatly reduce the number of functions in our copy, so they weren't reported when using other source code quality checking tools.

LLVM's scan-build

This is a particularly easy one to use as it is integrated into meson, and available through ninja scan-build. The output of the tool, whether on stderr or on the HTML pages, is pretty similar to Coverity’s, but the tool is free, and easily integrated into a CI (once you’ve fixed all the bugs, obviously). We found plenty of possible memory leaks and uninitialised variables using this, with more flexibility than using Coverity’s web interface, and without jumping through the hoops of its “source code check as a service” model.
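
For reference, with a meson build directory called _build, the invocation is simply (the directory name is illustrative):

ninja -C _build scan-build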

cflow and callgraph

LLVM has another tool, called callgraph. It’s not yet integrated into meson, which made it a bit of a problem to get some output out of it. But combined with cflow, we used it to find where certain functions were called and to trace the origin of some variables (whether they were internal or device-provided, for example), which helped with implementing additional guards and assertions in some parts of the library, in particular inside the NBIS sub-directory.

0.99.0 is out

We're not yet completely done with the first pass at modernising libfprint and its ecosystem, but we released an early Yule present with version 0.99.0. It will be integrated into Fedora after the holidays if the early testing goes according to plan.

We also expect a great deal from our internal driver API reference. If you have a fingerprint reader that's unsupported, contact your laptop manufacturer about them providing a Linux driver for it and point them at this documentation.

A number of laptop vendors are already asking their OEM manufacturers to provide drivers to be merged upstream, but a little nudge probably won't hurt.

Happy holidays to you all, and see you for some more interesting features in the new year.