July 11, 2020

Introducing Minuit

It all started with a sketch on paper (beware of my poor penmanship).

Sketch of a UI

I was thinking about how I could build a musical instrument in software. Nothing new here, actually; there are even applications like VMPK that fit the bill. But this is also about learning, as I have recently become interested in music software.

This is how Minuit was born. The name comes from a play on words: in French, minuit is midnight while midi is midday. MIDI, of course, is the acronym for Musical Instrument Digital Interface, the technology at the heart of computer-aided music. Minuit also sounds a bit like minuet, a dance of social origin.

I have several goals here:

  1. learn about MIDI: The application can be controlled using MIDI.
  2. learn about audio: Of course you have to output audio, so now is the best time to learn about it.
  3. learn about music synthesis: for now I use existing software to do so. The first instrument was ripped off Qwertone, to produce a simple tone, and the second is a Rhodes toy piano using soundfonts.
  4. learn about audio plugins: the best way to bring new instruments is to use existing plugins, and there is a large selection of libre ones.

Of course, I will use Rust for that, and that means I will have to deal with the gaps found in the Rust ecosystem, notably by interfacing libraries that are meant to be used from C or C++.

I also had to create a custom widget for the piano input, and for this I basically rewrote in Rust a widget I found in libgtkmusic that was written in Vala. That allowed me to refine my tutorial on subclassing Gtk widgets in Rust.

To add to the learning, I decided to do everything in Builder instead of my usual Emacs + Terminal combo, and I have to say it is awesome! It is good to start using tools you are not used to (ch, ch, changes!).

In the end, the first working version looked like this:

Basic UI for Minuit showing the piano widget and instrument selection

But I didn't stop there. After some weeks off doing other things, I resumed. Since I had a soundfont player, I was wondering if I could turn this into a generic soundfont player. So I got to this:

Phase 1 of UI change for fluidlite

Then I iterated on the idea, learning about banks and presets for soundfonts, which come straight from MIDI, and made the UI look like this:

Phase 2 of the UI change for fluidlite

Non configurable instruments have a placeholder:

UI for generic instrument placeholder

There is no release yet, but it is getting close to the MVP. I am still missing an icon.

My focus going forward will be:

  1. focus on user experience: make this a versatile musical instrument, for which the technology is a means, not an end.
  2. focus on versatility by bringing more tones. I'd like to avoid free-form plugins and instead integrate plugins into the app, while leveraging the plugin architecture whenever possible.
  3. explore other ideas in musical creativity.

The repository is hosted on GNOME gitlab.

Flatpak extensions

Linux Audio

When I started packaging music applications in Flatpak, I was confronted with the issue of audio plugins: a lot of music software implements effects, software instruments and more as plugins, colloquially called VSTs. When packaging Ardour, it became clear that supporting these plugins was a necessity, as Ardour includes very little in terms of instruments and effects.

On Linux there are five different plugin formats. LADSPA, DSSI and LV2 are open, while VST2 and VST3 are proprietary standards created by Steinberg; the latter are very popular on non-libre desktops and applications, and fortunately, with somewhat open implementations available, they exist on Linux in open-source form. These five audio plugin formats work in whichever applications can host them. In general, LV2 is preferred.

Now the problem is that "one doesn't simply drop a binary in a flatpak". I'm sure there are some tricks to install them, since a lot of plugins are standalone, but in general it's not sanctioned. So I came up with a proposal and an implementation to support and build Linux Audio plugins in Flatpak. I'll skip the details for now, as I'm working on a comprehensive guide, but the result is that several audio applications now support plugins in flatpak, and a good number of plugins are available on Flathub.

The music applications that support plugins in Flatpak are Muse3 (not to be confused with MuseScore), LMMS, Hydrogen, Ardour (this is not supported by the upstream project), Mixxx and gsequencer. Audacity is still pending. There are also a few video editors — kdenlive, Flowblade and Shotcut — that support LADSPA audio effects and can now use the packaged plugins.

Sadly, there doesn't seem to be a way to find the plugins on the Flathub website, nor in GNOME Software (as found in Fedora). So to find the available plugins, you have to use the command line:

$ flatpak search LinuxAudio

GIMP

Along the same lines, GIMP was lacking plugins, and GMic has been the most requested one. So I took similar steps and submitted plugin support for GIMP as a flatpak. This was much less complicated, as we don't have the problem of multiple apps. Then I packaged a few plugins, GMic being the most complex (it requires building Qt5). Now GIMP users have a few important third-party plugins available, including LensFun, Resynthesizer, Liquid Rescale and BIMP.

Thanks to all the flathub contributors and reviewers for the feedback, to the Ardour team for creating such an amazing application, and to Jehan for his patience as the GIMP flatpak maintainer.

Live loudness normalization in GStreamer & experiences with porting a C audio filter to Rust

A few months ago I wrote a new GStreamer plugin: an audio filter for live loudness normalization and automatic gain control.

The plugin can be found in the audiofx plugin of the GStreamer Rust plugins. It's also included in the recent 0.6.0 release of the GStreamer Rust plugins and available from crates.io.

Its code is based on Kyle Swanson’s great FFmpeg filter af_loudnorm, about which he wrote some more technical details on his blog a few years back. I’m not going to repeat all that here, if you’re interested in those details and further links please read Kyle’s blog post.

From a very high level, the filter works by measuring the loudness of the input following the EBU R128 standard with a 3s lookahead, adjusting the gain to reach the target loudness and then applying a true peak limiter with 10ms lookahead to prevent any overly high peaks from being passed through. Both the target loudness and the maximum peak can be configured via the loudness-target and max-true-peak properties, same as in the FFmpeg filter. Unlike the FFmpeg filter, I only implemented the “live” mode and not the two-pass mode implemented in FFmpeg, which first measures the loudness of the whole stream and then adjusts it in a second pass.
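As a rough illustration of the gain-adjustment step (this is not the plugin's actual code, and the EBU R128 loudness measurement is assumed to come from elsewhere), loudness in LUFS maps to a linear gain factor logarithmically:

```rust
// Sketch only: convert a loudness difference in LUFS/dB into a linear
// amplitude gain, as any loudness normalizer must do internally.
// `target_lufs` mirrors the loudness-target property mentioned above.
fn linear_gain(measured_lufs: f64, target_lufs: f64) -> f64 {
    // dB (and LU) are logarithmic: +20 dB corresponds to 10x amplitude.
    10f64.powf((target_lufs - measured_lufs) / 20.0)
}

fn main() {
    // A stream measured at -16 LUFS with a -24 LUFS target must be attenuated:
    // an 8 LU reduction is a factor of 10^(-8/20) ≈ 0.398.
    let g = linear_gain(-16.0, -24.0);
    assert!((g - 0.3981).abs() < 0.001);
    println!("gain = {:.4}", g);
}
```

The real filter additionally smooths the gain over time and limits true peaks, but the core mapping from loudness to gain is this simple relationship.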

Below I’ll describe the usage of the filter in GStreamer a bit and also some information about the development process, and the porting of the C code to Rust.

Usage

For using the filter you most likely first need to compile it yourself, unless you’re lucky enough that e.g. your Linux distribution includes it already.

Compiling it requires a Rust toolchain and GStreamer 1.8 or newer. The former you can get via rustup, for example, if you don't have it yet; the latter either from your Linux distribution or by using the macOS, Windows, etc. binaries provided by the GStreamer project. Once that is done, compiling is mostly a matter of running cargo build in the audio/audiofx directory and copying the resulting libgstrsaudiofx.so (or .dll or .dylib) into one of the GStreamer plugin directories, for example ~/.local/share/gstreamer-1.0/plugins.

After that boring part is done, you can use it for example as follows to run loudness normalization on the Sintel trailer:

gst-launch-1.0 playbin \
    uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm \
    audio-filter="audioresample ! rsaudioloudnorm ! audioresample ! capsfilter caps=audio/x-raw,rate=48000"

As can be seen above, it is necessary to put audioresample elements around the filter. The reason is that the filter currently only works on 192kHz input. This is a simplification for now, to make it easier to detect true peaks inside the filter. You would first upsample your audio to 192kHz and then, if needed, downsample it again to your target sample rate (48kHz in the example above). See the link mentioned before for details about true peaks and why this is generally a good idea. In the future the resampling could be implemented internally, and maybe the filter could optionally also work with “normal” peak detection on the non-upsampled input.

Apart from that caveat the filter element works like any other GStreamer audio filter and can be placed accordingly in any GStreamer pipeline.

If you run into any problems using the code or it doesn’t work well for your use-case, please create an issue in the GStreamer bugtracker.

The process

As I wrote above, the GStreamer plugin is part of the GStreamer Rust plugins so the first step was to port the FFmpeg C code to Rust. I expected that to be the biggest part of the work, but as writing Rust is simply so much more enjoyable than writing C and I would have to adjust big parts of the code to fit the GStreamer infrastructure anyway, I took this approach nonetheless. The alternative of working based on the C code and writing the plugin in C didn’t seem very appealing to me. In the end, as usual when developing in Rust, this also allowed me to be more confident about the robustness of the result and probably reduced the amount of time spent debugging. Surprisingly, the translation was actually not the biggest part of the work, but instead I had to debug a couple of issues that were already present in the original FFmpeg code and find solutions for them. But more on that later.

The first step for porting the code was to get an implementation of the EBU R128 loudness analysis. In FFmpeg they’re using a fork of the libebur128 C library. I checked if there was anything similar for Rust already, maybe even a pure-Rust implementation of it, but couldn’t find anything. As I didn’t want to write one myself or port the code of the libebur128 C library to Rust, I wrote safe Rust bindings for that library instead. The end result of that can be found on crates.io as an independent crate, in case someone else also needs it for other purposes at some point. The crate also includes the code of the C library, making it as easy as possible to build and include into other projects.

The next step was to actually port the FFmpeg C code to Rust. In the end that was a rather straightforward translation fortunately. The latest version of that code can be found here.

The biggest difference from the C code is the usage of Rust iterators and iterator combinators like zip and chunks_exact. In my opinion this makes the code quite a bit easier to read than the manual iteration and array indexing in the C code, and as a side effect it should also make the code run faster in Rust, as it allows getting rid of a lot of array bounds checks.
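As an illustration of that style (this is not code from the filter itself), here is how interleaved audio frames can be processed with chunks_exact and zip instead of manual indexing:

```rust
// Apply a per-frame gain to interleaved stereo samples. Each chunk of 2
// samples is one stereo frame; zip pairs it with the matching gain value.
// No index arithmetic, and the compiler can elide bounds checks.
fn apply_gains(samples: &mut [f64], gains: &[f64]) {
    for (frame, gain) in samples.chunks_exact_mut(2).zip(gains.iter()) {
        for s in frame.iter_mut() {
            *s *= gain;
        }
    }
}

fn main() {
    let mut samples = vec![1.0, -1.0, 0.5, -0.5];
    apply_gains(&mut samples, &[0.5, 2.0]);
    // First frame halved, second frame doubled.
    assert_eq!(samples, vec![0.5, -0.5, 1.0, -1.0]);
}
```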

Apart from that, one part that was a bit inconvenient during that translation and still required manual array indexing is the usage of ringbuffers everywhere in the code. For now I wrote those like I would in C and used a few unsafe operations like get_unchecked to avoid redundant bounds checks, but at a later time I might refactor this into a proper ringbuffer abstraction for such audio processing use-cases. It’s not going to be the last time I need such a data structure. A short search on crates.io gave various results for ringbuffers but none of them seem to provide an API that fits the use-case here. Once that’s abstracted away into a nice data structure, I believe the Rust code of this filter is really nice to read and follow.
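A minimal sketch of what such a ringbuffer abstraction could look like (this is a hypothetical API, not taken from the plugin, which currently does this inline with manual indexing):

```rust
// A fixed-size ringbuffer acting as a delay line: pushing a new sample
// evicts and returns the oldest one, which is the access pattern a
// lookahead-based audio filter needs.
struct RingBuffer {
    buf: Vec<f64>,
    pos: usize,
}

impl RingBuffer {
    fn new(len: usize) -> Self {
        RingBuffer { buf: vec![0.0; len], pos: 0 }
    }

    /// Overwrite the oldest sample with `sample` and return the old value.
    fn push(&mut self, sample: f64) -> f64 {
        let oldest = std::mem::replace(&mut self.buf[self.pos], sample);
        self.pos = (self.pos + 1) % self.buf.len();
        oldest
    }
}

fn main() {
    let mut rb = RingBuffer::new(2);
    // While filling up, the evicted values are the initial zeros.
    assert_eq!(rb.push(1.0), 0.0);
    assert_eq!(rb.push(2.0), 0.0);
    // Buffer full: pushing now evicts the oldest real sample.
    assert_eq!(rb.push(3.0), 1.0);
}
```

Wrapping the index bookkeeping in one place like this is what would let the rest of the filter code drop its remaining unsafe get_unchecked calls.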

Now to the less pleasant parts, and also a small warning to all the people asking for Rust rewrites of everything: of course I introduced a couple of new bugs while translating the code although this was a rather straightforward translation and I tried to be very careful. I’m sure there is also still a bug or two left that I didn’t find while debugging. So always keep in mind that rewriting a project will also involve adding new bugs that didn’t exist in the original code. Or maybe you’re just a better programmer than me and don’t make such mistakes.

Debugging these issues that showed up while testing the code was a good opportunity to also add extensive code comments everywhere so I don’t have to remind myself every time again what this block of code is doing exactly, and it’s something I was missing a bit from the FFmpeg code (it doesn’t have a single comment currently). While writing those comments and explaining the code to myself, I found the majority of these bugs that I introduced and as a side-effect I now have documentation for my future self or other readers of the code.

Fortunately, fixing the issues I introduced myself wasn't that time-consuming in the end either, but while writing those code comments and doing more testing on various audio streams, I found a couple of bugs that already existed in the original FFmpeg C code. Further testing showed that they caused quite audible distortions on various test streams. These are the bugs that unfortunately took most of the time in the whole process, but at least to my knowledge there are no known bugs left in the code now.

For one of these bugs in the FFmpeg code I provided a fix that is already merged, and I reported the other two in their bug tracker.

The first one I’d be happy to provide a fix for if my approach is considered correct, but the second one I’ll leave for someone else. Porting over my Rust solution for that one will take some time and getting all the array indexing involved correct in C would require some serious focusing, for which I currently don’t have the time.

Or maybe my solutions to these problems are actually wrong, or my understanding of the original code was wrong and I actually introduced them in my translation, which also would be useful to know.

Overall, while porting the C code to Rust introduced a few new problems that had to be fixed, I would definitely do this again for similar projects in the future. It's more fun to write, and in my opinion the resulting code is easier to read and better to maintain and extend.

This week in GNOME Builder #2

This week we fixed some specific topics which were planned for the previous cycle. If anyone wants to contribute to some of our “Builder wishlist”, go here: Builder/ThreePointThirtyfive. Last time I had forgotten to mention the great work of our translation team, which contributed various translations to Builder. Thank you! New Features: for several releases now, Builder has allowed debugging applications with gdb. However, it was not possible to interact with gdb directly.

July 10, 2020

Implementing Gtk based Container-Widget: Part — 2


Working on a single row implementation

Some background

This write-up is in continuation of its previous part — setting up basic container functionality.

In the past couple of weeks, we moved on from just adding children to actually repositioning them (the child widgets of the container, NewWidget) when enough space is not available for all widgets to fit in the given width. Though the grid structure is yet to be put in place, the widget can already be seen taking shape (see the gif below).

Work so far

In the last part of this blog, we set up the basic container functionality, which involved overriding some virtual functions in order to display some widgets on the screen. In this part, we cover how to handle a single row of widgets and how to reposition the widgets (as determined by some properties/variables) when the container's width is not enough to fit all the widgets in one horizontal line.

Note: For now, we are trying to work with only one row of widgets.

We introduced two new properties for children of the NewWidget container, namely weight and position. Using these properties, we develop a way to determine which widgets need to be repositioned and in what order the repositioning should happen.

In the above gif, you can see how the higher-weighted widgets are positioned below the lower-weighted widgets when we shrink the window's width.

This repositioning can be altered by changing the weights assigned to the widgets, which changes the behaviour as shown below.

Weights: GtkComboBoxText = 0, GtkEntry = 1, GtkImage = 0
Weights: GtkComboBoxText = 0, GtkEntry = 1, GtkImage = 1

Implementation detail

In the previous blog post, the basic methods were already introduced, and the good thing is that no new functions were needed during the work of these two weeks.

Over the past weeks our main focus was on implementing two functions, namely measure and size_allocate. The former is crucial, and the latter is the heart of our widget. The entire allocation logic goes inside the size_allocate function, which is where we put our magical code to reposition the widgets whenever necessary.

The measure function

The measure happens in the size-requisition phase where a parent calls on child widgets asking for their preferred sizes.

  • The first point of interest is that the measure function performs size calculation for the following cases:
    1. preferred_height:
    this is pretty straightforward; we add up the preferred heights of all the widgets using gtk_widget_get_preferred_height.
    2. preferred_width:
    this again is similar to the above; the only difference is that here we use gtk_widget_get_preferred_width to get the widths of the widgets.
    3. preferred_height_for_width:
    in this case, a widget is supposed to return its height requirement for the given width value.
    4. preferred_width_for_height:
    similar to the third case, a widget is supposed to return its width requirement for the given height value.
  • All four cases have their related virtual function in the GtkWidget class which we already overrode in our basic implementation.
  • Of the four, two cases concern us here: case 2 and case 3.

Measure — Preferred width

  • It is implemented as follows:
    - Group widgets by their weights
    - Sum preferred sizes of widgets belonging to the same group
    - Take the maximum sum value from the groups

Measure — Preferred height for width

  • Its implementation goes as follows:
    - Group the widgets by their weights
    - Starting from the group with the lowest weight, fit maximum possible groups into one line width (a row has multiple lines)
    - At the end of the above process, every group should be assigned a line to go into
    - For each line, bring widgets to their natural sizes using the following function gtk_distribute_natural_allocation()
    - For each line, line-height is equal to the height of the tallest widget in the line
    - Lastly, the preferred height is sum of heights of all the lines
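The line-fitting step above can be sketched roughly as follows (illustrative pseudo-layout code, not the actual widget implementation; widths here are arbitrary numbers standing in for summed preferred widths):

```rust
// Pack weight-groups into lines of a given width. Groups arrive sorted by
// ascending weight, and a group never splits across lines. Returns, per
// line, the indices of the groups assigned to it.
fn fit_groups_into_lines(group_widths: &[i32], line_width: i32) -> Vec<Vec<usize>> {
    let mut lines: Vec<Vec<usize>> = vec![Vec::new()];
    let mut used = 0;
    for (idx, &w) in group_widths.iter().enumerate() {
        // Start a new line when this group no longer fits,
        // but never leave a line empty (an oversized group still gets a line).
        if used + w > line_width && !lines.last().unwrap().is_empty() {
            lines.push(Vec::new());
            used = 0;
        }
        lines.last_mut().unwrap().push(idx);
        used += w;
    }
    lines
}

fn main() {
    // Groups with summed preferred widths 40, 30 and 50 in a width of 80:
    // the first two share a line, the third overflows to a second line.
    let lines = fit_groups_into_lines(&[40, 30, 50], 80);
    assert_eq!(lines, vec![vec![0, 1], vec![2]]);
}
```

Summing the tallest widget of each resulting line then gives the preferred height for the requested width.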

The allocation function

Now let’s talk about the actual positioning of child widgets in the container space. The logic inside the size_allocate function is our special ingredient for the NewWidget.

Goals

  • If a row can’t fit its widgets in the container’s width, split the row into multiple lines
  • Widgets are grouped according to their weights
  • A line can have multiple groups
  • All widgets of a group should be in the same line at all times
  • The original order of children (when all placed in one row) should be maintained all the time.
  • Also, in a line, the relative order of the children should be maintained.

Approach

  • Sort the list of children based on their weights.
  • Get the preferred widths for the widgets, using gtk_widget_get_preferred_width().
  • Starting from the group with the lowest weight, fit maximum possible groups into one line width (a row has multiple lines)
  • At the end of the above step, every group should be assigned a line to go into
  • For each line, bring widgets to their natural sizes using the following function gtk_distribute_natural_allocation() and then distribute extra space among all widgets
  • Next, iterate over each line and sort the widgets of each line based on their position value.
  • Lastly, for each child do the following:
    Get the preferred height for the width of the widget
    Adjust allocation for this child widget
    Allocate space to the child widget

And this is how we achieved what is shown here :).

An Appointment Up the Hill

Hey everyone, it has been a while since I wrote my last post, but during this period I've been working on and tackling problems in implementing the authentication functionality. The user can now successfully enter their EteSync account in Evolution and see their data (address books, calendars and tasks) \o/.

What have I been doing during this period?

First, if you want to skip to the results and see the module in action, click here 😀

In my last post I showed screenshots of contacts appearing in Evolution, and explained that the .source file was created manually and that the credentials were hard-coded for retrieving a specific journal from a specific EteSync account.

After finishing this, I extended the code so that I can also retrieve calendars and tasks in the same manner, which was quite easy as I already understood what should be done. Then I created an etesync-backend file, which generally handles the user's collection account in Evolution (retrieving/creating/deleting journals, which are address-book or calendar .source files).

The next step was to make the user enter their credentials, so they aren't hard-coded. At this stage I faced some implementation issues and asked my mentors for help. One of the problems was that I needed to create a new dialog that asks the user for their credentials and retrieves the data from EteSync, which I initially had trouble implementing. Other issues appeared while integrating, and I had to change some pieces.

So just to sum up what have I been doing during this period:

  1. Extended the reading functionality to calendars and tasks.
  2. Added an etesync-backend file which handles creating/deleting/retrieving journals.
  3. Added a lookup file for looking up your account before adding it.
  4. Added a credential prompter for taking in the user's Encryption Password.
  5. Integrated the collection account authentication so that it isn't hard-coded any more.
  6. Also did the same with the address-book and calendar back-ends.

Module in action

First you need to add your account in Evolution.

  • Open Evolution and then press the little down arrow next to “New” on the top-left.
  • Then choose “Collection Account”.
This is the collection account lookup window.
Click on “Advanced Options” to enter your local server.
After looking up your account, press Next to add it.
This window will appear right after you add your account, asking you to enter the Encryption Password
(the username and password will be preloaded as entered before).
  • OK, so at this point you should have entered your correct credentials; Evolution will now load your journals so you can see your data.
  • Then I made some changes in my contacts journal “Default”, went back to Evolution and refreshed the address book; it loaded the changes.

So as you can see, great progress has been made, but there is still more to do before EteSync and Evolution users can fully use the module, without issues, for reading and writing their data.

What Else?

  1. While the reading functionality is working, it may need some small fixes for some cases, and testing.
  2. Preparing the module to be easily used by users for testing (some adjustments in the CMakeLists files).
  3. Adding the writing functionality for calendars, tasks and address-books.
  4. Adding the ability to create or delete journals from Evolution.
  5. Small tweaks to the UI of the authentication dialog.

These are the things that I can think of now, during the progress other things might appear along the way. So, stay tuned for more posts in the future, sharing with you my progress throughout the journey 😀

July 09, 2020

Easily speed up CI by reducing download size

Every time a CI pipeline runs on GitLab, it downloads the git repository for your project. Often, pipeline jobs are set up to make further downloads (of dependencies or subprojects), which are also run on each job.

Assuming that you’ve built a Docker image containing all your dependencies, to minimise how often they’re re-downloaded (you really should do this, it speeds up CI a lot), you can make further improvements by:

  1. Limiting the clone depth of your repository in the GitLab settings: Settings → CI/CD, and change it to use a ‘git shallow clone’ of depth 1.
  2. Adding --branch, --no-tags and --depth 1 arguments to every git clone call you make during a CI job. Here’s an example for GLib.
  3. Adding depth=1 to your Meson .wrap files to achieve the same thing when (for example) meson subprojects download is called. See the same example merge request.
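For point 3, a wrap file with a depth field looks something like this (the URL and revision here are illustrative, not taken from the merge request mentioned above):

```ini
[wrap-git]
directory = glib
url = https://gitlab.gnome.org/GNOME/glib.git
revision = main
depth = 1
```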

For GLib, the difference between git clone https://gitlab.gnome.org/GNOME/glib.git and git clone --depth 1 https://gitlab.gnome.org/GNOME/glib.git is 66MB (reducing from 74MB to 8MB), or a factor of 10. It won't be as much for younger or smaller projects, but it's still worthwhile.

Organizational Proliferation Is Not the Problem You Think It Is

[ This blog post was cross-posted from the blog at Software Freedom Conservancy where I work. ]

I've been concerned this week about the aggressive negative reaction (by some) to the formation of an additional organization to serve the Free and Open Source Software (FOSS) community. Thus it seems like a good moment to remind everyone why we all benefit when we welcome newcomer organizations in FOSS.

I've been involved in helping found many different organizations — in roles as varied as co-founder, founding Board member, consultant, spin-off partner, and “just a friend giving advice”. Most of these organizations fill a variety of roles; they support, house, fiscally sponsor, or handle legal issues and/or trademark, copyright, or patent matters for FOSS projects. I and my colleagues at Conservancy speak regularly about why we believe a 501(c)(3) charitable structure in the USA has huge advantages, and you can find plenty of blog posts on our site about that. But you can also find us talking about how 501(c)(6) structures, and other structures outside the USA entirely, are often the right choices — depending on what a FOSS project seeks from its organization. Conservancy also makes our policies, agreements, and processes fully public so that organizations can reuse our work, and many have.

Meanwhile, FOSS organizations must avoid the classic “not invented here” anti-pattern. Of course I believe that Conservancy has great ideas for how to help FOSS, and that our work — such as fiscal sponsorship, GPL enforcement, and the Outreachy internship program — is among the highest priorities in FOSS. I also believe the projects we take under our auspices are the most important projects in FOSS today.

But not everyone agrees with me, nor should they. Our Executive Director, Karen Sandler, loves the aphorism “let a thousand flowers bloom”. For example, when we learned of the launch of Open Collective, we at Conservancy were understandably concerned that since they were primarily a 501(c)(6) and didn't follow the kinds of fiscal sponsorship models and rules that we preferred, that somehow it was a “threat” to Conservancy. But that reaction is one of fear, selfishness, and insecurity. Once we analyzed what the Open Collective folks were up to, we realized that they were an excellent option for a lot of the projects that were simply not a good fit for Conservancy and our model. Conservancy is deeply steeped in a long-term focus on software freedom for the general public, and some projects — particularly those that are primarily in service to companies rather than individual users (or who don't want the oversight a charity requires) — just don't belong with us. We regularly refer projects to Open Collective.

For many larger projects, Linux Foundation — as a 501(c)(6) controlled completely by large technology companies — is also a great option. We've often referred Conservancy applicants there, too. We do that even while we criticize Linux Foundation for choosing proprietary software for many tasks, including proprietary software they write from scratch for their outward-facing project services.

Of course, I'm thinking about all this today because Conservancy has been asked what we think about the Open Usage Commons. The fact is they're just getting started, and both the legal details of how they're handling trademarks and their governance documents haven't been released yet. We should all give them an opportunity to slowly publish more, and review it when it comes along. We should judge them fairly as an alternative for fulfilling FOSS project needs that no one else addresses (or, more commonly, that are being addressed very differently by existing organizations). I'm going to hypothesize that, like Linux Foundation, Open Usage Commons will primarily be of interest to more for-profit-company-focused projects, but that's my own speculation; none of us know yet.

No one is denying that Open Usage Commons is tied to Google as part of their founding — in the same way that Linux Foundation's founding (which was originally founded as the “Open Source Development Labs”) was closely tied to IBM at the time. As near as I can tell, IBM's influence over Linux Foundation is these days no more than any other of their Platinum Members. It's not uncommon for a trade association to jumpstart with a key corporate member and eventually grow to be governed by a wider group of companies. But while appropriately run trade associations do balance the needs of all for-profit companies in their industry, they are decidedly not neutral; they are chartered to favor business needs over the needs of the general public. I encourage skepticism when you hear an organization claim “neutrality”. Since a trade association is narrowed to serving businesses, it can be neutral among the interests of business, but their mandate remains putting business needs above community. The ultimate proof of neutrality pudding is in the eating. As with multi-copyright held GPL'd projects, we can trust the equal rights for all in those — regardless of the corporate form of the contributors — because the document of legal rights makes it so. The same principle applies to any area of FOSS endeavor: examine the agreements and written rules for contributors and users to test neutrality.

Finally, there are plenty of issues where software freedom activists should criticize Google. Just today, I was sent a Google Docs link for a non-FOSS volunteer thing I'm doing, and I groaned knowing that I'd have to install a bunch of proprietary Javascript just to be able to participate. Often, software freedom activists assume that bad actions by an entity means all actions are de-facto problematic. But we must judge each policy move on its own merits to avoid pointless partisanship.

2020-07-08 Thursday

  • Mail chew; interested to see Open Usage announced for holding and managing FLOSS trademarks in a light-weight way. If it can reduce the galloping bureaucracy and the risk of in-fighting that can come with formal governance structures, as well as avoiding the extraordinary overheads of formal entities, that sounds rather positive. Just having the pleasant, collegial engineering relationships in a project without the overhead would be great. Then again, I guess SFC, SPI, Public Software and others already provide nice containers for projects with varying degrees of flexibility; let's see what happens.

libhandy-rs v0.6.0 is out!

crates.io

Recently I kind of took over the maintainership of libhandy-rs, the Rust bindings of libhandy. Since then I have been preparing a new release so that Rust & GTK app developers can update to the latest gtk-rs release as soon as possible. I also depend heavily on it in my various little apps.

The latest release, which targets libhandy-0.0 v0.0.13, features various improvements:

  • Proper documentation support (note that the docs are built from the main branch)
  • Builders support
  • Generate more missing bindings
  • Starting from this release, libhandy-rs has a libhandy::init() function, which should be called right after gtk::init()

The next release of libhandy-rs will target libhandy-1 and will hopefully add the last missing bits for complete Rust bindings of libhandy.

A big thanks to the GTK-rs community, I learned so much stuff by contributing :)

July 08, 2020

2020-07-08 Wednesday

  • A morning of marketing bits; catch up with Paolo, admin and community mail reading.

URI parsing and building in GLib

Marc-André Lureau has landed GUri support in GLib, and it’ll be available in GLib 2.65.1 (due out in the next few days).

GUri is a new API for parsing and building URIs, roughly equivalent to SoupURI already provided by libsoup — but since URIs are so pervasive, and used even if you’re not actually doing HTTP conversations, it makes sense to have a structured representation for them in GLib.

To parse a URI, use g_uri_parse() or g_uri_split():

g_autoptr(GError) local_error = NULL;
const gchar *uri_str;
g_autoptr(GUri) uri = NULL;
g_autoptr(GHashTable) query_params = NULL;

uri_str = "https://discourse.gnome.org/search?q=search%20terms#ember372";
uri = g_uri_parse (uri_str,
                   G_URI_FLAGS_PARSE_STRICT |
                   G_URI_FLAGS_ENCODED_QUERY,
                   &local_error);
if (uri == NULL)
  {
    /* Handle the error */
    g_error ("Invalid URI: %s", uri_str);
    return;
  }

g_assert_cmpstr (g_uri_get_scheme (uri), ==, "https");
g_assert_cmpstr (g_uri_get_host (uri), ==, "discourse.gnome.org");
g_assert_cmpstr (g_uri_get_path (uri), ==, "/search");
g_assert_cmpstr (g_uri_get_query (uri), ==, "q=search%20terms");
g_assert_cmpstr (g_uri_get_fragment (uri), ==, "ember372");

/* Parse the params further. Using g_uri_parse_params() requires that we
 * pass G_URI_FLAGS_ENCODED_QUERY to g_uri_parse() above, otherwise the
 * %-encoded values could be decoded to create more separators. */
query_params = g_uri_parse_params (g_uri_get_query (uri), -1,
                                   "&",
                                   G_URI_PARAMS_NONE,
                                   &local_error);
if (query_params == NULL)
  {
    /* Handle the error */
    g_error ("Invalid query: %s", g_uri_get_query (uri));
    return;
  }

g_assert_cmpstr (g_hash_table_lookup (query_params, "q"), ==, "search terms");

Building a URI is a matter of calling g_uri_build() or g_uri_join(), which should be self-explanatory.

Please try it out! The API is unstable until GLib makes its 2.66.0 stable release (freezing on 2020-08-08), so now is the time to comment on things which don’t make sense or are hard to use.

​Chromium now migrated to the new C++ Mojo types

At the end of the last year I wrote a long blog post summarizing the main work I was involved with as part of Igalia’s Chromium team. In it I mentioned that a big chunk of my time was spent working on the migration to the new C++ Mojo types across the entire codebase of Chromium, in the context of the Onion Soup 2.0 project.

For those of you who don’t know what Mojo is about, there is extensive information about it in Chromium’s documentation, but for the sake of this post, let’s simplify things and say that Mojo is a modern replacement for Chromium’s legacy IPC APIs, enabling a better, simpler and more direct way of communication among all of Chromium’s different processes.

One interesting thing about this conversion is that, even though Mojo was already “the new thing” compared to Chromium’s legacy IPC APIs, the original Mojo API presented a few problems that could only be fixed with a newer API. This was the main motivation for this migration, since the new Mojo API fixed those issues by providing less confusing and less error-prone types, as well as additional checks that would force your code to be safer than before, all in a binary-compatible way. Please check out the Mojo Bindings Conversion Cheatsheet for more details on what exactly those conversions involve.

Another interesting aspect of this conversion is that, unfortunately, it wouldn’t be as easy as running a “search & replace” operation, since in most cases deeper changes were needed to make sure that the migration would break neither existing tests nor production code. This is why we often had to write bigger refactorings than one would have anticipated for some of those migrations, and why some patches took a bit longer to land, as they spanned too many directories, making the merging process extra challenging.

Now combine all this with the fact that we were confronted with about 5000 instances of the old types in the Chromium codebase when we started, spanning across nearly every single subdirectory of the project, and you’ll probably understand why this was a massive feat that would take quite some time to tackle.

It turns out, though, that just 6 months after we started working on this, and with more than 1100 patches landed upstream, our team had managed to migrate nearly all the existing uses of the old APIs to the new ones, reaching a point where, by the end of December 2019, we had completed 99.21% of the entire migration! That is, we basically had almost everything migrated back then, and the only part we were missing was the migration of //components/arc, as I already announced in this blog back in December and in the chromium-mojo mailing list.


Progress of migrations to the new Mojo syntax by December 2019

This was good news indeed. But the fact that we didn’t manage to reach 100% was still a bit of a pain point because, as Kentaro Hara mentioned in the chromium-mojo mailing list yesterday, “finishing 100% is very important because refactoring projects that started but didn’t finish leave a lot of tech debt in the code base”. And surely we didn’t want to leave the project unfinished, so we kept collaborating with the Chromium community in order to finish the job.

The main problem with //components/arc was that, as explained in the bug where we tracked that particular subtask, we couldn’t migrate it yet because the external libchrome repository was still relying on the old types! Thus, even though almost nothing else in Chromium was using them at that point, migrating those .mojom files under //components/arc to the new types would basically break libchrome, which wouldn’t have a recent enough version of Mojo to understand them (and no, according to the people collaborating with us on this effort at that particular moment, getting Mojo updated to a new version in libchrome was not really a possibility).

So, in order to fix this situation, we collaborated closely with the people maintaining the libchrome repository (external to Chromium’s repository, and still relying on the old Mojo types) to get the remaining migration, inside //components/arc, unblocked. And after a few months of small changes here and there to provide the libchrome folks with the tools they’d need to proceed with the migration, they could finally integrate the necessary changes that would ultimately allow us to complete the task.

Once this important piece of the puzzle was in place, all that was left was for my colleague Abhijeet to land the CL that would migrate most of //components/arc to the new types (a CL which had been put on hold for about 6 months!), and then to land a few more CLs on top to make sure we got rid of any trace of the old types that might still be in the codebase (special kudos to my colleague Gyuyoung, who wrote most of those final CLs).


Progress of migrations to the new Mojo syntax by July 2020

After all this effort, which would sit on top of all the amazing work that my team had already done in the second half of 2019, we finally reached the point where we are today, when we can proudly and loudly announce that the migration of the old C++ Mojo types to the new ones is finally complete! Please feel free to check out the details on the spreadsheet tracking this effort.

So please join me in celebrating this important milestone for the Chromium project and enjoy the new codebase free of the old Mojo types. It’s been difficult but it definitely pays off to see it completed, something which wouldn’t have been possible without all the people who contributed along the way with comments, patches, reviews and any other type of feedback. Thank you all! 👌

Last, while the main topic of this post is to celebrate the unblocking of these last migrations we had left since December 2019, I’d like to finish by acknowledging the work of all my colleagues from Igalia who worked along with me on this task since we started, one year ago. That is: Abhijeet, Antonio, Gyuyoung, Henrique, Julie and Shin.

Now if you’ll excuse me, we need to get back to working on the Onion Soup 2.0 project, because we’re not done yet: at the moment we’re mostly focused on converting remote calls using Chromium’s legacy IPC to Mojo (see the status report by Dave Tapuska) and on helping finish Onion Soup’ing the remaining directories under //content/renderer (see the status report by Kentaro Hara), so there’s no time to waste. But those migrations will be material for another post, of course.

The Surrealist Clock of JavaScript

It’s been a long time since I last blogged. In the interim I started a new job at Igalia as a JavaScript Engine Developer on the compilers team, and attended FOSDEM in Brussels several million years ago in early February back when “getting on a plane and traveling to a different country” was still a reasonable thing to do.

In this blog post I would like to present Temporal, a proposal to add modern and comprehensive handling of dates and times to the JavaScript language. This has been the project I’m working on at Igalia, as sponsored by Bloomberg. I’ve been working on it for the last 6 months, joining several of my coworkers in a cross-company group of talented people who have already been working on it for several years.

Sculpture of one of Salvador Dalí's melting pocket watches draped over a tree branch.
This is the kind of timekeeping you get with the old JavaScript Date… (Public domain photograph by Julo)

I already collaborated on a blog post about Temporal, “Dates and Times in JavaScript”, so I won’t repeat all that here, but all the explanation you really need is that Temporal is a modern replacement for the Date object in JavaScript, which is terrible. You may also want to read “Fixing JavaScript Date”, a two-part series providing further background, by Maggie Pint, one of the originators of Temporal.

How Temporal can be useful in GNOME

I’m aware that this blog is mostly read by the GNOME community. That’s why in this blog post I want to talk especially about how a large piece of desktop software like GNOME is affected by JavaScript Date being so terrible.

Of course most improvements to the JavaScript language are driven by the needs of the web.1 But a few months ago this merge request caught my eye, fixing a bug that made the date displayed in GNOME wrong by a full 1,900 years! The difference between Date.getYear() not doing what you expect (and Date.getFullYear() doing it instead) is one of the really awful parts of JavaScript Date. In this case if there had been a better API without evil traps, the mistake might not have been made in the first place, and it wouldn’t have come down to a last-minute code freeze break.
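
The trap is easy to reproduce in plain JavaScript (a standalone illustration, unrelated to the actual GNOME Shell code):

```javascript
// Construct a fixed date (months are 0-based, so 6 = July).
const d = new Date(2020, 6, 11);

// getYear() is a legacy method that returns the year minus 1900:
console.log(d.getYear());     // 120
console.log(d.getFullYear()); // 2020

// The 1,900-year discrepancy in a nutshell:
console.log(1900 + d.getYear() === d.getFullYear()); // true
```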

In the group working on the Temporal proposal we are seeking feedback from people who are willing to try out the Temporal API, so that we can find out if there are any parts that don’t meet people’s needs and change them before we try to move the proposal to Stage 3 of the TC39 process. Since I think GNOME Shell and GNOME Weather, and possibly other apps, might benefit from using this API when it becomes part of JavaScript in the future, I’d be interested in finding out what we in the GNOME community need from the Temporal API.

It seems to me the best way to do this would be to make a port of GNOME Shell and/or GNOME Weather to the experimental Temporal API, and see what issues come up. Unfortunately, it would defeat the purpose for me to do this myself, since I am already overly familiar with Temporal and by now its shortcomings are squarely in my blind spot! So instead I’ll offer my help and guidance to anyone who wants to try this out. Please get in touch with me if you are interested.

How to try it out

Since Temporal is of course not yet a built-in object in JavaScript, to try it out we will need to import a polyfill. We have published a polyfill which is experimental only, for the purpose of trying out the API and integrating it with existing code. Here’s a link to the API documentation.

The polyfill is primarily published as an NPM library, but we can get it to work with GJS quite easily. Here’s how I did it.

First I cloned the tc39/proposal-temporal repo, and ran npm install and npm run build in it. This generates a file called polyfill/script.js which you can copy into your code, into a place in your imports path so that the importer can find it. Then you can import Temporal:

const {Temporal} = imports.temporal.temporal;

Note that the API is not stable, so only use this to try out the API and give feedback! Don’t actually include it in your code. We have every intention of changing the API, maybe even drastically, based on feedback that we receive.

Once you have tried it out, the easiest way to tell us about your findings is to complete the survey, but do also open an issue in the bug tracker if you have something specific.

Intl, or how to stop doing _("%B %-d %Y")

While I was browsing through GNOME Shell bug reports to find ones related to JavaScript Date, I found several such as gnome-shell#2293 where the translated format strings lag behind the release while translators figure out how to translate cryptic strings such as "%B %-d %Y" for their locales. By doing our own translations, we are actually creating the conditions to receive these kinds of bug reports in the first place. Translations for these kinds of formats that respect the formatting rules for each locale are already built into JavaScript engines nowadays, in the Intl API via libicu, and we could take advantage of these translations to take some pressure off of our translators.
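
As a quick standalone illustration (plain JavaScript; the option names come from the standard Intl.DateTimeFormat API), instead of asking translators to adapt a pattern, you describe which fields you want and let Intl supply each locale's ordering and month names:

```javascript
// Instead of translating "%B %-d %Y" per locale, describe the fields:
const date = new Date(2020, 6, 11); // July 11, 2020
const options = {year: 'numeric', month: 'long', day: 'numeric'};

console.log(new Intl.DateTimeFormat('en-US', options).format(date));
// "July 11, 2020"

// Grammatical subtleties come for free via libicu; for instance, in a
// full date Greek uses the genitive form of the month name.
console.log(new Intl.DateTimeFormat('el', options).format(date));
```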

In fact, we could do this right now already, no need to wait for the Temporal proposal to be adopted into JavaScript and subsequently make it into GJS. We already have everything we need in GNOME 3.36. With Intl, the function I linked above would become:

_updateTitle() {
    const locale = getCachedLocale();
    const timeSpanDay = GLib.TIME_SPAN_DAY / 1000;
    const now = new Date();
    const rtf = new Intl.RelativeTimeFormat(locale, {numeric: 'auto'});

    if (this._startDate <= now && now <= this._endDate)
        this._title.text = rtf.format(0, 'day');
    else if (this._endDate < now && now - this._endDate < timeSpanDay)
        this._title.text = rtf.format(-1, 'day');
    else if (this._startDate > now && this._startDate - now < timeSpanDay)
        this._title.text = rtf.format(1, 'day');
    else if (this._startDate.getFullYear() === now.getFullYear())
        this._title.text = this._startDate.toLocaleString(locale, {month: 'long', day: 'numeric'});
    else
        this._title.text = this._startDate.toLocaleString(locale, {year: 'numeric', month: 'long', day: 'numeric'});
}

(Note: this presumes a function getCachedLocale() which determines the correct locale for Intl by looking at the LC_TIME, LC_ALL, etc. environment variables. If GNOME apps wanted to move to Intl generally, I think it might be worth adding such a function to GJS’s Gettext module.)

Whereas in the future with Temporal, it would be even simpler and clearer, and I couldn’t resist rewriting that method! We wouldn’t need to store a start Date at 00:00 and an end Date at 23:59.999, which is really just a workaround for the fact that we are talking here about a date without a time component, that is, a pure calendar day. Temporal covers this use case out of the box:

_updateTitle() {
    const locale = getCachedLocale();
    const today = Temporal.now.date();

    let {days} = today.difference(this._date);
    if (days <= 1) {
        const rtf = new Intl.RelativeTimeFormat(locale, {numeric: 'auto'});
        // Note: if this negation seems a bit unwieldy, be aware that we are
        // considering revising the API to allow negative-valued durations
        days = Temporal.Date.compare(today, this._date) < 0 ? days : -days;
        this._title.text = rtf.format(days, 'day');
    } else {
        const options = {month: 'long', day: 'numeric'};
        if (today.year !== this._date.year)
            options.year = 'numeric';

        this._title.text = this._date.toLocaleString(locale, options);
    }
}

Calendar systems

One exciting thing about Temporal is that it will support non-Gregorian calendars. If you are a GNOME user or developer who uses a non-Gregorian calendar, or develops code for users who do, then please get in touch with me! In the group of people developing Temporal everyone uses the Gregorian calendar, so we have a knowledge gap about what users of other calendars need. We’d like to try to close this gap by talking to people.

A Final Note

In the past months I’ve not been much in the mood to write blog posts. My mind has been occupied worrying about the health of my family, friends, and myself; feeling fury and shame at the inequalities of our society that, frankly, the pandemic has made harder to fool ourselves into forgetting if it doesn’t affect us directly; and fury at our governments that perpetuate these problems and resist meaningful attempts at reform.

With all that’s going on in the world, blogging about technical achievements feels a bit ridiculous and inconsequential, but, well, I’m writing this, and you’re reading this, and here we are. So keep in mind there are other important things too. Be safe, be kind, but don’t forget to stay furious after the dust settles.


[1] One motivation for why some are eagerly awaiting Temporal as part of the JavaScript language, as opposed to a library, is that it would be built-in to the browser. The most popular library for fixing the deficiencies of Date, moment.js, can mean an extra download of 20–100 kb, depending on whether you include all locales and support for time zones. This adds up to quite a lot of wasted data if you are downloading this on a large number of the websites you visit, but this specifically doesn’t affect GNOME. ↩

July 07, 2020

What if? Revision control systems did not have merge

A fun design exercise is to take an established system or process and introduce some major change into it, such as adding a completely new constraint. Then take this new state of things, run with it and see what happens. In this case let's see how one might design a revision control system where merging is prohibited. Or, formulated in a slightly different way:
What if merging is to revision control systems as multiple inheritance is to software design?

What is merging used for?

First we need to understand what merging is used for, so that we can develop some sort of system that achieves the same results via some other mechanism. There are many reasons to use merges, but the most popular ones include the following.

An isolated workspace for big changes

Most changes are simple and consist of only one commit. Sometimes, however, it is necessary to make big changes with intermediate steps, such as doing major refactoring operations. These are almost always done in a branch and then brought into trunk. This is especially convenient if multiple people work on the change.

Trunk commits are always clean

Bringing big changes in via merges means that trunk is always clean and buildable. More importantly, bisection works reliably, since all commits in trunk are known good. This is typically enforced via a gating CI. This allows big changes to have intermediate steps that are useful but broken in some way, so they would not pass CI. This is not common, but happens often enough to be useful.

An alternative to merging is squashing the branch into a single commit. This is suboptimal, as it destroys information, breaking for example git blame-style functionality, since all changes end up pointing to a single commit made by a single person (or possibly a bot).

Fix tracking

There are several systems that do automatic tracking of bug fixes to releases. The way this is done is that a fix is written in its own branch. The bug tracking system can then easily see when the fix gets to the various release branches by seeing when the bugfix branch has been merged to them.

A more linear design

In practice many (possibly even most) projects already behave like this. They keep their histories linear by rebasing, squashing and cherry-picking, never merging. This works, but has the downsides mentioned above. If one spends some time thinking about this problem, the fundamental disconnect becomes fairly clear. A "linear" revision control system has only one type of change, the commit, whereas "real world" problems have two different types: logical changes and the individual commits that make up a logical change. This structure is implicit in the graph of merge-based systems, but what if we made it explicit? Thus if we have a commit graph that looks like this:



the linear version could look like this:


The two commits from the right branch have become one logical commit in the flat version. If the revision control system has a native understanding of these kinds of physical and logical commits, all the problematic cases listed above could be made to work transparently. For example, bisection would work by treating each logical commit as a single change. Only after it has proven that the error occurred inside a single logical commit would bisection look inside it.
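
To make the idea concrete, here is a toy sketch (a hypothetical data model, not any real VCS) of two-level bisection: first walk the logical commits, and only then descend into the physical commits of the culprit. The regression is assumed to be monotonic, i.e. once introduced it stays.

```javascript
// Hypothetical history: each logical commit groups one or more physical
// commits. `bad` marks whether the regression is present at that point.
const history = [
  {id: 'L1', physical: [{sha: 'a1', bad: false}]},
  {id: 'L2', physical: [{sha: 'b1', bad: false},
                        {sha: 'b2', bad: true},
                        {sha: 'b3', bad: true}]},
  {id: 'L3', physical: [{sha: 'c1', bad: true}]},
];

const testsBad = (commit) => commit.bad;

// Generic bisect: find the first element of `items` for which `isBad` holds.
function bisect(items, isBad) {
  let lo = 0, hi = items.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (isBad(items[mid])) hi = mid; else lo = mid + 1;
  }
  return items[lo];
}

// Phase 1: bisect over logical commits, treating each as a single change
// (a logical commit tests bad if its final physical commit does).
const culprit = bisect(history, (l) => testsBad(l.physical[l.physical.length - 1]));

// Phase 2: only now look inside the culprit's physical commits.
const firstBad = bisect(culprit.physical, testsBad);
console.log(culprit.id, firstBad.sha); // L2 b2
```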

This, by itself, does not solve fix tracking. As there are no merges, you can't know which branches have which fixes. This can be solved by giving each change (both physical and logical) a logical ID which remains the same over rebase and edit operations, as opposed to the checksum-based commit ID, which changes every time the commit is edited. This changes the tracking question from "which release branches have merged this feature fix branch" to "which release branches have a commit with this given logical ID", which is a fairly simple problem to solve.
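
A sketch of that lookup (again with a hypothetical data model): since each commit carries a stable logical ID that survives rebases and cherry-picks, finding the branches containing a fix is a plain scan, with no merge tracking involved.

```javascript
// Hypothetical branch contents: checksums (sha) differ per branch after
// cherry-picking, but the logical ID stays the same.
const branches = {
  'release-1.0': [{sha: '1a2b', logicalId: 'fix-42'},
                  {sha: '3c4d', logicalId: 'feat-7'}],
  'release-2.0': [{sha: '9e8f', logicalId: 'fix-42'}],
  'trunk':       [{sha: '5d6e', logicalId: 'fix-42'},
                  {sha: '7f80', logicalId: 'feat-7'}],
};

// "Which release branches have a commit with this logical ID?"
function branchesWithFix(logicalId) {
  return Object.keys(branches)
    .filter((name) => branches[name].some((c) => c.logicalId === logicalId))
    .sort();
}

console.log(branchesWithFix('feat-7')); // [ 'release-1.0', 'trunk' ]
```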

This approach is not new. LibreOffice has tooling on top of Git that does roughly the same thing as discussed here. It is implemented as freeform text in commit messages with all the advantages and disadvantages that brings.

One obvious question that comes up is could you have logical commits inside logical commits. This seems like an obvious can of worms. On one hand it would be mathematically symmetrical and all that but on the other hand it has the potential to devolve into full Inception, which you usually want to avoid. You'd probably want to start by prohibiting that and potentially permitting it later once you have more usage experience and user feedback.

Could this actually work?

Maybe. But the real question is probably "could a system like this replace Git", because that is what people are using. This is trickier. A key question would be whether you can automatically convert existing Git repos to the new format with no or minimal loss of history. Simple merges could maybe be converted in this way, but in practice things are a lot more difficult due to things like octopus merges. If the conversion cannot be done, then the expected market share is roughly 0%.

Rebuild of EvanGTGelion: Getting Things GNOME 0.4 released!

We are very proud to be announcing today the 0.4 release of Getting Things GNOME (“GTG”), codenamed “You Are (Not) Done”. This much-awaited release is a major overhaul that brings together many updates and enhancements, including new features, a modernized user interface and updated underlying technology.

Screenshot
Screenshot of GTG 0.4

Beyond what is featured in these summarized release notes below, GTG itself has undergone over 630 changes affecting over 500 files, and received hundreds of bug fixes, as can be seen here and here.

We are thankful to all of our supporters and contributors, past and present, who made GTG 0.4 possible. Check out the “About” dialog for a list of contributors for this release.

A summary of GTG’s development history and a high-level explanation of its renaissance can be seen in this teaser video:

A demonstration video provides a tour of GTG’s current features:

A few words about the significance of this release

As a result of the new lean & agile project direction and contributor workflow we have formalized, this release—the first in over 6.5 years—constitutes a significant milestone in the revival of this community-driven project.

This milestone represents a very significant opportunity to breathe new life into the project, which is why I made sure to completely overhaul the “contributor experience”, clarifying and simplifying the process for new contributors to make an impact.

I would highly encourage everybody to participate towards the next release. You can contribute all sorts of improvements to this project, be it bug fixes or new features, adopting one of the previous plugins, doing translation and localization work, working on documentation, or spreading the word. Your involvement is what makes this project a success.

— Jeff (yours truly)

“When I switched from Linux to macOS a few years ago, I never found a todo app that was as good as GTG, so I started to try every new shiny (and expensive) thing. I used Evernote, Todoist, Things and many others. Nothing came close. I spent the next 6 years trying every new productivity gadget in order to find the perfect combo.
In 2019, Jeff decided to take over GTG and bring it back from the grave. It’s a strange feeling to see your own creation continuing in the hands of others. Living to see your software being developed by others is quite an accomplishment. Jeff’s dedication demonstrated that, with GTG, we created a tool which can become an essential part of chaos warrior’s productivity system. A tool which is useful without being trendy, even years after it was designed. A tool that people still want to use. A tool that they can adapt and modernise. This is something incredible that can only happen with Open Source.”

— Lionel Dricot, original author of GTG (quote edited with permission)

It has been over seven years since the 0.3rd impact. This might very well be the Fourth Impact. Shinji, get in the f%?%$ing build bot!

— Gendo Ikari

Release notes

Technology Upgrades

GTG and libLarch have been fully ported to Python 3, GTK 3, and GObject introspection (PyGI).

User Interface and Frontend Improvements

General UI overhaul

The user interface has been updated to follow the current GNOME Human Interface Guidelines (HIG), style (see GH GTG PR #219 and GH GTG PR #235 for context) and design patterns:

  • Client-side window decorations using the GTK HeaderBar widget. Along with the removal of the menu bars, this saves a significant amount of space and allows for more content to be displayed on screen.
  • The Preferences dialog was redesigned, and its contents cleaned up to remove obsolete settings (see GH GTG PR #227).
  • All windows are properly parented (set as transient) with the main window, so that they can be handled better by window managers.
  • Symbolic icons are available throughout the UI.
  • Improvements to padding and borders are visible throughout the application.

Main window (“Task Browser”)

  • The menu bar has been replaced by a menu button. Non-contextual actions (for example: toggle Sidebar, Plugins, Preferences, Help, and About) have been moved to the main menu button.
  • Searching is now handled through a dedicated Search Bar that can be toggled on and off with the mouse, or the Ctrl+F keyboard shortcut.
  • The “Workview” mode has been renamed to the “Actionable” view. “Open”, “Actionable”, and “Closed” tasks view modes are available (see GH GTG PR #235).
  • An issue with sorting tasks by title in the Task Browser has been fixed: sorting is no longer case-sensitive, and now ignores tag marker characters (GH GTG issue #375).
  • Start/Due/Closed task dates now display as properly translated in the Task Browser (GH GTG issue #357)
  • In the Task Browser’s right-click context menus, more start/due dates choices are available, including common upcoming dates and a custom date picker (GH GTG issue #244).

Task Editor

  • The Calendar date picker pop-up widgets have been improved (see GH GTG PR #230).
  • The Task Editor now attempts to place newly created windows in a more logical way (GH GTG issue #287).
  • The title (first line of a task) has been changed to a neutral black header style, so that it doesn’t look like a hyperlink.

New Features

  • You can now open (or create) a task’s parent task (GH GTG issue #138).
  • You can now select multiple closed tasks and perform bulk actions on them (GH GTG issue #344).
  • It is now possible to rename or delete tags by right-clicking on them in the Task Browser.
  • You can automatically generate and assign tag colors. (LP GTG issue #644993)
  • The Quick Add entry now supports emojis 🤩
  • The Task Editor now provides a searchable “tag picker” widget.
  • The “Task Reaper” allows deleting old closed tasks for increased performance. Previously available as a plugin, it is now a built-in feature available in the Preferences dialog (GH GTG issue #222).
  • The Quick Deferral (previously, the “Do it Tomorrow” plugin) is now a built-in feature. It is now possible to defer multiple tasks at once to common upcoming days or to a custom date (GH GTG issue #244).
  • In the unlikely case where GTG might encounter a problem opening your data file, it will automatically attempt recovery from a previous backup snapshot and let you know about it (LP GTG issue #971651)

Backend and Code Quality improvements

  • Updates were made to overall code quality (GH GTG issue #237) to reduce barriers to contribution:
    • The code has been ported to use GtkApplication, resulting in simpler and more robust UI code overall.
    • GtkBuilder/Glade “.ui” files have been regrouped into one location.
    • Reorganization of various .py files for consistency.
    • The debugging/logging system has been simplified.
    • Various improvements to the test suite.
    • The codebase is mostly PEP8-compliant. We have also relaxed the PEP8 max line length convention to 100 characters for readability, because this is not the nineties anymore.
  • Support is available for Tox, for testing automation within virtualenvs (see GH GTG PR #239).
  • The application’s translatable strings have been reviewed and harmonized, to ensure the entire application is translatable (see GH GTG PR #346).
  • Application CSS has been moved to its own file (see GH GTG PR #229).
  • Outdated plugins and synchronization services have been removed (GH GTG issue #222).
  • GTG now provides an “AppData” (FreeDesktop AppStream metadata) file to properly present itself in distro-agnostic software-centers.
  • The Meson build system is now supported (see GH GTG PR #315).
    • The development version’s launch script now allows running the application with various languages/locales, using the LANG environment variable for example.
    • Appdata and desktop files are named based on the chosen Meson profile (see GH GTG PR #349).
    • Depending on the Meson profile, the HeaderBar style changes dynamically to indicate when the app is run in a dev environment, such as GNOME Builder (GH GTG issue #341).

Documentation Updates

  • The user manual has been rewritten, reorganized, and updated with new images (GH GTG issue #243). It is also now available as an online publication.
  • The contributor documentation has been rewritten to make it easier for developers to get involved and to clarify project contribution guidelines (GH GTG issue #200). Namely, the README.md file was updated to clarify the set-up process for the development version, and numerous new guides and documents for contributors were added in the docs/contributors/ folder.

Infrastructure and other notable updates

  • The entire GTG GNOME wiki site has been updated (GH GTG issue #200): broken links have been fixed and references to the old website have been removed.
  • We have migrated from LaunchPad to GitHub (and eventually GitLab), so references to LaunchPad have been removed.
  • We now have social media accounts on Mastodon and Twitter (GH GTG issue #294).
  • Flatpak packages on Flathub are going to be our official direct upstream-to-user software distribution mechanism (GH GTG issue #233).

Notice

In order to bring this release out the door, some plugins have been disabled and are awaiting adoption by new contributors to test and maintain them. Please consider contributing to maintain your favorite plugin. Likewise, we had to remove the DBus module (and would welcome help to bring it back in better shape, for those who want to control the app via DBus).


Getting and installing GTG 0.4

We hope to have our flatpak package ready in time for this announcement, or shortly afterwards. See the install page for details.


Spreading this announcement

We have made some social postings on Twitter, on Mastodon and on LinkedIn that you can re-share/retweet/boost. Please feel free to link to this announcement on forums and blogs as well!

The post Rebuild of EvanGTGelion: Getting Things GNOME 0.4 released! appeared first on The Open Sourcerer.

July 06, 2020

Meet the GNOMEies: Kristi Progri

With GUADEC two weeks away, this was the perfect time to talk to Program Manager and GUADEC organizer Kristi Progri. To see her amazing work live, register for GUADEC today!

A photo of Kristi Progri. She is wearing a red shirt and fabulous bright red lipstick.
Photo courtesy of Kristi Progri. Licensed CC-BY-NC-ND-SA.

Tell us a little bit more about yourself.

For people who have known me for a long time, I am Kiki. That’s my nickname, which comes from when I was playing basketball and I had four other teammates with the same name.

I was born and grew up in the country with the largest number of bunkers in the world, left over from the communist era. I finished my bachelor studies in International Affairs and Diplomacy, and my Master’s Degree is in ‘Information Systems Security’.

A few years ago I co-founded the Open Source Diversity initiative, and for around five years I was the chairwoman of a local hackerspace in my hometown that promotes all Free & Open Source technologies and data. For many years I was part of the organizing team of the biggest open source conference in Albania.

What is your role within the GNOME community?

I am the Program Coordinator in the GNOME Foundation, where I help organize various events, lead many initiatives within the community including the Engagement Team, and work closely with all the volunteers and contributors. I also coordinate internships and help with general Foundation activities.

Do you have any other affiliations you want to share?

Before joining GNOME, I was very active in the Mozilla community. I have been part of the Tech Speakers program and a Mozilla Representative for more than seven years now. I have organized many events and workshops, and have also participated as a speaker talking about Free Software communities at many events around the globe.

Why did you get involved in GNOME?

I was introduced to Free Software when I was in high school; my friend had a computer running Debian and he started explaining how it worked. This was the first time I heard about it, and I immediately understood that I would never be part of these communities. It looked so complicated and not my cup of tea, but it turns out I was very wrong. Once I went to a hackerspace meeting I completely changed my mind, and from that moment the hackerspace became my second home.

Why are you still involved with GNOME?

Diversity, people, community, sorting out dramas in and outside the community: these are some very important keywords that drive me to love working in such an environment. I am working full time, so GNOME gets a big part of my attention every day, which I am happy to share.

What are you working on right now?

My working desk is full of post-it notes of to do tasks :D

My main thing now is organizing the GUADEC online edition, as well as working on Google Season of Docs, the University Outreach Initiative, other activities and tasks that are part of the Engagement Team, and many other things which I am sure I have missed.

What are you excited about right now – either in GNOME or free and open source software in general?

We are building a new GNOME Community in Africa and spreading our community more in Asia, I am so excited to know what the future will bring us and how big GNOME will get. I feel like we are gaining momentum and I see very motivated people coming and contributing.

What is a major challenge you see for the future of GNOME?

As in many Free Software communities, we have a big challenge with how to get newcomers on board and keep them motivated to continue contributing. We need a well-structured way within the community to guide people toward the tasks where we need contributors and show them the way. Another major challenge I see is how GNOME will adapt to the changes that are occurring in the world due to Covid-19 in terms of events, conferences, and hackfest organization.

What do you think GNOME should focus on next?

Financial sustainability and keeping the shiny growth rate we have right now should be among the most important focuses. As previously mentioned, these are difficult times we are currently living in, making the world a bit unsafe, and this might mean that finding resources and donors will be challenging.

What else should we have asked about that we didn’t? Please answer :)

What’s your favorite physical activity? Weightlifting

Answers edited for length.

User-specific XKB configuration - part 2

This is the continuation from this post.

Several moons have bypassed us [1] in the time since the first post, and Things Have Happened! If you recall (and of course you did because you just re-read the article I so conveniently linked above), libxkbcommon supports an include directive for the rules files and it will load a rules file from $XDG_CONFIG_HOME/xkb/rules/ which is the framework for custom per-user keyboard layouts. Alas, those files are just sitting there, useful but undiscoverable.

To give you a very approximate analogy, the KcCGST components I described last time are the ingredients to a meal (pasta, mince, tomato). The rules file is the machine-readable instruction set to assemble your meal, but it relies a lot on wildcards. Feed it "spaghetti, variant bolognese" and the actual keymap ends up being the various components put together: "pasta(spaghetti)+sauce(tomato)+mince". But for this to work you need to know that spag bol is available in the first place, i.e. you need the menu. This applies to keyboard layouts too - the keyboard configuration panel needs to present a list so the users can clickedy click-click on whatever layout they think is best for them.
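To make the wildcard idea concrete, here is a tiny Python sketch of how a rules lookup might resolve a (layout, variant) pair into components; the rule table and component names are invented for illustration and do not reflect the real evdev ruleset.

```python
# Hypothetical sketch of rules-file resolution: a (layout, variant) pair
# plus wildcard rules expands into a symbols string. The rules and the
# component names below are invented for illustration only.

RULES = [
    # (layout, variant, symbols); "*" is a wildcard matching anything
    ("us", "banana", "pc+us(banana)"),
    ("*",  "*",      "pc+%l(%v)"),  # fallback: substitute layout/variant
]

def resolve(layout, variant):
    """Return the symbols component for a layout/variant pair."""
    for l, v, symbols in RULES:
        if l in ("*", layout) and v in ("*", variant):
            return symbols.replace("%l", layout).replace("%v", variant)
    raise LookupError(f"no rule matches {layout}({variant})")

print(resolve("us", "banana"))  # exact rule matches first
print(resolve("de", "neo"))     # falls through to the wildcard rule
```

The real rules format has more sections (models, options, geometry), but the "first matching rule wins, wildcards fill the gaps" shape is the core idea.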

This menu of possible layouts is provided by the xkeyboard-config project but for historical reasons [2], it is stored as an XML file named after the ruleset: usually /usr/share/X11/xkb/rules/evdev.xml [3]. Configuration utilities parse that file directly which is a bit of an issue when your actual keymap compiler starts supporting other include paths. Your fancy new layout won't show up because everything insists on loading the system layout menu only. This bit is part 2, i.e. this post here.

If there's one thing that the world doesn't have enough of yet, it's low-level C libraries. So I hereby present to you: libxkbregistry. This library has now been merged into the libxkbcommon repository and provides a simple C interface to list all available models, layouts and options for a given ruleset. It sits in the same repository as libxkbcommon - long term this will allow us to better synchronise any changes to XKB handling or data formats as we can guarantee that the behaviour of both components is the same.

Speaking of data formats, we haven't actually changed any of those which means they're about as easy to understand as your local COVID19 restrictions. In the previous post I outlined the example for the KcCGST and rules file, what you need now with libxkbregistry is an XKB-compatible XML file named after your ruleset. Something like this:


$ cat $HOME/.config/xkb/rules/evdev.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xkbConfigRegistry SYSTEM "xkb.dtd">
<xkbConfigRegistry version="1.1">
  <layoutList>
    <layout>
      <configItem>
        <name>us</name>
      </configItem>
      <variantList>
        <variant>
          <configItem>
            <name>banana</name>
            <shortDescription>banana</shortDescription>
            <description>US (Banana)</description>
          </configItem>
        </variant>
      </variantList>
    </layout>
  </layoutList>
  <optionList>
    <group allowMultipleSelection="true">
      <configItem>
        <name>custom</name>
        <description>Custom options</description>
      </configItem>
      <option>
        <configItem>
          <name>custom:foo</name>
          <description>Map Tilde to nothing</description>
        </configItem>
      </option>
      <option>
        <configItem>
          <name>custom:baz</name>
          <description>Map Z to K</description>
        </configItem>
      </option>
    </group>
  </optionList>
</xkbConfigRegistry>
This looks more complicated than it is: we have models (not shown here), layouts which can have multiple variants, and options which are grouped together in option groups (to make options mutually exclusive). libxkbregistry will merge this with the system layouts in what is supposed to be the most obvious merge algorithm. The simple summary is that you can add to existing system layouts but you can't modify them - the above example will add a "banana" variant to the US keyboard layout without modifying "us" itself or any of its other variants. The second part adds two new options based on my previous post.
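As a rough illustration of those merge semantics as described above (a Python sketch of the behaviour, not libxkbregistry's actual code):

```python
# Sketch of the described merge behaviour: user-supplied variants are
# appended to existing system layouts, but the system entries themselves
# are never modified. This illustrates the semantics only; it is not
# libxkbregistry's implementation.

def merge_layouts(system, user):
    """system/user: dict mapping layout name -> set of variant names."""
    merged = {name: set(variants) for name, variants in system.items()}
    for name, variants in user.items():
        # New variants are added; existing ones are left untouched.
        merged.setdefault(name, set()).update(variants)
    return merged

system = {"us": {"intl", "dvorak"}}
user = {"us": {"banana"}}
merged = merge_layouts(system, user)
print(sorted(merged["us"]))  # ['banana', 'dvorak', 'intl']
```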

Now, all that is needed is to change every user of evdev.xml to use libxkbregistry. The gnome-desktop merge request is here for a start.

[1] technically something that goes around something else doesn't bypass it but the earth is flat, the moon is made of cheese, facts don't matter anymore and stop being so pedantic about things already!
[2] it's amazing what you can handwave away with "for historical reasons". Life would be better if there was less history to choose reasons from.
[3] there's also evdev.extras.xml for very niche layouts which is a separate file for historical reasons [2], despite there being a "popularity" XML attribute

July 05, 2020

Initial work on GNOME Gingerblue

I began work on GNOME Gingerblue on July 4th, 2018, two years ago, and I am going to spend the next four years completing it for GNOME 4.

GNOME Gingerblue will be a Free Software program for musicians who want to compose, record, and share original music on the Internet from the GNOME Desktop.

The project isn’t yet ready for distribution with GNOME 3; the GUI and features such as sound recording still have to be implemented.

At the moment I have only released GNOME Gingerblue version 0.1.4 with very few features:

  • Song Files in $HOME/Music/
  • Setup Wizard
  • XML Parsing

The GNOME release team complained about the early release cycle in July and called the project empty, but I estimate it will take at least 4 years to complete 1.0.0, in reasonable time for GNOME 4 to be released between 2020 and 2026.

The Internet community can’t have Free Music without Free Recording Software for GNOME, but GNOME 4 isn’t built in 1 day.

I am trying to get gtk_record_button_new() into GTK+ 4.0.

I hope to work more on the next minor release of GNOME Gingerblue during Christmas 2020 and perhaps get recording working as a new feature in 0.2.0.

Meanwhile you can visit the GNOME Gingerblue project domain www.gingerblue.org with the GNOME wiki page, test the initial GNOME Gingerblue 0.1.4 release that writes Song files in $HOME/Music/ with Wizard GUI and XML parsing from August 2018, or spend money on physical goods such as the Norsk Kombucha GingerBlue soda or the Ngs Ginger Blue 15.6″ laptop bag.

July 03, 2020

GNOME Internet Radio Locator 3.0.1 for Fedora Core 32

GNOME Internet Radio Locator 3.0.1 (Washington)

GNOME Internet Radio Locator 3.0.1 features updated language translations and a new, improved map marker palette, and now also includes radio from Washington, United States of America (WAMU/NPR); London, United Kingdom (BBC World Service); Berlin, Germany (Radio Eins); and Paris, France (France Inter/Info/Culture), as well as 118 other radio stations from around the world, with audio streaming implemented through GStreamer. The project lives on www.gnomeradio.org and Fedora 32 RPM packages for version 3.0.1 of GNOME Internet Radio Locator are now also available:

gnome-internet-radio-locator.spec

gnome-internet-radio-locator-3.0.1-1.fc32.src.rpm

gnome-internet-radio-locator-3.0.1-1.fc32.x86_64.rpm

This week in GNOME Builder #1

Hello! My name is Günther Wagner and I try to give some insights into the current development of GNOME Builder. All these changes can already be tested with GNOME Builder Nightly, so go ahead and give us feedback! This newsletter is called “This week in …” but we probably won’t post every week, so the interval will be a little bit arbitrary. Let’s start! New Features: We included a new code spellchecker plugin leveraging the fantastic codespell-project.

Epiphany GSoC Milestone

During the past month I have been hacking on Epiphany’s Preferences dialog. The first piece of submitted work was splitting the dialog source code files into smaller ones. The split didn’t reflect any visual changes on Epiphany’s user interface so I decided to postpone writing this blog post. Personally I prefer to have some form of visual content in my blog posts 🙂

That leads me to the second piece of submitted work which does include modifications in Epiphany’s interface. If a picture is worth a thousand words, then a gif is worth a million so I’ll use them to illustrate the changes 🙂

This is how the Passwords dialog was invoked before the latest commits:

The main disadvantage with this method was that it would spawn a dialog from within another dialog which should be avoided, as explained in the original Gitlab issue which I used as a reference.

Passwords is now a view nested inside the Preferences dialog and is presented like this:

This approach also has the benefit of being intuitive on mobile and touch devices. When inside the Passwords view the user can swipe back to return to the main Preferences view. Lastly, instead of clicking on the small gear button, the user can now click/tap anywhere on the whole Passwords row inside the Privacy page in order to invoke the view.

A more subtle change is the red Clear all button, which has been moved inside a GtkActionBar at the bottom of the view. The reasoning behind this also concerned touch devices: in the previous layout it could have been very easy to tap the Back button instead of the Clear all button or vice versa. Another benefit is that the Clear all label is a bit more explicit than the trash icon.

Cookies were merged into Personal Data

Epiphany used to have a separate view for clearing Cookies, but they have now been moved into the Clear Personal Data view for the following reasons:

  • The previous Cookies view was slow as it contained a large GtkListBox
  • Cookies are actually a category of stored data

Next up

These have been the most substantial recent changes worth mentioning in this post. Stay tuned for upcoming news regarding the History dialog!

Lastly, a thank you to this year’s project mentors, Michael Catanzaro and Jan-Michael Brummer, and also thanks to Alexander Mikhaylenko for helping out with guidance on how to use libhandy! 🙂

GSoC Progress Update

In my last blog post, I explained how selection mode was implemented in Games. That was one of the first steps to support Collections in Games, as an efficient way to select games to add to or remove from a collection is crucial for managing collections. In this post I’ll be talking about how the “Favorites Collection” will be implemented in GNOME Games.

Collections

The first thing to do was to introduce a Collection interface to define behavior that all types of collections must follow. All collections must have an ID and a title. Apart from that, all collections must provide a way to add and remove games, and on adding or removing a game, emit a “game added” or “game removed” signal respectively. A collection must also implement load(), which, when called, loads the games belonging to that collection from the database. Since there are going to be different types of collections, how a collection has to be loaded might differ from one type to another.

Every collection has its own GameModel and must implement a get_game_model(). A GameModel is a ListModel which stores the list of games in a collection, and get_game_model() returns its GameModel which can be bound to the flowbox of a GamesPage (a widget where games can be displayed with thumbnail and title).

Other than these, all collections must also implement on_game_added(), on_game_removed() and on_game_replaced(). These are unrelated to games being added to or removed from a collection; they have to do with games being discovered, and with games no longer being available to the app. When a game is discovered by Tracker or a cached game is loaded, it is added to a games hash table. This emits a game_added signal (unrelated to a collection’s game_added), which every collection listens to. If the added game belongs to the collection, the collection adds it. Similarly, on_game_removed() and on_game_replaced() handle the cases where a game which was cached is no longer found by the app, and where a game has been renamed, moved to a different directory, or is still the same cached game but with a different UID, etc.
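The actual implementation is in Vala, but the signal flow described above can be sketched in Python (class and method names here are illustrative, not the real API):

```python
# Python sketch of the described signal flow: a global "game added" event
# is broadcast to every collection, and each collection keeps itself in
# sync by checking whether the new game belongs to it. All names here are
# illustrative, not the actual Vala API of GNOME Games.

class FavoritesCollection:
    def __init__(self):
        self.games = []

    def belongs(self, game):
        return game.get("favorite", False)

    def on_game_added(self, game):
        # Only pick up games that belong to this collection.
        if self.belongs(game) and game not in self.games:
            self.games.append(game)

class GameLibrary:
    """Stands in for the games hash table that emits game_added."""
    def __init__(self):
        self.games = {}
        self.listeners = []  # collections listening for new games

    def add_game(self, uid, game):
        self.games[uid] = game
        for listener in self.listeners:  # "emit" game_added
            listener.on_game_added(game)

library = GameLibrary()
favorites = FavoritesCollection()
library.listeners.append(favorites)

library.add_game(1, {"title": "Doom", "favorite": True})
library.add_game(2, {"title": "Pong", "favorite": False})
print([g["title"] for g in favorites.games])  # ['Doom']
```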

With the general behavior of a collection defined, it was time to introduce a FavoritesCollection which implements Collection.

Favorites Collection

As obvious from the name, the “Favorites collection” is a collection that stores games that a user marks as favorite. Favorite games are marked with a star icon on the thumbnail. A game can be added to favorites from the main Games Page or from the Platforms Page, and removed from favorites from those pages as well as from the Favorites Collection Page. Games are added to or removed from favorites “automagically” depending on the list of currently selected games. If all of the selected games are favorites, then clicking on the favorite action in the action bar will remove them from favorites. If none of the selected games are favorites, then they are all added. If the selected games are a mix of favorite and non-favorite games, then all the non-favorite games are added to favorites. The icon in the favorite action button in the action bar dynamically changes from starred to semi-starred to non-starred depending on the selected games. This, along with tooltips, helps users know what the button will do.
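The selection-dependent behaviour boils down to a small rule. Here is a Python sketch of it (the real implementation is in Vala; the function names are illustrative):

```python
# Sketch of the described favorite-toggle rule: if every selected game is
# already a favorite, unfavorite them all; otherwise make them all
# favorites. Also computes the icon state shown on the action button.
# This mirrors the behaviour described in the post, not the real code.

def favorite_action(selected, favorites):
    """selected: list of game ids; favorites: mutable set of favorite ids."""
    if all(g in favorites for g in selected):
        favorites.difference_update(selected)   # all favorite -> remove all
    else:
        favorites.update(selected)              # none or mixed -> add all

def favorite_icon(selected, favorites):
    marked = sum(1 for g in selected if g in favorites)
    if marked == 0:
        return "non-starred"
    return "starred" if marked == len(selected) else "semi-starred"

favs = {"zelda"}
selection = ["zelda", "mario"]
print(favorite_icon(selection, favs))  # 'semi-starred' (mixed selection)
favorite_action(selection, favs)       # mixed -> add the non-favorites
print(sorted(favs))                    # ['mario', 'zelda']
favorite_action(selection, favs)       # all favorite -> remove them all
print(sorted(favs))                    # []
```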

Collections: Behind The Scenes

Database

So once a FavoritesCollection was ready to go, I needed to work on how it will be stored in the database and how it should load those collections.

Favorite games are stored in the database by adding a new is_favorite column to the games table. The games table stores all games, including “manually added” games and games found by Tracker. So adding or removing any type of game from Favorites is a matter of updating the is_favorite column in games.

However, I can’t just add a new column to the games table creation query. In order to migrate from the old database with no is_favorite column to the new one with the column, I’ll have to give some extra commands to the database. This shouldn’t always be done, but should depend on the version of the database. As of now, there isn’t a database versioning system in Games, so a simple database migration support was quickly implemented.

This migration leverages a .version file in the data directory. It was originally used to migrate from the old data directory structure to a newer one; however, the file is empty, and the data directory migration (not database migration) only checks whether the .version file exists. That is fine for a one-time migration, but having versions in the .version file might come in handy later on, so I configured the database migration to write the version number into .version so it can be used in the future. No .version file is treated as version 0, an empty .version is treated as version 1, and from version 2 onward the .version file will contain the version number. So in this case, the database migration to support favorite games should be applied for all versions less than 2. Luckily, the migration here only required dropping the games table and creating it again with the new is_favorite column :). After applying the migration, the database version is bumped and written into the .version file. Any future migrations may make use of this file too.
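The versioning scheme described above (no file means version 0, an empty file means version 1, and from version 2 on the file holds the number) can be sketched in Python; the paths and the migration step itself are illustrative only:

```python
# Sketch of the described .version scheme: a missing file is version 0,
# an empty file is version 1, and from version 2 on the file contains the
# number itself. The migration step shown is a placeholder, not the real
# SQL that GNOME Games runs.
import os
import tempfile

DATABASE_VERSION = 2  # version that introduced the is_favorite column

def read_version(path):
    if not os.path.exists(path):
        return 0          # no .version file: oldest data directory layout
    with open(path) as f:
        text = f.read().strip()
    return int(text) if text else 1  # empty file: one-time migration ran

def migrate(path):
    if read_version(path) < 2:
        # Here the real code would recreate the games table with the
        # new is_favorite column before bumping the version.
        pass
    with open(path, "w") as f:
        f.write(str(DATABASE_VERSION))

with tempfile.TemporaryDirectory() as d:
    vfile = os.path.join(d, ".version")
    print(read_version(vfile))   # 0: file doesn't exist yet
    open(vfile, "w").close()
    print(read_version(vfile))   # 1: file exists but is empty
    migrate(vfile)
    print(read_version(vfile))   # 2: version number written to the file
```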

The database allows adding and removing favorite games, fetching the complete list of favorites, and checking whether a game is a favorite. That’s about it for Favorites. Other types of collections, which will be introduced in the upcoming days, would mostly be stored in different tables.

Collection Manager

As of now, the only available collection is the Favorites Collection. But as said, soon there will be “Recently Played” and custom collections that users can create and manage. So the CollectionManager has to be designed keeping in mind that it will handle all types of collections. But most of the collection-specific behavior can be neatly abstracted.

CollectionManager handles the creation of all types of collections. It consists of a hash table that stores all collections, and as of now it holds a FavoritesCollection object. Since any action related to a collection is better handled by CollectionManager, it also provides a favorite_action() which accepts a list of games and, depending on them, adds them to or removes them from favorites. Apart from that, on creation, CollectionManager calls load() on every collection in the hash table, which loads all the games belonging to that particular collection from the database.

Collection Model

It is a ListModel that contains all the available collections, which can be bound to the CollectionsPage’s flowbox. CollectionsPage (similar to GamesPage) is a widget which displays the available collections to the user with thumbnails and titles.

Collections: What You See

With all these put together we get a functional Favorites Collection. Here are some pictures:

Main Games Page

Games Page. With a star in the thumbnails of favorite games


Collections Page

Collections Page. Might look a bit lonely but will soon be accompanied by other collections : )


Favorite Collections Subpage

Favorites Page. This is more or less how any other collection would look like too.


As you can see, the notable visual changes here are:

  • A new Collections Page which can be navigated to, from the view switcher
  • A star on thumbnails of games marked as favorite, in Games Page and Platforms Page
  • A collection thumbnail which is generated from the covers of games that belong to that collection
  • A Collection (Sub)Page which when clicked opens a page with games that belong to that collection

By the way, you can also see the new game covers with the blurred background that I was experimenting with in the last blog post.

So this is where my progress is at currently. You can see the relevant MR here.

Whats Next?

In the upcoming weeks I’ll be implementing the rest of the collections. Currently my plan is to implement the “Recently Played” collection next, as it would be the simpler one to implement. There is also some smaller stuff which needs work, such as search and selection support for Collections, but that is planned for the end, after introducing all types of collections.

Conclusion

The work is going great and all the challenges have been fun to solve so far. Many thanks to my mentor, Alexander for being very helpful as always. Thank you all for reading my blog posts.

See you next time :)

Full Throttle

The coding period for GSoC 2020 has started and I have begun work on my summer project. As said in my introductory post, I will be working on adding functionality to create and manage game collections in GNOME Games, with help from Alexander (@alexm). After the project is complete, it will provide users with a shiny new ability to add any games to their own custom collections, plus some additional features providing quickly accessible, automatically generated collections such as recently played, favorites, and hidden games.

I started out by separating the work into independently manageable chunks so that I can open several smaller merge requests, rather than a single large one, which I can imagine would be horrible to manage, and even worse for Alexander to review. And my code, however small it is, usually needs a lot of fixing.

So the first chunk I decided to work on is… Selection Mode! I decided selection mode would be the best part to start with so that when I get to modifying the database part to store all the collections and the games in it, I will have all the necessary functionality to test it with actual real world data rather than some made up data using temporary spaghetti code.

Selection Mode

Selection mode will help the user select games to be added to their collections. It may also be useful later on for anything that requires a user to choose a list of games (maybe even a list of collections), since it’s not tightly integrated with collections.

Selection mode in Games is just like in most other apps. There will be a button in the header bar which can be clicked to enable selection mode. It changes the header bar into a blue “selection-mode” styled header bar with options to search, change selection modifier and cancel the selection mode.

On to implementation details: I use GtkButtons for the selection mode and cancel buttons, a GtkToggleButton for the search, which already works as is, and a GtkMenuButton for the selection modifier with select-all and select-none actions. A simple is-selection-mode prop is bound, propagated, and listened to, for it all to come together. When selection mode is enabled the header bar changes, and the GtkCheckButtons in an overlay inside the game thumbnails reveal themselves smoothly. Each game has its thumbnail in a GameIconView, and several of them are children of a GtkFlowBox. GameIconView has a checked property which, when set, automatically handles adding/removing games to a GenericSet named selected_games. Since Vala arrays do not have a built-in remove-element function, Alexander directed me to GenericSet, which works nicely here.

The select all action can be used to select all games in the current page, which means that apart from selecting all games in the main games page, it can also select only the games that belong to the currently selected platform, or only the games that match the user’s search query.

Since Games uses Libhandy (a very cool library for adaptive and sleek widgets and UI), it’s a very adaptive app, and the selection mechanism requires some slight tweaks. At first I had some trouble with all the edge cases when the HdyLeaflet folds from desktop view to a more compact UI. Thanks to Alexander, I realized I shouldn’t be manually changing a lot of unrelated props in unrelated places, and that made the code much, much better with a lot fewer bugs.

And now, I present to you the current state of selection mode:

Selection in action

Some things to note:

  • While most of the selection code is ready to go, it still isn’t really complete without a bottom bar which presents the user options to add games to a collection, or mark them as favorite/hidden. This will be done in the upcoming weeks with work on the database and CollectionsPage.
  • For those nice blue checkboxes to work without any workarounds, a little Adwaita theme fix is needed. Alexander helped me open my first MR to GTK, which will remove the need for any CSS style workarounds to get those blue checkboxes for use cases such as this one.
  • There is a slight hiccup with selecting multiple games quickly. It happens even when input is “slow enough” for some real world usage, so this might annoy users who are a bit quick with their inputs. We couldn’t yet pinpoint what causes it, but it happens with other apps that use GtkFlowBox too, so it’s probably not something that Games is doing.

The next part to work on, as said above, would be modifying the database to support collections. It will need new queries to add games to collections, get games in a certain collection, delete, rename, etc. Then I will work on a CollectionsPage where users can see Recently Played, Favorites, and their custom collections, and manage them.

I think I’m moving at the right pace, maybe even a bit fast, but that’s alright I guess, because my exams can come up any day in the upcoming weeks, and no one really knows exactly when, due to recent events.

You can view the selection mode MR here.

Other stuff I’ve been doing

If you are still here, I can share some other stuff I’ve been poking around with :)

Finally scratched my itch to check out OpenGL

Some games/emulators produce a rotated display output which makes the game unplayable. So I thought I’d give a shot at implementing display rotation for Games. Games uses retro-gtk, which is a GTK+ Libretro frontend framework that helps Games work with Libretro games/emulators. So that’s the place to poke around in to implement display rotation.

I honestly don’t know what I expected, but I played with the first thing that I suspected and voilà, it actually worked: the output was rotated. It turned out that I was playing with texture coordinates; I simply “cycled” them and the display was rotated. The End… except no, it isn’t that simple, because that’s a bit of a hack™.
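For the curious, the “cycling” hack is easy to picture: shifting a quad’s four texture coordinates along by one corner rotates the sampled image by 90°. A small Python sketch of just that idea (the real code lives in retro-gtk’s C/GLSL; this only shows the coordinate shuffle):

```python
# Sketch of the texture-coordinate "cycling" hack: shifting a quad's four
# texture coordinates by one corner rotates the sampled image by 90
# degrees. The real implementation is in retro-gtk's C/GLSL; this only
# demonstrates the idea.

# Texture coordinates for the quad's corners, in drawing order:
# bottom-left, bottom-right, top-right, top-left.
TEXCOORDS = [(0, 0), (1, 0), (1, 1), (0, 1)]

def cycle(coords, quarter_turns):
    """Rotate the image by 90 * quarter_turns degrees by cycling coords."""
    n = quarter_turns % len(coords)
    return coords[n:] + coords[:n]

print(cycle(TEXCOORDS, 1))  # [(1, 0), (1, 1), (0, 1), (0, 0)]
print(cycle(TEXCOORDS, 4))  # full turn: back to the original order
```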

What I did could technically be used, but it wouldn’t be flexible later on. One particular use case suggested by Alexander was adding animations on rotation, to make it more elegant. So I should look for a more flexible solution, and thus began my journey in the lands of OpenGL. I never knew a thing about OpenGL, but I was always interested in how graphics/video cards work, and I have all the time I need, since it’s a lockdown due to COVID-19. The solution was to use GLSL shaders and framebuffer objects, or FBOs for short.

So shaders are little programs that run on the graphics card, which is incredibly parallel. To rotate a texture (think of a texture as a game’s output in this case), I need to write a simple vertex shader which rotates every vertex of the texture (here four, since it’s a rectangle), and a basic pass-through fragment shader (which is copy-pasted, since I don’t need to modify the pixels in the texture). So I did that. But I can’t just use these shaders yet, because retro-gtk uses another set of shaders to support video filters such as smoothing and CRT effects, which can be applied on top of the game’s video output for a retro feel. So I will need to first render the output with the video filter effects and then rotate it using my brand new shaders. And for that I need to use an offscreen framebuffer.

Think of a framebuffer as something that holds information about the color of pixels, how “deep” a pixel is from the screen, etc. For this use case, really think of it as a temporary place to store a texture. What I did was draw the texture with the video filter onto the offscreen framebuffer, then use our shiny new rotate shader to rotate the output in the offscreen framebuffer and “draw” it onto the default framebuffer, which is what you see on the screen. This method of rendering to an offscreen framebuffer and sampling it in the next stage of the pipeline is often called “render to texture”.

I was stuck (blank output) at rendering to the default framebuffer until Alexander helped me understand that at the final stage I should bind to GtkGLArea’s framebuffer and not the default framebuffer, since retro-gtk uses a GtkGLArea for rendering with OpenGL. Another issue I was stuck with for a month was that rotating corrupted the window’s headerbar with client-side rendering. I was very frustrated with this bug, because everything seemed to be working, yet something I couldn’t fix made it unusable. Or so I thought. Again thanks to Alexander, I realized I shouldn’t call OpenGL functions outside the functions provided by GtkGLArea. Finally, with it all working, here’s a demo:

Hopefully it will be available soon. Regardless, this was an awesome experience. I worked in C, got a short introduction to OpenGL, and learned basic dynamic memory management.

You can see this work here.

A Cairo Adventure

This is one of my recent projects. I felt Games’ thumbnail covers could have a bit more eye candy instead of the letter-boxed covers you can see in my first selection-mode picture. So I thought of a blurred, enlarged background of the cover behind the cover for the 1:1 cover aspect ratio, instead of letter-boxing it. Alexander gave me the green light and I began searching for how to do it. I assumed someone on the internet had already done blur with Cairo. So I created a simple app to test out different blur implementations. But most of them did not produce the effect we were looking for, and I was too lazy to tweak them for it. After trying about 4 different implementations I almost gave up. But then Alexander pointed me at GTK’s own blur, used for drawing shadows. So I tried it out and it kind of works, only the colors go away and the picture becomes wavy with a higher blur radius. That’s because I had commented out an assert that made sure the image I give it is in A8 format (alpha only), so that it wouldn’t crash when I gave it RGB24 images, just for an experiment :p.

But I had an idea. In the hope that GTK’s Cairo blur had initially supported RGB24, I checked out gtkcairoblur.c’s history, and… it had! It supported ARGB32, RGB24, and A8. After some quick ctrl+c ctrl+v, which is what I’m good at, it worked! Now I need to port it to Vala so that it can be used in Games (without extern funcs).
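To give an idea of what image blurring involves, here is a hedged sketch of a naive single-channel box blur in Rust. This is for illustration only: GTK’s shadow blur in gtkcairoblur.c is a faster exponential blur, not this algorithm.

```rust
// Naive single-channel box blur, for illustration only (not GTK's algorithm).
// Each output sample is the average of its neighbors within `radius`,
// with the window clamped at the edges.
fn box_blur_1d(src: &[f32], radius: usize) -> Vec<f32> {
    (0..src.len())
        .map(|i| {
            let lo = i.saturating_sub(radius);
            let hi = (i + radius).min(src.len() - 1);
            let window = &src[lo..=hi];
            window.iter().sum::<f32>() / window.len() as f32
        })
        .collect()
}

fn main() {
    // A single bright pixel spreads out into its neighborhood.
    let row = [0.0, 0.0, 1.0, 0.0, 0.0];
    println!("{:?}", box_blur_1d(&row, 1));
}
```

A real 2D blur runs a pass like this horizontally and then vertically (separable filtering), and running it per channel is roughly what extending an A8 blur to RGB24 boils down to.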

You can see my work up until now here. I added Flatpak and CI support because why not :). And finally, here’s a sample of what the covers would look like with a blurred background:

Sample Blurred Covers

Conclusion

I’m really having a great time contributing to GNOME. It’s exciting, and the community is very nice and helpful. Special thanks to my mentor, Alexander, for all the help.

I’ll try blogging at least once or twice a month :). But until then goodbye!

July 02, 2020

Splitting up the Frame Clock

Readers be advised, this is somewhat of a deep dive into the guts of Mutter. With that out in the open, let’s start!

Not too long ago Mutter saw a merge request land that has one major aim: split up the frame clock so that, when using the Wayland session, each monitor is driven by its own frame clock. In effect, the goal is that e.g. a 144 Hz monitor and a 60 Hz monitor active in the same session will not have to wait for each other to update, and the space each occupies on the screen will draw at its own pace. A window on the 144 Hz monitor will paint at 144 Hz, and Mutter will composite to that monitor at 144 Hz, while a window on the 60 Hz monitor will paint at 60 Hz and Mutter will composite to that monitor at 60 Hz.

glxgears on a 75 Hz monitor next to weston-simple-egl on a 60 Hz monitor.

All of this is roughly achieved by the changes summarized below.

Preface

In the beginning of times, Clutter was an application toolkit. As such, it assumed (1) the existence of a window compositor, and (2) that the compositor is a different process. Back then, Wayland was still in its early infancy, and those assumptions didn’t conflict with writing an X11 window manager. After all, an X11 window manager is pretty much just another client application.

Over time, however, Clutter started to grow Wayland integration in itself. Deeper and deeper surgeries were made to accommodate it being used by a Wayland compositor.

In 2016, the Cogl and Clutter codebases were merged with the Mutter codebase, and they all live in the same repository now. However, to this day, relics from the time when Clutter was an application toolkit are still present in Mutter’s Clutter. One such relic is ClutterMasterClock.

ClutterMasterClock

ClutterMasterClock was the main frame clock that drove Clutter painting. As an application toolkit, only a single, global frame clock was necessary; but as a compositor toolkit, this design doesn’t fit the requirements for multi-monitor setups.

Over the last cycles, there have been some attempts to make it handle multiple monitors slightly better, juggling monitors with their own refresh rates and clocks using various tricks, but the fundamental design stood in the way of substantial progress, so it has been completely decommissioned.

Enter ClutterFrameClock.

ClutterFrameClock is the new frame clock object that aims to drive a single “output”. Right now, it has a fixed refresh rate, and a single “frame listener” and “presenter” notifying about frames being presented. It is also possible to have multiple frame clocks running in parallel.
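As a rough illustration of what a fixed-refresh-rate frame clock has to compute, here is a simplified sketch in Rust. This is not ClutterFrameClock’s actual scheduling code; the function name and units are mine, and real scheduling also accounts for render time and presentation feedback.

```rust
// Simplified sketch of per-output frame scheduling (not Mutter's real code):
// given the last presentation time and a fixed refresh rate, find the next
// moment the clock should present a frame.
fn next_presentation_time_us(last_presentation_us: u64, refresh_rate_hz: f64, now_us: u64) -> u64 {
    let interval_us = (1_000_000.0 / refresh_rate_hz) as u64;
    // How many full refresh intervals have elapsed since the last presentation?
    let elapsed = now_us.saturating_sub(last_presentation_us);
    let intervals_passed = elapsed / interval_us;
    last_presentation_us + (intervals_passed + 1) * interval_us
}

fn main() {
    // Two independent clocks: a 60 Hz and a 144 Hz monitor tick at their own pace.
    println!("{}", next_presentation_time_us(0, 60.0, 10_000));
    println!("{}", next_presentation_time_us(0, 144.0, 10_000));
}
```

The point of having one such clock per output is exactly what the example shows: each monitor’s timings are derived from its own refresh interval, never from another monitor’s.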

However, ClutterFrameClock alone isn’t enough to achieve independence of monitor redraws.

Stage Views

Mutter has a single stage that covers the union of all monitor rectangles. But how does it render different contents to each one of them?

That’s one of the main responsibilities of ClutterStageView.

ClutterStageView was the answer to the need to draw the stage to different framebuffers. A ClutterStageView corresponds roughly to one monitor. Each ClutterStageView holds the on-screen framebuffer that the monitor displays; if shadow framebuffers are used, ClutterStageView also handles them; and finally, it also handles monitor rotation.

Now, ClutterStageView also handles the monitor’s frame clock. By handling the frame clock, each view is also responsible for notifying about frames being presented, and for dispatching the frame clock.

The frame-scheduling logic (including flip counting, schedule-time calculation, etc.) was spread across ClutterMasterClockDefault, ClutterStage, ClutterStageCogl, MetaRendererNative, MetaStageNative, and MetaStageX11, but has now been concentrated in ClutterFrameClock and ClutterStageView alone.

Actors, Actors Everywhere

When animating interface elements, the core object doing that is ClutterTimeline and its subclass, ClutterTransition.

Timelines and transitions saw frames whenever the master clock ticked. With the master clock now gone, they need to find an appropriate frame clock to drive them. In most (and after this change, effectively all) cases a timeline was used to directly drive an animation related to an actor. This indirect relationship is now made explicit: the timeline uses the actor to find what stage view it is being displayed on and, with that information, picks an appropriate frame clock to attach to.

For transitions, used extensively by GNOME Shell to implement animations, this is handled by making a ClutterAnimatable provide the actor, and for stand-alone timelines, it’s a property set directly on the timeline before it’s started.

This means that when an actor moves across the stage and enters a different stage view, the timeline will be notified about this and will decide whether to migrate to a different frame clock.
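A hedged sketch of that view-picking idea, with hypothetical types (this is not Clutter’s actual logic, and its real policy may differ): given the stage views an actor overlaps, pick the one with the largest intersection and use its frame clock.

```rust
// Hypothetical sketch of picking a stage view (and thus a frame clock) for
// an actor; Clutter's actual policy may differ. Rectangles are (x, y, w, h).
#[derive(Clone, Copy)]
struct Rect { x: i32, y: i32, w: i32, h: i32 }

fn intersection_area(a: Rect, b: Rect) -> i64 {
    let dx = (a.x + a.w).min(b.x + b.w) - a.x.max(b.x);
    let dy = (a.y + a.h).min(b.y + b.h) - a.y.max(b.y);
    if dx <= 0 || dy <= 0 { 0 } else { dx as i64 * dy as i64 }
}

// Returns the index of the view the actor overlaps the most, if any.
fn pick_view_for_actor(actor: Rect, views: &[Rect]) -> Option<usize> {
    views
        .iter()
        .enumerate()
        .map(|(i, v)| (i, intersection_area(actor, *v)))
        .filter(|&(_, area)| area > 0)
        .max_by_key(|&(_, area)| area)
        .map(|(i, _)| i)
}

fn main() {
    // Two side-by-side monitors; an actor straddling the boundary mostly
    // sits on the first one, so its timeline would use that view's clock.
    let views = [Rect { x: 0, y: 0, w: 100, h: 100 }, Rect { x: 100, y: 0, w: 100, h: 100 }];
    let actor = Rect { x: 70, y: 0, w: 40, h: 40 };
    println!("{:?}", pick_view_for_actor(actor, &views));
}
```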

What About X11?

In the X11 session, we composite the whole X11 screen at once, without any separation between monitors. This remains unchanged, with the difference being where scheduling takes place (as mentioned in an earlier point). The improvements described here are thus limited to using the Wayland session.

Be aware of API changes

This is quite a substantial change in how painting works in Mutter, so API changes could not be avoided. With that in mind, the changes needed are small, and mostly handled transparently by GNOME Shell itself. In fact, in all of GNOME Shell’s JavaScript code, only two places needed changes.

To be specific, for extension developers, there are two things to keep in mind:

  • If you use St.Adjustment, you must now pass an actor when constructing it. This actor will determine which frame clock drives the adjustment.
  • Some signals saw their type signatures change, namely ClutterStage::presented and ClutterStage::after-paint.

Final Thoughts

This is a big achievement for Mutter, GNOME Shell, their users, and especially the contributors who were part of this. The road to reach this point was long and tortuous, and required the coordinated efforts of dozens of contributors over the course of at least 5 years. We’d like to take a moment to appreciate this milestone and congratulate each and every contributor who was part of it. Thank you so much!

This Month in Mutter & GNOME Shell | May and June 2020

The volunteers and contributors working on Mutter and GNOME Shell have been busy in the past couple of months — so much so that we didn’t have bandwidth to write the May development report!

As a consequence, this development summary will have an above average number of changes to highlight.

GNOME Shell

Preparations for Customizable App Grid

As part of the preparations for a customizable application grid, a new layout manager was written and replaced the current icon grid code. This new layout manager is in many ways more suitable for current and future changes:

  • It follows the delegation pattern that is common in Clutter. As such, it is a layout manager, not a UI element itself.
  • It allows more precise control over how the grid is displayed.
  • It uses modern JavaScript practices and is, in general, more maintainable and comprehensible code.

The most visible impact is that it now selects a rows × columns configuration that is closest to the aspect ratio of the display:

New layout manager on portrait mode

There are still improvements to make, especially with ultra-wide displays, but the foundation work is already there, and it will be vastly easier to fine-tune the behavior of the app grid in different scenarios.
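A hedged sketch of how such a selection could work (illustrative only, in Rust rather than the Shell’s JavaScript, and not its actual layout-manager code): try every column count and keep the grid whose columns/rows ratio is closest to the display’s aspect ratio.

```rust
// Illustrative sketch (not GNOME Shell's actual code): pick a rows × columns
// configuration for `n_items` icons whose shape best matches the display.
fn best_grid(n_items: usize, display_aspect: f64) -> (usize, usize) {
    let mut best = (n_items, 1); // (rows, columns)
    let mut best_diff = f64::INFINITY;
    for cols in 1..=n_items {
        let rows = (n_items + cols - 1) / cols; // ceiling division
        let diff = (cols as f64 / rows as f64 - display_aspect).abs();
        if diff < best_diff {
            best_diff = diff;
            best = (rows, cols);
        }
    }
    best
}

fn main() {
    // The same 12 icons get a wide grid on landscape and a tall one on portrait.
    println!("{:?}", best_grid(12, 16.0 / 9.0)); // landscape
    println!("{:?}", best_grid(12, 9.0 / 16.0)); // portrait
}
```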

Also as part of the preparations for a customizable application grid, the Frequent tab was removed. You can read more about the reasons for this removal in the corresponding issue.

Actor Tree Inspector

GNOME Shell’s development tool, the Looking Glass, received a handy new tab to inspect the actor tree:

Actor tree tab
The new actor tree tab in the Looking Glass

This new inspector has been useful for developing GNOME Shell, and hopefully it’ll help extension developers too.

App Folder Dialog Updates

App folder dialogs received a bunch of visual and behavioral improvements, such as covering the entire monitor, and not changing the size of the app grid itself. Take a look:

These dialogs are now paginated, and fixed to 9 app icons per page:

Paginated folder dialogs with 9 items

Like the app grid, folder dialogs now also have better support for touchpad gestures and Drag n’ Drop.

Updates to the Message List Popup

This year, GNOME Shell has a Google Summer of Code intern working on the messages dialog. In preparation for this project, some cleanups and reorganizations of the message list popup landed. More work on this front is happening, and an influx of improvements is expected to come soon, so stay tuned!

Other Changes

GNOME Shell now supports the PrefersNonDefaultGPU key of the Desktop File specification, and will set the appropriate environment variables to launch applications using a dedicated GPU when available.

An unfortunate oversight was causing the Do Not Disturb setting to be reset on startup. This bug was fixed. A potential D-Bus race condition when creating MPRIS media players was corrected. App icons do not vertically stretch in the top bar anymore. These bugfixes were backported to GNOME 3.36.

The rendered contents of labels are now cached in the GPU.

The code that deals with workspaces in GNOME Shell is old, but a large number of cleanups to it has landed (!1119, !1251, !1294, !1297, !1298, !1307, !1310, !1313, !1320, !1333), and even more is under review. These cleanups were much needed in order to improve the overall quality and maintainability of the codebase.

The Extensions app saw some improvements too. The Logout button now works correctly.

When the host system changes timezones, GNOME Shell now properly updates the timezone offsets of the “World Clocks” section of the messages popover.

Finally, the Wacom button-mapping on-screen display received various quality-of-life improvements.

Mutter

Layout Machinery Optimizations

A few exciting optimizations and improvements to Clutter’s layout machinery landed, and they bring groundwork for future improvements as well.

The removal of allocation flags allowed skipping the allocation phase of actors whose absolute position (that is, the on-screen position after performing the linear transformation of the actor vertices) didn’t change.

While routinely profiling Mutter, it was noticed that an abnormally high number of safety type checks were happening in a rendering hot path, during the redraw cycle. Those checks were removed.

Combined, these changes are of notable significance due to how expensive it is to recalculate the layout of actors. Some of them are also required for per-CRTC frame clocks.

Rendering Pipeline Improvements

Cogl now supports setting a maximum mipmap level, in addition to the minimum one, and the background now sets a maximum mipmap level. This avoids creating mipmaps that won’t be used.

Last year, MetaShapedTexture was made into a ClutterContent implementation. This change was important for a multitude of reasons, and will play a special role in the future with upcoming cleanups. However, it also introduced an unforeseen regression: Clutter paints ClutterContents before running the main painting routines, and this broke the existing culling mechanism of Mutter. After some investigation, culling was fixed again.

Lastly, MetaShapedTexture now uses a lighter, more appropriate function to combine opaque areas of windows.

Other Changes

Mutter saw a very, very, very, very large number of code cleanups. In fact, these cleanups combined got rid of almost the entirety of deprecated code!

Mutter also received a series of improvements to its test suite. These improvements range from fixing broken tests and making CI more reliable to adding more tests and reorganizing the entire test suite, among other changes.

Damage tracking, especially when combined with shadow framebuffers, is now working reliably and correctly. Importing DMA buffers is more careful about failures when importing scanout buffers. Finally, a couple of small memory leaks were plugged.

Web-augmented graphics overlay broadcasting with WPE and GStreamer

Graphics overlays are everywhere nowadays in the live video broadcasting industry. In this post I introduce a new demo relying on GStreamer and WPEWebKit to deliver low-latency web-augmented video broadcasts.

Readers of this blog might remember a few posts about WPEWebKit and a GStreamer element we at Igalia worked on. In December 2018 I introduced GstWPE, and a few months later I blogged about a proof-of-concept application I wrote for it. So, learning from this first iteration, I wrote another demo!

The first demo was already quite cool, but had a few downsides:

  1. It worked only on desktop (running in a Wayland compositor). The Wayland compositor dependency can be a burden in some cases. Ideally we could imagine GstWPE applications running “in the cloud”, on machines without a GPU, on bare metal.
  2. While it was cool to stream to Twitch, YouTube and the like, these platforms currently can ingest only RTMP streams. That means the latency introduced can be quite significant, depending on the network conditions of course, but even in ideal conditions the latency was between one and two seconds. This is not great, in the world we live in.

To address the first point, WPE founding engineer Žan Doberšek enabled software rasterizing support in WPE and its FDO backend. This is great because it allows WPE to run on machines without a GPU (like continuous integration builders and test bots), but also “in the cloud”, where machines with a GPU are less affordable than bare metal! Following up, I enabled this feature in GstWPE. The source element caps template now has video/x-raw, in addition to video/x-raw(memory:GLMemory). To force swrast, you need to set the LIBGL_ALWAYS_SOFTWARE=true environment variable. The downside of swrast is that you need a good CPU; how good depends on the video resolution and framerate you want to target.

On the latency front, I decided to switch from RTMP to WebRTC! This W3C spec isn’t only about video chat! With WebRTC, sub-second live one-to-many broadcasting can be achieved without much effort, provided you have a good SFU. For this demo I chose Janus, because its APIs are well documented and it’s a cool project! I’m not sure it would scale very well in large deployments, but for my modest use case, it fits very well.

Janus has a plugin called video-room which allows multiple participants to chat. Now imagine a participant only publishing its video stream, and multiple “clients” connecting to that room without sharing any video or audio stream: one-to-many broadcasting. As it turns out, GStreamer applications can already connect to this video-room plugin using GstWebRTC! A demo was developed in Python by tobiasfriden and saket424; it recently moved to the gst-examples repository. As I kind of prefer to use Rust nowadays (whenever I can, anyway), I ported this demo to Rust, and it was upstreamed in gst-examples as well. This specific demo streams the video test pattern to a Janus instance.

Adapting this Janus demo was then quite trivial. By relying on a similar video mixer approach I used for the first GstWPE demo, I had a GstWPE-powered WebView streaming to Janus.

The next step was the actual graphics overlay infrastructure. In the first GstWPE demo I had a basic GTK UI allowing me to edit the overlays on the fly. That can’t be used for this new demo, because I wanted it to run headless. After doing some research I found a really nice NodeJS app on GitHub, developed by Luke Moscrop, who is actually one of the main developers of the BBC’s Brave project. The Roses CasparCG Graphics app was developed in the context of the Lancaster University Students’ Union TV Station. It starts a web server on port 3000 with two main entry points:

  • An admin web UI (at /admin/), allowing you to create and manage overlays, like sports scoreboards, info banners, and so on.
  • The target overlay page (at the root of the server), which is a web page without a predetermined background, displaying the overlays with HTML, CSS and JS. This page is meant to be fed to CasparCG (or GstWPE :))

After making a few tweaks in this NodeJS app, I can now:

  1. Start the NodeJS app, load the admin UI in a browser and enable some overlays
  2. Start my native Rust GStreamer/WPE application, which:
    • connects to the overlay web-server
    • mixes a live video source (a webcam, for instance) with the WPE-powered overlay
    • encodes the video stream to H.264, VP8 or VP9
    • sends the encoded RTP stream using WebRTC to a Janus server
  3. Let “consumer” clients connect to Janus with their browser, in order to see the resulting live broadcast.

(If the video doesn’t display, here is the Youtube link.)

This is pretty cool and fun, as my colleague Brian Kardell mentions in the video. Working on this new version gave me more ideas for the next one. And very recently the audio rendering protocol was merged in WPEBackend-FDO! That means even more use-cases are now unlocked for GstWPE.

This demo’s source code is hosted on Github. Feel free to open issues there, I am always interested in getting feedback, good or bad!

GstWPE is maintained upstream in GStreamer and relies heavily on WPEWebKit and its FDO backend. Don’t hesitate to contact us if you have specific requirements or issues with these projects :)

July 01, 2020

Summer Maps

Since it's been a while since the last post, I thought I should share a little update on some goings-on with Maps.

There's now a new night mode, utilizing Mapbox's dark street tile set:

Another thing that has been requested from time to time is showing labels in the satellite mode (“hybrid” aerial). Originally the plan was more along the lines of rendering vector tile data on the client side and having this rendered as a separate layer on top of the regular “vanilla” aerial tile set. But since vector tile support has not materialized yet, another idea has been to take advantage of Mapbox's hybrid raster tiles (“satellite-streets”, as they call them). So I decided to implement that, to finally have this feature:

So, when selecting the aerial view, a checkbox appears allowing you to switch on the hybrid mode.

Another thing I have missed for a while was having some sort of regression testing, e.g. some form of unit tests. I decided to roll a custom, quite simplistic solution consisting of a small bit of Meson “code” that dynamically builds launch scripts invoking GJS on each of a set of .js files and has the Meson test clause execute them, as can be seen here: https://gitlab.gnome.org/GNOME/gnome-maps/-/tree/master/tests.
It currently only has a few test cases, but it's a start, I guess :-)

Furthermore, I took some time to make the rendering of various places where numbers and times are shown use the locale-dependent formatting functionality in ES (JavaScript), to get rid of some remaining places that still used hard-coded %d-like format strings, which resulted in always using Western-style digits, as can be seen in the following, using a Persian (فارسی) locale:


But maybe we should keep the most exciting thing till last… a little over a year ago I started a new project (libshumate) with the intention of trying to build a GTK 4 implementation of a libchamplain-like API for rendering map tiles (and markers and such). Lately Corentin Noël (tintou) took up the ball and has managed to get it to a state where it's working well enough to actually display stuff (and scroll and zoom around):

This is the simple “launcher” demo from within the project, actually displaying a map in a GTK 4 world.
And since everything is a GTK widget, you can use the GTK Inspector to look around at the internals for testing/debugging:

And “everything is a widget” includes the actual tiles, so you can for example toggle off the visibility of a single map tile, since it's just a regular GTK widget, like so:


I'm very impressed with Corentin's work!
It's very exciting; I think it's at a point where it should be possible to start WiP work on using it in Maps (though for now probably with only bare-bones rendering of the actual map view working).

v3dv status update 2020-07-01

About three weeks ago there was a big announcement about the status of the Vulkan effort for the Raspberry Pi 4. Now the source code is public. Taking into account the interest it got, and that the driver is now more usable, we will try to post status updates more regularly. Let’s talk about what’s happened since then.

Input Attachments

Input attachments are one of the main sub-features of Vulkan multipass, and we’ve gained support for them since the announcement. In Vulkan, multipass is supported directly by the API: renderpasses can have multiple subpasses, these can have dependencies between each other, and each subpass defines a subset of “attachments”. One attachment that is easy to understand is the color attachment: this is where a given subpass writes a given color. Another, the input attachment, is an attachment that was updated in a previous subpass (for example, it was the color attachment in that previous subpass), and that you get as an input in following subpasses. From the shader POV, you interact with it as a texture, with some restrictions. One important restriction is that you can only read the input attachment at the current pixel location. The main reason for this restriction is that on tile-based GPUs (like the rpi4) all primitives are batched into tiles and fragment processing happens one tile at a time. In general, if you can live with those restrictions, Vulkan multipass and input attachments will provide better performance than traditional multipass solutions.

If you are interested in more details, you can check out ARM’s very nice presentation “Vulkan Multipass mobile deferred done right”, or Sascha Willems’ post “Vulkan input attachments and sub passes”. The latter also includes information about how to use them, and code snippets from one of his demos. For reference, this is how the input attachment demo looks on the rpi4:

Compute Shader

Given that this was one of the most requested features after the last update, we expect that this will likely be the most popular news from this post: compute shaders are now supported.

Compute shaders give applications the ability to perform non-graphics tasks on the GPU, outside the normal rendering pipeline. For example, they don’t have vertices as input or fragments as output, but they can still be used for massively parallel GPGPU algorithms. For example, this demo from Sascha Willems uses a compute shader to simulate cloth:

Storage Image

Storage images are another recent addition. A storage image is a descriptor type that represents an image view and supports unfiltered loads, stores, and atomics in a shader. In most other ways it is really similar to the well-known OpenGL concept of a texture. Storage images are really common with compute shaders: compute shaders can’t render directly to any image, so if they need to produce an image, they will likely update a storage image instead. In fact, the two Sascha Willems demos using storage images also require compute shader support:

Performance

Right now our main focus for the driver is working on features, targeting a compliant Vulkan 1.0 driver. That said, now that we support a good range of features and can run non-basic applications, we have devoted some time to analyzing whether there were clear points where we could improve performance. Among these, we implemented:
1. A buffer object (BO) cache: internally we allocate and free buffer objects really often for basically the same tasks, so there is a constant need for buffers of the same size. Each allocation/free requires a DRM call, so we implemented a BO cache (based on the existing one in the OpenGL driver) so that freed BOs are added to a cache and reused if a new BO is allocated with the same size.
2. New code paths for buffer-to-image copies.
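The BO cache idea can be sketched like this (a hedged illustration in Rust; the real v3dv cache is C code managing DRM buffer handles, and all names here are mine):

```rust
use std::collections::HashMap;

// Illustrative sketch of a buffer-object cache keyed by size; the real v3dv
// cache is C code wrapping DRM allocations, not this.
struct Bo { size: usize }

#[derive(Default)]
struct BoCache {
    free_lists: HashMap<usize, Vec<Bo>>,
    fresh_allocations: usize, // stands in for the expensive DRM call
}

impl BoCache {
    fn alloc(&mut self, size: usize) -> Bo {
        // Reuse a freed BO of the same size if one is cached...
        if let Some(bo) = self.free_lists.get_mut(&size).and_then(|list| list.pop()) {
            return bo;
        }
        // ...otherwise fall back to a real allocation (the DRM call).
        self.fresh_allocations += 1;
        Bo { size }
    }

    fn free(&mut self, bo: Bo) {
        // Instead of returning the BO to the kernel, park it for reuse.
        self.free_lists.entry(bo.size).or_default().push(bo);
    }
}

fn main() {
    let mut cache = BoCache::default();
    let bo = cache.alloc(4096);
    cache.free(bo);
    let _reused = cache.alloc(4096); // served from the cache, no new DRM call
    println!("fresh allocations: {}", cache.fresh_allocations);
}
```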

Bugfixing!!

In addition to work on specific features, we also spent some time fixing specific driver bugs, using failing Vulkan CTS tests as reference. Thanks to that work, the Sascha Willems’ radial blur demo is now properly rendering, even though we didn’t focus specifically on working on that demo:

Next?

Now that the driver supports a good range of features and we are able to test more applications and run more Vulkan CTS Tests with all the needed features implemented, we plan to focus some efforts towards bugfixing for a while.

We also plan to start to work on implementing the support for Pipeline Cache, which allows the result of pipeline construction to be reused between pipelines and between runs of an application.

GSoC 2020: the first milestone

An update for my GSoC project

What is best in open source projects?

Open source project maintainers have a reputation of being grumpy and somewhat rude at times. This is not unexpected, as managing an open source project can be a tiring experience, which can lead to exhaustion and thus to sometimes being a bit too blunt.

But let's not talk about that now.

Instead, let's talk about the best of times, the positive outcomes, the things that really make you happy to be running an open source project. Patches, both bug fixes and new features, are like this. So is learning about all the places people are using your project. Even better if they are using it in ways you could not even imagine when you started. All of these things are great, but they are not the best.

The greatest thing is when people you have never met or even heard of before come to your project and then on their own initiative take on leadership in some subsection in the project.

The obvious thing to do is writing code, but this also covers things like running web sites, proofreading documentation, wrangling with CI, and even helping other projects to start using your project. At this point I'd like to personally list all the people who have contributed to Meson in this way but it would not be fair as I'd probably miss out some names. More importantly this is not really limited to any single project. Thus, I'd like to send out the following message to everyone who has ever taken ownership of any part of an open source project:

Keep on rocking. You people are awesome!

Refactoring Fractal: Remove Backend (II)

So the time finally came to remove the Backend struct! The bits left over from the previous patch have been removed; these were not just state but also a ThreadPool and a cache for some info. Those were fitted into AppOp without too much thought about consistency.

But what does this actually mean for the internal structure of the code?

The result is that any state or utility needed for making requests and modifying the UI is held in a single place in the app. With that, the loop in Backend has been removed as well; instead of sending messages to the receiver loop from the backend, the HTTP request is sent directly from a spawned thread (to keep the UI thread unblocked), which also retrieves the response. Put more simply, I replaced message passing to the backend loop with spawning threads, which the loop did anyway to be able to handle multiple requests at the same time.
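The pattern can be sketched roughly like this (hedged: `fetch` is a stand-in for the real HTTP call, and the types are hypothetical, not Fractal's actual API):

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the real HTTP request/response function; Fractal performs an
// actual Matrix HTTP request here.
fn fetch(url: &str) -> Result<String, String> {
    Ok(format!("response from {}", url))
}

// Instead of message-passing to a backend loop, spawn a thread that performs
// the request directly and hands the result back to the UI thread's channel.
fn request_in_background(url: String, tx: mpsc::Sender<Result<String, String>>) {
    thread::spawn(move || {
        let result = fetch(&url);
        // Ignore send errors: the UI may have gone away.
        let _ = tx.send(result);
    });
}

fn main() {
    let (tx, rx) = mpsc::channel();
    request_in_background("https://example.org".to_string(), tx);
    // The UI thread stays unblocked and picks up the result when ready.
    println!("{:?}", rx.recv().unwrap());
}
```

Each request gets its own thread, which is what the old backend loop was spawning internally anyway.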

I acknowledge that doing this kind of parallelism with system threads in 2020 is a very crude way of doing the task, to say the least, but using coroutines would require a significant amount of work in other areas of the app right now.

But I didn’t stop there. Having replaced message passing to a loop with calling the right function in a separate thread, I could take the other half of the task done in the receiver loop and bring it to the same thread, after completion of the request/response function. In the process, many of the variants of BKResponse (the enum that the backend loop sent back to the app loop) were removed, leaving only those that carry error information. I didn’t do the same for error conditions because many of them were managed in a generic way, and it would have duplicated too much code.

That would seem enough, except that having a loop to manage a single value within the same crate does the same as having a function that processes the data in the same way, only with greater overhead. So I went on and changed the function that set up and ran that remaining loop, repurposing it to act as a dispatcher of errors on a per-call basis by extracting the giant match that was doing the actual work inside the loop. After that, the dispatcher is called from the same thread where requests are made, without any loop or message passing involved.

So now not only is all state in AppOp and the backend loop gone, there are no big busy loops at all in the app code, and error management becomes simpler. Another win is that less data has to cross thread boundaries, with the performance improvement that brings (or at least it should; I didn’t benchmark it, so I don’t know the actual impact).

The merge request with the modifications is here and has been already accepted.

The next thing I will do is tackle that error dispatcher function. I find it very problematic for the future extensibility of error management to have a single match as big as that one. I still have to think it through a bit more, but I’m mostly settled on turning all the variants into separate structs that implement a trait replacing the dispatcher. The functions that do all the request-response work would return those structs directly in case of error, and most of the structs would be dedicated to a single function. This would also allow getting rid of the intermediate conversions to a common Fractal error type that are done now, and losing a lot less information.

June 29, 2020

Fractal: Refactoring and the review process

In this year's GSoC, Alejandro is working on Fractal, moving code from the backend to the client to simplify the code used to communicate with the matrix.org server, so that maybe in the future we can replace fractal-matrix-api with the matrix-rust-sdk. Then we'll have less code to maintain in our project.

This is great work, something needed in a project with several years of technical debt. I created this project to learn Rust, and I was also learning about the Matrix protocol while building it. Other contributors did the same, so we've been building one thing on top of another for years.

In this kind of community-driven project, that's the way to go. For a while there are people interested, and developers think about the design and start to change some parts or to write new functionality following a new design pattern. But volunteer developers' motivation changes over time; they leave the project, and the next one continues the work with a different vision.

That's not a bad thing; it's the greatness of open source. Different people have different motivations to participate in a free software project, and every contribution is welcome. I'm the maintainer of the project and I've spent a lot of time building Fractal, but I don't have the same motivation to work on it now, so it's good to have other people working on it so it can stay alive.

Alejandro is doing great work, and he's not a four-month contributor: he has been working on the backend refactoring for two years now, step by step, and he has plans for the future.

Refactoring a big project is always hard, because there's a lot of code movement and there's always the fear of regressions.

Rust is a great language, and it shines when big code refactorings come along. If it compiles, you know there are no memory errors, dangling pointers, or that kind of problem. If it builds, it will work.

But maybe it will work differently, so the review process is needed to ensure that the application keeps working.

Automated tests are really useful for big code changes and project refactoring, because they give you a quick picture and some certainty that the project is working. But we don't have tests in Fractal :D, so someone should do that.

So here I am, reviewing large MRs with a lot of changed lines. At least GitLab makes this process a bit easier.

What I try to do in the review process is simply read the whole diff and check whether there's some problem in the code. After every change, I run the app and do some testing, trying to use the functionality that could be broken by the new changes.

This takes a lot of time and it's not fun... but someone has to do it. And during the process, I sometimes learn something new. Reading code is an interesting task, and trying to find bugs in code while reading it is useful: it makes you think about the code, what it does, and why.

Now firmware can depend on available client features

At the moment we just blindly assume the capabilities of the front-end client when installing firmware. We can somewhat work around this limitation by requiring a new enough fwupd daemon version, but the GUI client software may be much older than the fwupd version or just incomplete. If you maintain a text or graphical client that uses fwupd to deploy updates then there’s an additional API call I’d like you to start using so we can fix this limitation.

This would allow, for instance, the firmware to specify that it requires the client to be able to show a runtime detach image. This would not be set by a dumb command line tool using FwupdClient, but would be set by a GUI client that is capable of downloading a URL and showing a PNG to the user.

Clients that do not register features are assumed to be dumb and won’t be offered firmware that has a hard requirement on showing a post-install “you need to restart the hardware manually” image and caption. The three client features you can currently register are can-report, detach-action, and the recently added update-action. See this commit for more details about what each feature actually means.

If you’re using libfwupd then it’s a simple call to fwupd_client_set_feature_flags(); otherwise you’ll have to call SetFeatureFlags() on the main D-Bus interface before requesting the list of updates. Simple!

June 28, 2020

scikit-survival 0.13 Released

Today, I released version 0.13.0 of scikit-survival. Most notably, this release adds sksurv.metrics.brier_score and sksurv.metrics.integrated_brier_score, an updated PEP 517/518 compatible build system, and support for scikit-learn 0.23.

For a full list of changes in scikit-survival 0.13.0, please see the release notes.

Pre-built conda packages are available for Linux, macOS, and Windows via

 conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source following these instructions.

The time-dependent Brier score

The time-dependent Brier score is an extension of the mean squared error to right censored data:

$$ \mathrm{BS}^c(t) = \frac{1}{n} \sum_{i=1}^n I(y_i \leq t \land \delta_i = 1) \frac{(0 - \hat{\pi}(t | \mathbf{x}_i))^2}{\hat{G}(y_i)} + I(y_i > t) \frac{(1 - \hat{\pi}(t | \mathbf{x}_i))^2}{\hat{G}(t)} , $$

where $\hat{\pi}(t | \mathbf{x})$ is a model’s predicted probability of remaining event-free up to time point $t$ for feature vector $\mathbf{x}$, and $1/\hat{G}(t)$ is an inverse probability of censoring weight.
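To make the formula concrete, here is a minimal pure-NumPy sketch of $\mathrm{BS}^c(t)$ (the function name and arguments are my own; sksurv.metrics.brier_score is the proper implementation, which also estimates $\hat{G}$ for you):

```python
import numpy as np

def ipcw_brier_score(event, time, surv_prob, t, censor_surv):
    """Time-dependent Brier score at time t with IPCW weights (sketch).

    event       : boolean array, True where the event was observed
    time        : observed times y_i
    surv_prob   : predicted pi(t | x_i) = P(T > t | x_i) for each sample
    censor_surv : callable returning G(.), an estimate of the censoring
                  distribution's survival function (e.g. Kaplan-Meier)
    """
    event = np.asarray(event, dtype=bool)
    time = np.asarray(time, dtype=float)
    surv_prob = np.asarray(surv_prob, dtype=float)
    contrib = np.zeros_like(surv_prob)
    # Samples with an observed event before t: (0 - pi)^2 / G(y_i)
    had_event = (time <= t) & event
    contrib[had_event] = surv_prob[had_event] ** 2 / censor_surv(time[had_event])
    # Samples still event-free at t: (1 - pi)^2 / G(t)
    at_risk = time > t
    contrib[at_risk] = (1.0 - surv_prob[at_risk]) ** 2 / censor_surv(t)
    # Samples censored before t contribute nothing
    return contrib.mean()
```

Note that without censoring $\hat{G} \equiv 1$, and the score reduces to the ordinary mean squared error between the predicted survival probabilities and the event-free indicator.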

The Brier score is often used to assess calibration. If a model predicts a 10% risk of experiencing an event at time $t$, the observed frequency in the data should match this percentage for a well calibrated model. In addition, the Brier score is also a measure of discrimination: whether a model is able to predict risk scores that allow us to correctly determine the order of events. The concordance index is probably the most common measure of discrimination. However, the concordance index disregards the actual values of predicted risk scores – it is a ranking metric – and is unable to tell us anything about calibration.

Let’s consider an example based on data from the German Breast Cancer Study Group 2.

from sksurv.datasets import load_gbsg2
from sksurv.preprocessing import encode_categorical
from sklearn.model_selection import train_test_split
X, y = load_gbsg2()
X = encode_categorical(X)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y["cens"], random_state=1)

We want to train a model on the training data and assess its discrimination and calibration on the test data. Here, we consider a Random Survival Forest and Cox’s proportional hazards model with elastic-net penalty.

from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxnetSurvivalAnalysis
rsf = RandomSurvivalForest(max_depth=2, random_state=1)
rsf.fit(X_train, y_train)
cph = CoxnetSurvivalAnalysis(l1_ratio=0.99, fit_baseline_model=True)
cph.fit(X_train, y_train)

First, let’s start with discrimination as measured by the concordance index.

rsf_c = rsf.score(X_test, y_test)
cph_c = cph.score(X_test, y_test)

The result indicates that both models perform equally well, achieving a concordance index of 0.688, which is significantly better than the 0.5 of a random model. Unfortunately, it doesn’t help us decide which model to choose. So let’s consider the time-dependent Brier score as an alternative, which assesses both discrimination and calibration.

We first need to determine the time points $t$ at which we want to compute the Brier score. We are going to use a data-driven approach here, selecting all time points between the 10th and 90th percentiles of the observed times.

import numpy as np
lower, upper = np.percentile(y["time"], [10, 90])
times = np.arange(lower, upper + 1)

This returns 1690 time points, for which we need to estimate the probability of survival, given by the survival function. Thus, we iterate over the predicted survival functions on the test data and evaluate each at the time points from above.

rsf_surv_prob = np.row_stack([
    fn(times)
    for fn in rsf.predict_survival_function(X_test, return_array=False)
])
cph_surv_prob = np.row_stack([
    fn(times)
    for fn in cph.predict_survival_function(X_test)
])

Note that calling predict_survival_function for RandomSurvivalForest with return_array=False requires scikit-survival 0.13.

In addition, we want to have a baseline to tell us how much better our models are from random. A random model would simply predict 0.5 every time.

random_surv_prob = 0.5 * np.ones((y_test.shape[0], times.shape[0]))

Another useful reference is the Kaplan-Meier estimator, which does not consider any features: it estimates a survival function only from y_test. We replicate this estimate for all samples in the test data.

from sksurv.functions import StepFunction
from sksurv.nonparametric import kaplan_meier_estimator
km_func = StepFunction(*kaplan_meier_estimator(y_test["cens"], y_test["time"]))
km_surv_prob = np.tile(km_func(times), (y_test.shape[0], 1))

Instead of comparing calibration across all 1690 time points, we’ll be using the integrated Brier score (IBS) over all time points, which will give us a single number to compare the models by.
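Conceptually, the IBS is just the Brier score curve integrated over the evaluation interval and normalized by its length. A minimal sketch of that idea (the function name is mine; sksurv.metrics.integrated_brier_score is the real implementation, which computes the Brier scores itself):

```python
import numpy as np

def integrate_brier(times, brier_scores):
    # Integrate BS(t) over [times[0], times[-1]] with the trapezoidal rule,
    # then normalize by the interval length to get a single summary number.
    return np.trapz(brier_scores, times) / (times[-1] - times[0])
```

For instance, a model whose Brier score is a constant 0.25 at every time point has an IBS of exactly 0.25.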

from sksurv.metrics import integrated_brier_score
random_brier = integrated_brier_score(y, y_test, random_surv_prob, times)
km_brier = integrated_brier_score(y, y_test, km_surv_prob, times)
rsf_brier = integrated_brier_score(y, y_test, rsf_surv_prob, times)
cph_brier = integrated_brier_score(y, y_test, cph_surv_prob, times)

The results are summarized in the table below:

              RSF    Coxnet  Random  Kaplan-Meier
c-index       0.688  0.688   0.500   n/a
IBS           0.194  0.188   0.247   0.217

Despite Random Survival Forest and Cox’s proportional hazards model performing equally well in terms of discrimination, there seems to be a notable difference in terms of calibration, with Cox’s proportional hazards model outperforming Random Survival Forest.

As a final note, I want to clarify that the Brier score is only applicable for models that are able to estimate a survival function. Hence, it currently cannot be used with Survival Support Vector Machines.

June 26, 2020

The First Milestone

Hello everyone!

I'm working on the notifications panel revamp with GNOME this year. It's been four weeks since GSoC started, and I'd like to share my progress so far:

  • A new layout for the notification bubble;

  • A new layout for the MPRIS indicator (the bubble that holds the media controls);

  • A new layout for the weather section;

Let's get started.

The first strategy

At the beginning of the project, I felt like I needed to start from an "easy" part, in order to become more confident until I finally get to the point where I can completely rewrite some pieces of code.

The layouts seemed like a good starting point, as they would be a nice and slow way to get started by refactoring some existing code, without touching the 'core' yet.

I can say my general knowledge of the codebase (and my confidence, too) was growing fast, week by week. In the first week, for example, I remember struggling to create a simple button on the screen, because I was confused about how to add actors and how to display them.

I've spent a lot of time reading code, reading the developers' documentation, and also asking Florian a lot of questions.

I couldn't be more satisfied with my approach, and you'll see the results, which came from coding and (most of the time) from reading a lot of code.

With that said, let's get to the new implementations.


The starting point

The first piece of code I decided to work on was the weather section.

It consisted of a grid layout with the sections attached to it. I've updated the maximum number of forecasts that should be displayed, updated the widget that holds the icon and the temperature, and added a new widget for the forecast summary.

Also, the city name is now followed by its country on the label.

The weather section.

Last but not least, the calendar now displays the weather section above the world clocks section. (Check Tobias' mockup.)

Notifications layouts

Next, I was going to revamp the layout of the bubble notifications, which now display the app icon in the upper left corner, followed by the name of the app, and then the notification content itself: title, body, and an optional image, depending on the app. The previous code had a Message class, which was responsible for implementing the UI of the notifications without distinction of roles (Notification or MPRIS).

A little class diagram representing notifications architecture

I was facing a challenge because, in the new design for the notifications, almost nothing was shared between them anymore.

Talking to Florian, it became clear that this whole code needed to be split, so the corresponding UI for each notification type would be implemented in its subclass.

My second commit moves the notification UI and some methods to the subclasses that inherit from Message.

I started by mapping the methods used by both the MediaMessage and the NotificationMessage classes to decide what should be kept in their base class, and then I repeated the process to decide what should be moved to each subclass.

After that, I recreated the UI in each subclass, and as I was implementing them I realized that, with a reduced base class, it was now much easier to create the layouts without worrying about whether the UI would fit the different types of notifications.

Example of a new regular notification

Showing app icons

My fifth commit updates the getIcon() method on the subclasses of Source, so that we can always get the app icon to display in the top left corner of a regular notification. To display the secondary icon of a regular notification, I created a new property called _icon, which holds the icon the notification sends us and is displayed in the bubble if it's different from the source icon. The MediaMessage class didn't need any changes, because in its case the icon becomes the album cover.

Next steps

Now that I have a good idea of how notifications work on the Shell, I'm starting the grouping part, which I think will be the most challenging and exciting coding phase.

Thanks for reading!

June 24, 2020

Making my doorbell work

I recently moved house, and the new building has a Doorbird to act as a doorbell and open the entrance gate for people. There's a documented local control API (no cloud dependency!) and a Home Assistant integration, so this seemed pretty straightforward.

Unfortunately not. The Doorbird is on a separate network that's shared across the building, provided by Monkeybrains. We're also a Monkeybrains customer, so our network connection is plugged into the same router and antenna as the Doorbird's. And, as is common, there's port isolation between the networks in order to avoid leakage of information between customers. Rather perversely, we are the only people with an internet connection who are unable to ping my doorbell.

I spent most of the past few weeks digging myself out from under a pile of boxes, but we'd finally reached the point where spending some time figuring out a solution to this seemed reasonable. I spent a while playing with port forwarding, but that wasn't ideal - the only server I run is in the UK, and having packets round trip almost 11,000 miles so I could speak to something a few metres away seemed like a bad plan. Then I tried tethering an old Android device with a data-only SIM, which worked fine but only in one direction (I could see what the doorbell could see, but I couldn't get notifications that someone had pushed a button, which was kind of the point here).

So I went with the obvious solution - I added a wifi access point to the doorbell network, and my home automation machine now exists on two networks simultaneously (nmcli device modify wlan0 ipv4.never-default true is the magic for "ignore the gateway that the DHCP server gives you" if you want to avoid this), and I could now do link local service discovery to find the doorbell if it changed addresses after a power cut or anything. And then, like magic, everything worked - I got notifications from the doorbell when someone hit our button.

But knowing that an event occurred without actually doing something in response seems fairly unhelpful. I have a bunch of Chromecast targets around the house (a mixture of Google Home devices and Chromecast Audios), so just pushing a message to them seemed like the easiest approach. Home Assistant has a text to speech integration that can call out to various services to turn some text into a sample, and then push that to a media player on the local network. You can group multiple Chromecast audio sinks into a group that then presents as a separate device on the network, so I could then write an automation to push audio to the speaker group in response to the button being pressed.

That's nice, but it'd also be nice to do something in response. The Doorbird exposes API control of the gate latch, and Home Assistant exposes that as a switch. I'm using Home Assistant's Google Assistant integration to expose devices Home Assistant knows about to voice control. Which means when I get a house-wide notification that someone's at the door I can just ask Google to open the door for them.

So. Someone pushes the doorbell. That sends a signal to a machine that's bridged onto that network via an access point. That machine then sends a protobuf command to speakers on a separate network, asking them to stream a sample it's providing. Those speakers call back to that machine, grab the sample and play it. At this point, multiple speakers in the house say "Someone is at the door". I then say "Hey Google, activate the front gate" - the device I'm closest to picks this up and sends it to Google, where something turns my speech back into text. It then looks at my home structure data and realises that the "Front Gate" device is associated with my Home Assistant integration. It then calls out to the home automation machine that received the notification in the first place, asking it to trigger the front gate relay. That device calls out to the Doorbird and asks it to open the gate. And now I have functionality equivalent to a doorbell that completes a circuit and rings a bell inside my home, and a button inside my home that completes a circuit and opens the gate, except it involves two networks inside my building, callouts to the cloud, at least 7 devices inside my home that are running Linux and I really don't want to know how many computational cycles.

The future is wonderful.

(I work for Google. I do not work on any of the products described in this post. Please god do not ask me how to integrate your IoT into any of this)

comment count unavailable comments

June 21, 2020

Back On Track

In this blog post, I would like to give an overview of what has been done in the last couple of weeks, and the plan for the next few. You can see my project on the GSoC GNOME projects page (https://summerofcode.withgoogle.com/projects/#6096970302619648), and the project issue on GitLab (https://gitlab.gnome.org/GNOME/gitg/-/issues/270).

During the Community Bonding Period, I met with Alberto (my mentor) on Hangouts. We got to know each other better, and he gave me a task to get me more familiar with libgit2 and its wrapper used in Vala and in gitg, libgit2-glib. While implementing the task, I got more comfortable with the GTK development workflow, and I read more about the Meson build system and how to build a GTK application with it.

The task was to create a tool that compares two commits using libgit2-glib and shows the result to the user in a TextView widget. The user enters the SHAs of the two commits, and we use them to compute the difference between the commits and show the result. Here is the link to the project.

I had to read more about the libgit2 library to understand better how it works. There are some useful examples on the website here; I also read more about the different data structures used to store the “Commits” and the “Diff” in each repository.

I had some difficulties during the implementation, since I didn’t really know how delegate methods work, so I had to read more about them to understand what they are and why they’re used.

Right now, I’m exploring different ways of implementing the UI so that only two commits at a time can be selected for comparison. So far, I’ve found two approaches. The first is to provide a selection function, using the set_select_function method of the Gtk.TreeSelection class, which is just a way to have more control over the selection of nodes.

The second approach is to override button_press_event, using the same concept as the first approach, but with even more control over what to select and what not to, since we intercept the selection before the click happens and call the select method ourselves. I’m still not quite sure what the best practices in GTK are, or what the right way of doing things is, especially in this particular case, so I won’t integrate either solution until I get feedback from my mentors.
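To illustrate the first approach, here is a sketch of the predicate such a selection function could use, written in Python for brevity (gitg itself is Vala, and the function name and limit parameter here are hypothetical). Gtk.TreeSelection calls the select function before a row's selection state is toggled, and the boolean return value permits or vetoes the toggle:

```python
def allow_toggle(n_selected, is_currently_selected, limit=2):
    # Called before Gtk.TreeSelection toggles a row's selection state.
    # Deselecting is always fine; selecting is only allowed while fewer
    # than `limit` rows are currently selected.
    if is_currently_selected:
        return True
    return n_selected < limit
```

This would then be wired up with something along the lines of selection.set_select_function(...), passing the selection's current count and the row's current state through to this predicate.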

I’ve also mentioned Allan Day in the project issue, so that he can give us his insights on how the UI should look when selecting different commits. I’m looking forward to hearing his opinion.

That being said, I’m back on track again, and I’ll be working on the UI parts of my project until I take my finals in July.

Wish me good luck.

June 20, 2020

Tracker in Summer

Lots of effort is going into Tracker at the moment. I was waiting for a convenient time to blog about it all, but there isn’t a convenient moment on a project like this, just lots of interesting tasks all blocked on different things.


App porting

With the API changes mostly nailed down, our focus moved to making initial Tracker 3 ports of the libraries and apps that use Tracker. This is a crucial step to prove that the new design works as we expect, and has helped us to find and fix loads of rough edges. We want to work with the maintainers of each app to finish off these ports.

If you want to help, or just follow along with the app porting, the process is being tracked in this GNOME Initiatives issue.

The biggest success story so far is GNOME Music. The maintainers Jean and Marinus are regular collaborators in #tracker and in our video meetings, and we’ve already got a (mostly) working port to Tracker 3. You can download a Flatpak build from that merge request, but note that it requires tracker-miners 3.0 installed on your host.

We’re hoping we can work around the host dependency in most cases, but I got excited and made unofficial Fedora packages of Tracker 3 which allowed me to try it out on my laptop.

We are also happy that GTK can already be built against Tracker 3, and excited about the work in progress on Rygel. At the time of writing, the other apps with Tracker 3 work in progress are Boxes, Files, Notes, Photos, and Videos. Some of these use the new tracker3 Grilo plugin, which we hope a Grilo maintainer will be able to review and merge soon. All help with finishing these branches and the remaining apps will be very welcome.

Release strategy

We have been putting thought into how to release Tracker 3. We need collaboration on two sides: from app maintainers who we need to volunteer their time and energy to review, test and merge the Tracker 3 changes in their apps, and from distros who we need to volunteer their time to package the new version and release it.

We have some tricky puzzles to solve, the main one being how an app might switch to Tracker 3 without breaking on Ubuntu 20.04 and other distros that are unlikely to include Tracker 3, but are likely to host the latest Flatpak apps.

We are hoping to find a path forward that satisfies everyone, again, you can follow the discussion in Initiative issue #17.

As you can see, we are volunteering a lot of our time at the moment to make sure this complicated project is a success.

Data exporting

We made it more convenient to export data from Tracker databases, with the tracker export command. It’s nice to have a quick way to see exactly what is stored there. This feature will also be crucial for exporting app data such as photo albums and starred files from the centralized Tracker 2 database.

Hardware testing with umockdev

The removable device support in Tracker goes largely untested, because you need to actually plug and unplug a real USB drive to exercise it. As always, for a volunteer-driven project like Tracker it’s vital that testing the code is as easy as possible.

I recently discovered umockdev and decided to give it a spin. I started with the power management code because it’s super simple: on a low-battery notification, we stop the indexer. I’m happy with the test code, but unfortunately it fails on GNOME’s CI runners with an error from umockdev:

sendmsg_one: cannot connect to client's event socket: Permission denied

I’m not sure when I’ll be motivated to dig into why this fails, since the problem only reproduces on the CI runners, so if anyone has a pointer on what’s wrong then please comment on the MR.

GUADEC

Due to the COVID-19 pandemic, GUADEC will be an online event but Tracker will be covered in two talks, “Tracker: The Future is Present” on the Friday, and my talk “Move Fast and Break Things” on Thursday.

The pandemic also means I’m likely to be spending the whole summer here in Galicia which can hardly be seen as bad luck. Here’s a photo of a beautiful spot I discovered recently about 30km from where I live:

Next steps

Carlos is working on some final API tweaks before we make another Tracker 2.99 beta release, after which the API should be fully stable. The Flatpak portal is also nearly ready.

We hope to see progress with app ports. This depends more and more on when app developers can volunteer their time to collaborate with us. Progress in the next few weeks will decide whether we target GNOME 3.38 (September 2020) or GNOME 3.40 (March 2021) for switching everything over to Tracker 3.

Unlike GTK 4, I can’t show any cool screenshots. I do have some ideas about how to demonstrate the improvements over Tracker 2, however … watch this space!

As always, we are available on IRC/Matrix in #tracker and you are welcome to join our online meetings.

June 19, 2020

Friends of GNOME Update June 2020

Welcome to the Friends of GNOME Update

A photo of ten people on a rooftop. Some have their arms crossed. They look Very Serious.
“Group picture (testing)” by mariosp is licensed under CC BY-SA 2.0

A Victory for Open Source!

We are so, so excited to share the settlement in the legal case brought by Rothschild Patent Imaging against the GNOME Foundation, ten months after Rothschild Patent Imaging first alleged that GNOME was in violation of one of their patents. In the settlement, Rothschild dropped all charges. Additionally, their patent portfolio is now available to any project using an Open Source Initiative approved license.

You can read an interview between Executive Director Neil McGovern and OpenUK’s Amanda Brock about the case.

GNOME on the Road

The Pan African GNOME Summit might have been postponed, but the organizers are hard at work making community meetings happen. At the first meetup, Neil, Program Coordinator Kristi Progri, and GNOME contributor Sriram Ramkrishna presented on various topics. Melissa Wu, organizer of the Community Engagement Challenge, joined for the second.

GUADEC 2020

The GUADEC 2020 schedule is in place, we have some amazingly generous sponsors, and registration is open!

Why register for a remote, free, online conference? Registering for GUADEC 2020 helps the GUADEC team and the Foundation. By understanding who is attending, where you are coming from, and what your needs are, we are able to plan better conferences in the future. Please consider registering today.

Community Engagement Challenge Updates

The deadline for the Community Engagement Challenge is coming up on July 1. Don’t forget to submit your ideas on how we can bring new contributors into free and open source software.

For the Challenge, we’ve recruited four amazing judges: Gina Likins, Manuel Haro Márquez, Murray Saunders, and Allison Randal. They represent a wide range of experience across free software, education, and community and technical excellence.

We Finished the Annual Report!

We published our annual report! Check it out if you want to know what the Foundation accomplished in 2019 and highlights from community successes.

We Had a Fundraiser!

Thank you thank you thank you to everyone who supported the Spring fundraiser. For it, we asked people to think of their donations as votes for where we should focus efforts in the upcoming months. We had two “buckets”: WebKitGTK development for GTK4, and supporting the building of a stronger GNOME community in Africa. I’d also like to thank Caroline, Emmanuele, and Regina Nkemchor Adejo for their help.

GTK (and Accessibility) Updates

Core GTK Developer Emmanuele Bassi has, as always, been working hard on pushing forward GTK development. In addition to working on vital infrastructure like technical documentation, Emmanuele wrote an outline for upcoming accessibility rework.

Flathub (In China)

Flathub uses a Content Delivery Network (CDN) that does not work in China. Our SysAdmin team noticed this and went on a quest to find a way to bring Flathub to China. We are now using Oracle Cloud to deliver service to China.

Welcome to the New Board!

Just days ago, GNOME Foundation members voted in the Foundation’s annual Board of Directors elections. We’re excited to welcome (and welcome back) Regina Nkemchor Adejo, Robert McQueen, Felipe Borges, and Ekaterina Gerasimova. This will be Regina’s first term on the Board.

Thank you to our departing Board members! Running a foundation is hard work, and we appreciate their volunteer efforts to set vision, direct the Foundation’s activities, make decisions on finances, and go to a lot of important meetings.

GNOME Stands with Black Lives Matter

Earlier in June, Neil published a statement in solidarity with Black Lives Matter. Personally, I am proud to be a member and employee of an organization that understands our role in fighting racism, and how we as a free software community can do better and need to.

From the Community

Thank you!

Thank you for everything you do for GNOME! Whether you are a Friend of GNOME, Foundation member, donor, contributor, or enthusiast, we wouldn’t be here without you!