August 01, 2020

A minimal jhbuild GNOME session in Debian

I recently set up a GNOME development environment (after about seven years!). That meant starting from scratch, since my old notes and scripts were completely useless.

My goal for this setup was once again to have the bare minimum jhbuild modules on top of a solid base system provided by my distro. The Linux desktop stack has changed a bit, especially around activation, D-Bus, and systemd, so I was a bit lost on how to do things properly.

Luckily around the time I was trying to figure this out, I ran into Florian Müllner’s excellent post on how to work on shell hacking on a Silverblue system.

After removing the container-related bits, I was able to get a reliable jhbuild session integrated into my system.

Here is how to run a development GNOME session, fully integrated into your system.

Register a new session

First, you need to tell GDM about your new session. I was able to do that by creating /usr/share/wayland-sessions/jhbuild.desktop:

[Desktop Entry]
Name=jhbuild GNOME
Comment=This session logs you into a development GNOME
# The Exec line was missing from my notes; it should point at the wrapper
# script created in the next section
Exec=/usr/local/bin/gnome-session-jhbuild
Type=Application

Other distros might have slightly different paths. Check the installed files of your GDM package.

Create a wrapper gnome-session-jhbuild

The above jhbuild.desktop won’t do anything unless you create an executable that starts the session. Most of the script is actually moving some systemd/dbus plumbing around so the right jhbuild services get started.

Put this in /usr/local/bin/gnome-session-jhbuild:

#!/bin/sh
# All credit to fmuellner!
set -x

jhbuild() {
  /home/diegoe/.local/bin/jhbuild "$@"
}

export $(jhbuild run env | grep JHBUILD_PREFIX= | sed 's:\r::')

# XDG defaults assumed for these two directories; adjust if your system differs
USER_UNIT_DIR="$HOME/.local/share/systemd/user"
DBUS_SERVICE_DIR="$HOME/.local/share/dbus-1"

if [ ! -e "$USER_UNIT_DIR" ]; then
  # Pick up systemd units defined in jhbuild
  ln -s "$JHBUILD_PREFIX/lib/systemd/user" "$USER_UNIT_DIR"
  systemctl --user daemon-reload
fi

if [ ! -e "$DBUS_SERVICE_DIR" ]; then
  ln -s "$JHBUILD_PREFIX/share/dbus-1" "$DBUS_SERVICE_DIR"
fi

# $PATH and a few other things are likely overridden by your distro in your
# ~/.profile file (see /etc/skel/.profile) -- this might cause .desktop files
# to be seen as "broken" (because TryExec/Exec can't confirm there's a valid
# binary to use)
jhbuild run gnome-session

systemctl --user daemon-reload

Don’t forget to mark the gnome-session-jhbuild script as executable!

A minimal jhbuild module list

I can currently get a stable session running with the following modules built:

$ jhbuild buildone adwaita-icon-theme glib dconf glib-networking gvfs gtk+ gtksourceview glade gnome-session gnome-desktop gnome-settings-daemon gnome-control-center gjs mutter gnome-shell

You might need to buildone a few others to satisfy dependencies, but most of the time it’s better to try and fill the missing pieces with the distro -dev packages.

If for some reason your new session doesn’t start and you get stuck unable to log into any session, this is likely because of stuck services or processes. A reboot will clean things up, but make sure you change back to your system’s default session, or you’ll keep trying to log into the broken one!

A word on $PATH and installed apps

I ran into a hair-pulling issue where I couldn’t get any of my jhbuild-installed apps to appear in the overview unless I had the equivalent app installed on the base system. $PATH was being lost as soon as the gnome-session binary started.

Well, it turns out the answer is rather obvious. gnome-session starts a login shell, which means your system takes charge and sets a few things for you; among other things, the $PATH visible in calls to jhbuild run or jhbuild shell is dropped.

The solution in my system was to add the following to my ~/.profile file:

# add jhbuild PATH
if [ -n "$UNDER_JHBUILD" ]; then
  # Body reconstructed: prepend the jhbuild prefix (adjust to your setup)
  PATH="$JHBUILD_PREFIX/bin:$PATH"
fi

The $UNDER_JHBUILD var is set whenever you are running inside jhbuild so you can use it to do other fun things in your shell, like adding an emoji to your prompt:

To get something like the above prompt you can add this to your ~/.bashrc:

if [ -n "$UNDER_JHBUILD" ]; then
  PS1="🛠 $PS1"  # the original emoji was lost in encoding; use any you like
fi

libhandy: project update

GSoC 2020 progress

Work so far

Since the last update, we have made a lot of progress toward a significant milestone: handling multiple rows in our widget. For me, working through this implementation involved understanding the GtkGrid implementation, then building on that idea to add the adaptive factor to our brand-new widget.

One issue that has been lingering for a while is finding a way to accept column weights through XML layouts.

The issue persists in the latest code, but for the time being this is our workaround: every child widget has a weight property (which defaults to 0), and a column’s weight is derived from the widgets belonging to that column.
So if widgets belonging to the same column have different weights defined in XML (or assigned programmatically), it’s unpredictable what weight the column will end up having. Care must therefore be taken that all widgets in the same column share the same weight.

That does not sound good, but thankfully Adrien recently suggested keeping a property that accepts comma-separated values. We will implement this in the coming days, which will remove the unpredictable-weight issue of our current approach (yay!).

HdyGrid demo in action

Though the demo looks the same as the one in the previous post, the internals have changed: a single HdyGrid object now supports multiple rows, whereas the previous post used two HdyGrid objects to handle two different rows. You may take a look at the demo XML here and MR !530.

What’s next?

The current state of MR !530 (with updated code) is under review. Once the core logic is finalised in this review, we will proceed to add more properties and polish the widget as a whole.

I will also devote some time to adding a better demo to the handy-demo application that showcases the feature properly.

I also have an idea of adding transitions to the widgets when repositioning happens, but its feasibility and implementation are yet to be discussed with the community, and only then will it be final.

About GUADEC-2020 :D

First of all, this was my first GUADEC, so it was a completely new experience for me. Second, I really appreciated the variety of sessions that were included. Hats off to the entire organising team for their hard work 🙌🙌.

The session “How can I make my project more environmentally friendly?” is what intrigued me the most (follow the link to get the presentation materials of the session). It was exciting to learn some of the details of how the analysis is done. Stretching environmental responsibility to this level is something I had never stumbled upon before. It was the best session, IMO.

Also, each of the interns got a chance to speak at GUADEC in the “Intern Lightning Talks” session, talking about themselves and their projects :).

The Second milestone

We've reached the end of the second coding period.

Time is flying!

In this post, I'm going to tell you about my progress on grouping notifications.

Grouping notifications

While discussing the best approach with my mentor, I found out that the notifications were already grouped at the code level, but these groups were not represented in the UI.

In the code, there's a class named Source, which is responsible for a group. It stores information about the app that sent us notifications.

There's also a class named Notification, which represents a single notification, with a title, a banner, and optional parameters such as playing sounds.

Each Source has an array property that contains its notification objects, which gives us the groups.

We needed a way to display these groups in the UI. Until now, notification messages (a message is the representation of a notification: the bubble we have in the UI) were created and displayed in the Shell by iterating over the existing sources, and we had something like:

  • NotificationSection

    • Extends the base MessageListSection class, which is responsible for the list of all existing notification messages and has methods to add, remove, and move messages in the list;

    • Creates the messages through a loop on all the sources available;

  • NotificationMessage

The NotificationSection was responsible for creating the NotificationMessages displayed in the calendar. There was no distinction between sources: as notifications came in, they were displayed and stacked into the calendar list.

The process of building a notification message could be represented in a simplistic way as:

Simple notification creation flow.

To create the grouping representation, we had to introduce a new abstraction responsible for a single source, which would then be responsible for creating the messages. In other words, we introduced a new level in the notification-handling class hierarchy.

The new SourceSection class represents this new level between NotificationMessage and NotificationSection. As its name says, it represents the group, handling its own notifications instead of depending on the generic listing class to create the messages.

The SourceSection also extends the MessageListSection because we also need to list the groups somehow, and with this approach, we could reuse the existing code.

The layout manager

Now that we have the groups represented at the code level, we need to display them as groups in the UI as well.

In the mockups, there are two possible states for a group:

  • Collapsed: only the most recent notification is displayed if there's more than one in the stack.

  • Expanded: all notifications for a single app are shown in a list style, and the bubbles can expand to display actions, if available for that message.

To achieve something like this, we had to create a custom layout manager, that would be responsible for these two distinct behaviors.

This layout manager, called SourceMessageLayout, extends Clutter's BoxLayout class, which MessageListSection uses to create the list. We can replace the layout manager of the SourceSection class alone, without affecting the inherited methods at all.

Current state

Curious to see how these solutions fit together?

Enough talking, here we go!

Current state of the grouping layout.

Please keep in mind that this is still work in progress, and not the final solution yet.

Next steps

It's so satisfying to see the project growing and the pieces fitting!

However, there's a lot to improve in the solution, and these are my next steps to finish the project:

  • Add animations for collapsing/expanding;

  • Blur the sections around the expanded group;

  • Add actions to the notifications messages;

  • Handle the edge cases;


Lastly, I'd like to talk about GUADEC, which this year was completely remote.

This was my first talk at a conference, and in a language I'm not a native speaker of. I want to thank my mentor and the GNOME community for creating a comfortable environment for the interns to talk about their projects.

Thanks for reading!

July 31, 2020

Implementing Recently Played Collection in GNOME Games

In my previous blog post, I talked about how I added a Favorites Collection to Games. The Favorites Collection lists all the games that are marked as favorite. In this post, I’ll talk about what went into adding a Recently Played Collection, which helps you get to recently played games more quickly.

Since most of the groundwork for supporting non-user collections was already done as part of introducing the Favorites Collection, it required much less work to add another non-user collection. For the Recently Played Collection, the main differences from the Favorites Collection in terms of implementation are:

  • An option to add the game into Recently Played is not required, since the app knows which games were played at what time
  • Games in Recently Played collection should be sorted by the last played date-time, instead of name

Implementing Recently Played Collection

Getting into the details, the first thing to do was to introduce a last_played prop on the Game object. It stores a DateTime of when the game was last played and is updated every time the game is played. If the game was never played, it stays null.

The next thing was to modify the database to support this new prop. For that, I added a new text column that stores the “stringified” last_played to the games table, using the simple migration system I talked about in the last post. I also added queries to fetch recently played games so they could be loaded, and queries to update the last_played column, and that was it for the database.

I then introduced a RecentlyPlayedCollection, which implements the Collection interface. Its load function fetches the list of recently played games (those with a non-null last_played column) and adds them to its GameModel as the app discovers those games on disk. The collection is also added to the CollectionModel by the CollectionManager, so that it's shown in the Collections Page along with the Favorites Collection.

The Recently Played Collection is pretty much useless unless it's sorted by the last_played prop, and games in the GameModel were sorted by name. So I implemented sort support in GameModel through a new sort_type prop. It stores a custom SortType enum, which currently contains BY_NAME and BY_LAST_PLAYED. When the sort_type prop is set, the compare functions used by the GameModel's Sequence are swapped according to the new sort type, and the Sequence is sorted again. This sort support can also be used to sort games in any FlowBox bound to a GameModel object.

With that, most of the collection part of the Recently Played Collection was done. The next thing was simply to update last_played whenever a game is played. For better compatibility with a future where multiple games can be played simultaneously, it was decided that last_played should be updated when the user quits a game. And so, we now have a functional Recently Played Collection.

Apart from that, it was decided that empty non-user collections should be hidden, especially because the collection thumbnails of empty collections get boring quickly. So a filter was added to the CollectionsMainPage to show only non-empty collections. Since right now only non-user collections exist, the Collections Page could look boring when both collections are empty, but that is intended to change when user collections arrive soon. This is where my work is at right now; you can see the relevant MR here. And here are some pictures of the changes:

Collections Page

Collections Page with the new Recently Played Collection

Collections Sub-Page

Inside the Recently Played Collection


Last week I got to experience my first GUADEC, an awesome annual event conducted by GNOME where users, contributors, and developers meet to discuss all things GNOME, Linux, and pretty much anything. I, along with other interns from GSoC and outreach programs, had the opportunity to give an “Intern Lightning Talk”, where we talked about our progress and experience at GNOME. It was a very fun experience, and I was also very excited to hear about the latest bleeding-edge stuff being worked on.


The work is going great, and GUADEC was great too. Thanks to my mentor, Alexander Mikhaylenko, for all the help, and cheers to all the folks at GNOME for GUADEC. See you next time!

GSoC Progress Update

In my last blog post, I explained how selection mode was implemented in Games. That was one of the first steps to support Collections in Games, as an efficient way to select games to add/remove from collection is crucial for managing collections. In this post I’ll be talking about how “Favorites Collection” will be implemented in GNOME Games.


The first thing to do was to introduce a Collection interface to define the behavior that all types of collections must follow. All collections must have an ID and a title. Apart from that, all collections must provide a way to add and remove games, and on adding or removing a game, emit a “game added” or “game removed” signal respectively. A collection must also implement load(), which, when called, loads the games belonging to the collection from the database; since there are going to be different types of collections, how each is loaded may differ.

Every collection has its own GameModel and must implement get_game_model(). A GameModel is a ListModel that stores the list of games in a collection, and get_game_model() returns the collection's GameModel, which can be bound to the flowbox of a GamesPage (a widget where games are displayed with thumbnail and title).

Other than these, all collections must also implement on_game_added(), on_game_removed(), and on_game_replaced(). These are unrelated to games being added to or removed from a collection; they have to do with games being discovered, and with games no longer being available to the app. When a game is discovered by Tracker or a cached game is loaded, it is added to a games hash table. This emits a game_added signal (unrelated to a collection's game_added), which every collection listens to; if the added game belongs to the collection, the collection adds it. Similarly, on_game_removed() and on_game_replaced() handle games that were cached but are no longer found by the app, and games that have been renamed, moved to a different directory, or are still the same cached game but with a different UID, etc.

With the general behavior of a collection defined, it was time to introduce a FavoritesCollection which implements Collection.

Favorites Collection

As is obvious from the name, the “Favorites Collection” is a collection that stores games the user marks as favorite. Favorite games are marked with a star icon on the thumbnail. A game can be added to favorites from the main Games Page or from the Platforms Page, and removed from favorites from those pages as well as from the Favorites Collection Page. Games are added to or removed from favorites “automagically” depending on the list of currently selected games: if all of the selected games are favorites, clicking the favorite action in the action bar removes them from favorites; if none are, they are all added; if the selection is a mix of favorite and non-favorite games, the non-favorite games are added to favorites. The icon of the favorite action button in the action bar dynamically changes from starred to semi-starred to non-starred depending on the selected games, which, along with tooltips, helps users know what the button will do.

Collections: Behind The Scenes


So once a FavoritesCollection was ready to go, I needed to work out how it would be stored in the database and how those collections should be loaded.

Favorite games are stored in the database via a new is_favorite column in the games table. The games table stores all games, including “manually added” games and games found by Tracker, so adding or removing any type of game from favorites is a matter of updating the is_favorite column.

However, I can’t just add a new column to the games table creation query. To migrate from the old database with no is_favorite column to the new one, I have to give some extra commands to the database. This shouldn’t always happen, but should depend on the version of the database. As of now there isn’t a database versioning system in Games, so a simple database migration support was quickly implemented.

This migration leverages a .version file in the data directory, which was previously used to migrate from the old data directory structure to a newer one. However, the file is empty, and the data directory migration (not the database migration) only checks whether the .version file exists. That is fine for a one-time migration, but having versions in the .version file might come in handy later on, so I configured the database migration to write the version number into .version so it can be used later. No .version file is treated as version 0, an empty .version as version 1, and from version 2 onward, the .version file contains the version number. In this scheme, the database migration to support favorite games is applied for all versions less than 2. Luckily, migration in this case only required dropping the games table and creating it again with the new is_favorite column :). After applying the migration, the database version is bumped and written into the .version file, so any future migrations may make use of it too.

The database allows adding, removing, and fetching the complete list of favorite games, and checking whether a game is a favorite. That’s about it for favorites. Other types of collections, which will be introduced in the upcoming days, will mostly be stored in separate tables.

Collection Manager

As of now, the only available collection is the Favorites Collection, but as said, soon there will be “Recently Played” and custom collections that users can create and manage. So the CollectionManager has to be designed keeping in mind that it must handle all types of collections, though most of the collection-specific behavior can be neatly abstracted away.

CollectionManager handles the creation of all types of collections. It keeps a hash table of all collections, which as of now holds a FavoritesCollection object. Since actions related to a collection are best handled by CollectionManager, it also provides a favorite_action(), which accepts a list of games and adds them to or removes them from favorites accordingly. Apart from that, on creation, CollectionManager calls load() on every collection in the hash table, which loads the games belonging to each collection from the database.

Collection Model

It is a ListModel that contains all the available collections and can be bound to the CollectionsPage’s flowbox. CollectionsPage (similar to GamesPage) is a widget that displays the available collections with thumbnails and titles.

Collections: What You See

With all these put together we get a functional Favorites Collection. Here are some pictures:

Main Games Page

Games Page. With a star in the thumbnails of favorite games

Collections Page

Collections Page. Might look a bit lonely but will soon be accompanied by other collections : )

Favorite Collections Subpage

Favorites Page. This is more or less what any other collection would look like too.

As you can see, the notable visual changes here are:

  • A new Collections Page which can be navigated to, from the view switcher
  • A star on thumbnails of games marked as favorite, in Games Page and Platforms Page
  • A collection thumbnail which is generated from the covers of games that belong to that collection
  • A Collection (Sub)Page which when clicked opens a page with games that belong to that collection

By the way, you can also see the new game covers with the blurred background that I was experimenting with in the last blog post.

So this is where my progress is at currently. You can see the relevant MR here.

What’s Next?

In the upcoming weeks I’ll be implementing the rest of the collections. Currently my plan is to implement the “Recently Played” collection next, as it would be the simpler one to implement. There is also some smaller stuff that needs work, such as search and selection support for collections, but that is planned for the end, after introducing all types of collections.


The work is going great and all the challenges have been fun to solve so far. Many thanks to my mentor, Alexander for being very helpful as always. Thank you all for reading my blog posts.

See you next time :)

GTK 3.99

This week, we’re releasing GTK 3.99, which can only mean one thing: GTK4 is getting really close!

Back in February, when 3.98 was released, we outlined the features that we wanted to land before making a feature-complete 3.99 release. This was the list:

  • Event controllers for keyboard shortcuts
  • Movable popovers
  • Row-recycling list and grid views
  • Revamped accessibility infrastructure
  • Animation API
How have we done?

We’ve dropped the animation API from our 4.0 blocker list, since it requires more extensive internal restructuring and we can’t complete it in time. But all the other features have found their way into the various 3.98.x snapshots, with the accessibility infrastructure being the last hold-out, having landed very recently.

Some of the features have already been covered here, for example movable popovers and scalable lists. The others will hopefully receive a detailed review here in the future. Until then, you can look at Emmanuele’s GUADEC talk if you are curious about the new accessibility infrastructure.

What else is new?

One area I want to highlight is the amount of work that has gone into fleshing out the new scalable list infrastructure. Our filter and sort models do their work incrementally now, so the UI can remain responsive while large lists are getting filtered or sorted in the background.

A new macOS GDK backend has been merged. It still has some rough corners that we hope to smooth over between now and the 4.0 release.

And many small regressions have been fixed, from spinbutton sizing to treeview cell editing to autoscrolling to Inspector navigation to slightly misrendered shadows.

Can I port yet?

GTK 3.99 is the perfect time to take a first look at porting applications.

We are very thankful to the early adopters who have braved the 3.96 or 3.98 snapshots with trial ports and provided us with valuable feedback. With so many changes, it is inevitable that we’ve gotten things wrong in the API, and getting that feedback while we can still address things will really help us. Telling us about things we forgot to cover in the docs, missing examples or gaps in the migration guide is also very much appreciated.

We are aware that some porting efforts will be stopped short by indirect dependencies on GTK 3. For example, if you are using a webkit webview or GtkSourceView or vte, you might find it difficult to try out GTK 4.

Thankfully, porting efforts are already well underway for some of these libraries. Other libraries, such as libgweather will need some work to separate their core functionality from the GTK 3 dependency.

Can I help?

As mentioned in the previous section any feedback on new APIs, documentation and the porting guide is very welcome and helpful.

There are many other areas where we could use help. If you are familiar with OS X APIs, you could make a real difference in completing the macOS backend.

We have also started to integrate an ANGLE-based GL renderer, but our shaders need to be made to work with EGL before we can take advantage of it. Help with that would be greatly appreciated.

What’s next?

We are committed to releasing GTK 4 before the end of the year. Between now and then, we are doing more work on accessibility backends, improving the macOS backend, and writing documentation and examples.

v3dv status update 2020-07-31

Iago talked recently about the work done testing and supporting well-known applications, like the Vulkan ports of Quake 1, Quake 2, and Quake 3. Let’s go back here to the latest news on feature and bugfixing work.

Pipeline cache

Pipeline cache objects allow the result of pipeline construction to be reused. Usually (and specifically on our implementation) that means caching compiled shaders. Reuse can be achieved between pipelines creation during the same application run by passing the same pipeline cache object when creating multiple pipelines. Reuse across runs of an application is achieved by retrieving pipeline cache contents in one run of an application, saving the contents, and using them to preinitialize a pipeline cache on a subsequent run.

Note that a pipeline cache may not improve the performance of an application once it starts rendering. This is because application developers are encouraged to create all their pipelines in advance to avoid hiccups during rendering; in that situation the pipeline cache helps to reduce load times instead. Creating pipelines up front is not always possible, though, and in that case the pipeline cache at least reduces the hiccup, as a cache hit is far faster than a shader recompilation.

One specific detail about our implementation is that we internally keep a default pipeline cache, used when the user doesn’t provide one at pipeline creation, and also to cache the custom shaders we use for internal operations. This allowed us to simplify our code, discarding some custom caches that were already implemented.

Uniform/storage texel buffer

Uniform texel buffers define a tightly packed one-dimensional linear array of texels, with texels going through format conversion when read in a shader, in the same way as they do for an image. They are mostly equivalent to OpenGL buffer textures, so you can see them as textures backed by a VkBuffer (through a VkBufferView). With uniform texel buffers you can only do formatted loads.

Storage texel buffers are the equivalent concept, but applied to storage images instead of sampled images. Unlike uniform texel buffers, they can also be written to, in the same way as storage images.


Multisampling

Multisampling is a technique that reduces aliasing artifacts in images by sampling pixel coverage at multiple subpixel locations and then averaging the subpixel samples to produce a final color value for each pixel. We have already started working on this feature and included some patches in the development branch, but it is still a work in progress. Having said so, it is enough to get Sascha Willems’s basic multisampling demo working:

Sascha Willems multisampling demo run on rpi4


Again, in addition to working on specific features, we also spent some time fixing specific driver bugs, using failing Vulkan CTS tests as a reference. So let’s take a look at some screenshots of Sascha Willems’s demos that are now working:

Sascha Willems deferred demo run on rpi4

Sascha Willems texture array demo run on rpi4

Sascha Willems Compute N-Body demo run on rpi4


We plan to work on supporting the following features next:

  • Robust access
  • Multisample (finish it)

Previous updates

Just in case you missed any of the updates of the vulkan driver so far:

Vulkan raspberry pi first triangle
Vulkan update now with added source code
v3dv status update 2020-07-01
V3DV Vulkan driver update: VkQuake1-3 now working

July 29, 2020

Record Live Audio as Ogg Vorbis in GNOME Gingerblue 0.2.0

Today I released GNOME Gingerblue version 0.2.0 with the basic new features:

  • Record Live Vorbis Audio stream in <Name> - <Song> - <ISO 8601 timestamp>.ogg
  • Timestamp ISO 8601 Audio File in G_USER_DIRECTORY_MUSIC ($HOME/Music/)
  • Store ISO 8601 Timestamp Song Files in G_USER_DIRECTORY_MUSIC ($HOME/Music/)
  • Meta Info Setup Wizard
  • XML Parsing

I began work on GNOME Gingerblue on July 4th, 2018, two years ago, and I am going to spend the next four years completing it for GNOME 4.

GNOME Gingerblue will be a Free Software program for musicians who would compose, record and share original music to the Internet from the GNOME Desktop.

The project isn’t yet ready for distribution with GNOME 3, and the GUI and features such as meta tagging and Internet uploads must still be implemented.

The GNOME release team complained about the early release cycle in July and called the project empty, but I estimate it will take at least four years to complete 4.0.0, in reasonable time for GNOME 4 to be released between 2020 and 2026.

The Internet community can’t have Free Music without Free Recording Software for GNOME, but GNOME 4 isn’t built in 1 day.

I am trying to get gtk_record_button_new() into GTK+ 4.0.

I hope to work more on the first major release of GNOME Gingerblue during Christmas 2020 and perhaps get meta tags working as a new feature in 1.0.0.

Meanwhile you can visit the GNOME Gingerblue project domain with the GNOME wiki page, test the initial GNOME Gingerblue 0.2.0 release that writes and records Song files from the microphone in $HOME/Music/ with Wizard GUI and XML parsing from August 2018, or spend money on physical goods such as the Norsk Kombucha GingerBlue soda or the Ngs Ginger Blue 15.6″ laptop bag.

About that "Google always builds everything from source every time" thing

One of the big battles of dependencies is whether you should use somehow prebuilt libraries (e.g. the Debian-style system packaging) or vendor all the sources of your deps and build everything yourself. Whenever this debate gets going, someone is going to do a "well, actually" and say some variation of this:
Google vendors all dependencies and builds everything from source! Therefore that is clearly the right thing to do and we should do that also.
The obvious counterargument to this is the tried-and-true "if all your friends jumped off a bridge, would you do it too?" response known by every parent in the world. The second, much lesser known counterargument is that this statement is not actually true.

Google does not actually rebuild all code in their projects from source. Don't believe me? Here's exhibit A:

The original presentation video can be found here. Note that the slide and the video speak of multiple prebuilt dependencies, not just one [0]. Thus we find that even Google, with all of its power, money, superior engineering talent and insistence on rebuilding everything from source, does not rebuild everything from source. Instead they have to occasionally use prebuilt third party libraries just like the rest of the world. Thus a more accurate form of the above statement would be this:
Google vendors most dependencies and builds everything from source when possible and when that is not the case they use prebuilt libraries but try to keep quiet about it in public because it would undermine their publicly made assertion that everyone should always rebuild everything from source.
The solution to this is obvious: you just rebuild all the things you can from source and get the rest as prebuilt libraries. What's the big deal here? By itself there would not be one, but this ideology has consequences. There are many tools and even programming languages designed nowadays that only handle the build-from-source case, because obviously everyone has the source code for all their deps. Unfortunately this is just not true. No matter how bad prebuilt no-access-to-source libraries are [1], they are also a fact of life and must be natively supported. Not doing that is a major adoption blocker. This is one of the unfortunate side effects of dealing with all the nastiness of the real world instead of a neat idealized sandbox.

[0] This presentation is a few years old. It is unknown whether there are still prebuilt third party libraries in use.

[1] Usually they are pretty bad.

July 28, 2020

Pitivi: Object Tracking

I’ve been selected as a student developer at Pitivi for Google Summer of Code 2020. My project is to create an object tracking and blurring feature.

In this post, I introduce a feature in development which allows the user to track an object inside a video clip.

Object tracking in action

Before diving into the details, let’s see it in action: YouTube

In the video, the user selects the clip to be used and clicks on the “Track object” button. In the next screen (tracker perspective), the user chooses a frame and selects the object to be tracked using a drag-and-drop motion. The user then sets the tracking algorithm and initiates the tracking. Live tracking is displayed. The tracked object appears on the left pane. The user has the option to delete the tracked object.


cvtracker is a plugin from the gst-plugins-bad project (working on it is also part of my GSoC project). It allows us to track an object by running the clip through a pipeline. The tracking data is available through the bus and buffer metadata.

The tracking in pitivi is implemented using a pipeline, which runs the clip and feeds it to the cvtracker. We extract the region-of-interest (ROI) data from the buffer.
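As a rough gst-launch style description (the elements before cvtracker are my assumption, not necessarily exactly what Pitivi builds), the pipeline looks like this:

```
uridecodebin uri=file:///path/to/clip ! videoconvert ! cvtracker ! fakesink name=sink signal-handoffs=TRUE
```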

An Object Manager class stores all the tracked objects in a clip. Technically, the object data is saved to the asset metadata. So every clip that gets generated using the asset has access to all the tracked objects.

Tracking data

To receive the tracking data from cvtracker, we use a fakesink with the properties name=sink signal-handoffs=TRUE.

Then we connect the handoff signal to the callback function:

def __tracker_handoff_cb(self, unused_element, buffer, unused_pad, roi_data):
    video_roi = GstVideo.buffer_get_video_region_of_interest_meta_id(buffer, 0)
    if video_roi:
        roi_data[buffer.pts] = (video_roi.x, video_roi.y, video_roi.w, video_roi.h)
    else:
        self.log("Lost tracker at: %s", buffer.pts / Gst.SECOND)

Further developments

There’s more coming! Sometimes the tracking can be a little inaccurate, so we’re working on a feature to adjust the tracking of an object. Basically the user can manually adjust the tracking data using a simple and user friendly interface, integrated right into the tracker perspective. More on that in another post.

Revisiting Basic and Permissions Page

Porting of the Basic and Permissions pages has been covered in previous posts, but as the heading suggests, there was something left. The candidates that remained to be ported were the volume usage widget featuring the pie chart, and the change-permissions dialog which can be used to change permissions of enclosed files in a folder.

The Volume Widget

The volume widget itself is a GtkGrid which packs GtkDrawingAreas and GtkLabels. The dimensions of the grid are 5x4, where the first 5x2 worth of room is occupied by the GtkDrawingArea and the rest is occupied by a legend which conveys the size and free space of the selected volume, along with their representations in the pie chart. The colored boxes used to indicate representational colors are also GtkDrawingAreas, styled with a CSS style class to obtain the required fill color. That leaves us with the pie chart! Handling that is not as easy as pie, but not too hard either. The task of drawing the pie chart was simply left to the preexisting signals and callbacks:

    g_signal_connect (window->pie_chart, "draw",
                      G_CALLBACK (paint_pie_chart), window);
    g_signal_connect (window->used_color, "draw",
                      G_CALLBACK (paint_legend), window);
    g_signal_connect (window->free_color, "draw",
                      G_CALLBACK (paint_legend), window);

That wraps up the volume widget.

Change Permissions Dialog

This story of porting is old; if you have read the previous posts, perhaps you can take a guess and ask me to fire up Glade, pick a GtkDialog from the toplevel containers/windows, throw in a grid, arrange labels and buttons, and then all that's left is to obtain GObject references through the GtkBuilder API and wire them together with code. But this is when you realize Glade doesn't offer all variations of a composite widget like GtkDialog. Let's see why:

The reason is that Glade doesn't support the use-header-bar flag of GtkDialog, as a result of which we need to compose the XML for the dialog's UI by hand to enable the use-header-bar flag, so that the buttons can be located in the header bar.

Your Dialog Your XML

This is the blog post that helped us get our handwritten XML to produce the outcome we desired: How do I Dialogs

<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <requires lib="gtk+" version="3.22"/>
  <object class="GtkDialog" id="change_permissions_dialog">
    <property name="title" translatable="yes">Change Permissions for Enclosed Files</property>
    <property name="modal">True</property>
    <property name="destroy_with_parent">True</property>
    <property name="type_hint">dialog</property>
    <property name="use-header-bar">1</property>
    <child type="action">
      <object class="GtkButton" id="cancel">
        <property name="visible">True</property>
        <property name="label">Cancel</property>
      </object>
    </child>
    <child type="action">
      <object class="GtkButton" id="change">
        <property name="visible">True</property>
        <property name="label">Change</property>
      </object>
    </child>
    <!-- GtkGrid goes here -->
    <action-widgets>
      <action-widget response="cancel">cancel</action-widget>
      <action-widget response="ok">change</action-widget>
    </action-widgets>
  </object>
</interface>

So this is what we ended up with! Now this can be used like any other .ui XML file, and the old story of porting UI continues as usual!

July 27, 2020

Filesystem deduplication is a sidechannel

First off - nothing I'm going to talk about in this post is novel or overly surprising, I just haven't found a clear writeup of it before. I'm not criticising any design decisions or claiming this is an important issue, just raising something that people might otherwise be unaware of.

With that out of the way: automatic deduplication of data is a feature of modern filesystems like zfs and btrfs. It takes two forms: inline, where the filesystem detects that data being written to disk is identical to data that already exists on disk and simply references the existing copy rather than writing a new one, and offline, where tooling retroactively identifies duplicated data and removes the duplicate copies (zfs supports inline deduplication, btrfs currently only supports offline). In a world where disks end up with multiple copies of cloud or container images, deduplication can free up significant amounts of disk space.

What's the security implication? The problem is that deduplication doesn't recognise ownership - if two users have copies of the same file, only one copy of the file will be stored[1]. So, if user a stores a file, the amount of free space will decrease. If user b stores another copy of the same file, the amount of free space will remain the same. If user b is able to check how much free space is available, user b can determine whether the file already exists.
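A toy model makes the sidechannel concrete. This is illustrative Python only, not real filesystem code:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store with inline deduplication."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}  # content hash -> data

    def write(self, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:  # duplicates cost no space
            self.blocks[digest] = data

    def free_space(self):
        return self.capacity - sum(len(d) for d in self.blocks.values())

store = DedupStore(capacity=1000)
store.write(b"user a's secret file")   # user a stores a file
before = store.free_space()
store.write(b"user a's secret file")   # user b stores the same bytes
print(store.free_space() == before)    # True: user b learns the file existed
store.write(b"something new")          # novel data does consume space
print(store.free_space() < before)     # True
```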

This doesn't seem like a huge deal in most cases, but it is a violation of expected behaviour (if user b doesn't have permission to read user a's files, user b shouldn't be able to determine whether user a has a specific file). But we can come up with some convoluted cases where it becomes more relevant, such as law enforcement gaining unprivileged access to a system and then being able to demonstrate that a specific file already exists on that system. Perhaps more interestingly, it's been demonstrated that free space isn't the only sidechannel exposed by deduplication - deduplication has an impact on access timing, and can be used to infer the existence of data across virtual machine boundaries.

As I said, this is almost certainly not something that matters in most real world scenarios. But with so much discussion of CPU sidechannels over the past couple of years, it's interesting to think about what other features also end up leaking information in ways that may not be obvious.

(Edit to add: deduplication isn't enabled on zfs by default and is explicitly triggered on btrfs, so unless it's something you've enabled then this isn't something that affects you)

[1] Deduplication is usually done at the block level rather than the file level, but given zfs's support for variable sized blocks, identical files should be deduplicated even if they're smaller than the maximum record size

comment count unavailable comments

July 26, 2020

GNOME Internet Radio Locator 3.0.2 for Fedora Core 32

GNOME Internet Radio Locator 3.0.1 (Washington)

GNOME Internet Radio Locator 3.0.2 features updated language translations and a new, improved map marker palette, and now also includes radio from Washington, United States of America (WAMU/NPR); London, United Kingdom (BBC World Service); Berlin, Germany (Radio Eins); Norway (NRK); and Paris, France (France Inter/Info/Culture), as well as 118 other radio stations from around the world, with audio streaming implemented through GStreamer. The project lives on, and Fedora 32 RPM packages for version 3.0.2 of GNOME Internet Radio Locator are now also available:




To install GNOME Internet Radio Locator 3.0.2 on Fedora Core 32 in Terminal:

sudo dnf install

GNOME Extensions BoF – 18:00 UTC July 26, 2020

We will be having a conversation around extensions and the future of how we will be handling them based on policy, community, and other important factors.

If you are an extensions writer, then I would urge you to join our BoF to help understand what we will be doing with extensions going forward and also provide feedback. We do not want to do this in a vacuum.

Looking forward to hearing from the community. Some of the things we will be talking about are:

* Centralizing the location of extensions to the GNOME GitLab (not necessarily developing your extensions there, but if you want an extension on e.g.o then it will need to pass QA tests)

* Automatic QA of extensions prior to release of gnome-shell.
* Policies
* Documentation
* Community building

See you there!

July 25, 2020

Pinebook Pro longer term usage report

I bought a Pinebook Pro in the first batch, and have been using it on and off for several months now. Some people I know wanted to know if it is usable as a daily main laptop.

Sadly, it is not. Not for me at least. It is fairly close though.

Random bunch of things that are annoying or broken

I originally wanted to use stock Debian but at some point the Panfrost driver broke and the laptop could not start X. Eventually I gave up and switched to the default Manjaro. Its installer does not support an encrypted root file system. A laptop without an encrypted disk is not really usable as a laptop as you can't take it out of your house.

The biggest gripe is that everything feels sluggish. Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs. As an extreme example switching between channels in Slack takes five to ten seconds. It is unbearably slow. The wifi is not very good, it can't connect reliably to an access point in the next room (distance of about 5 meters). The wifi behaviour seems to be distro dependent so maybe there are some knobs to twiddle.

Video playback on browsers is not really nice. Youtube works in the default size, but fullscreen causes a massive frame rate drop. Fullscreen video playback in e.g. VLC is smooth.

Basic shell operations are sluggish too. I have a ZSH prompt that shows the Git status of the current directory. Entering in a directory that has a Git repo freezes the terminal for several seconds. Basically every time you need to get something from disk that is not already in cache leads to a noticeable delay.

The screen size and resolution scream for fractional scaling but Manjaro does not seem to provide it. A scale of 1 is a bit too small and 2 is way too big. The screen is matte, which is totally awesome, but unfortunately the colors are a bit muted and for some reason it seems a bit fuzzy. This may be because I have not used a sub-retina laptop display in years.

The trackpad's motion detector is rubbish at slow speeds. There is a firmware update that makes it better but it's still not great. According to the forums someone has already reverse engineered the trackpad and created an unofficial firmware that is better. I have not tried it. Manjaro does not provide a way to disable tap-to-click (a.k.a. the stupidest UI misfeature ever invented including the emojibar) which is maddening. This is not a hardware issue, though, as e.g. Debian's Gnome does provide this functionality. The keyboard is okayish, but sometimes detects keypresses twice, which is also annoying.

For light development work the setup is almost usable. I wrote a simple 3D model viewer app using Qt Creator and it was surprisingly smooth all round, the 3D drivers worked reliably and so on. Unfortunately invoking the compiler was again sluggish (this was C++, though, so some is expected). Even simple files that compile instantly on x86_64 took seconds to build.

Can the issues be fixed?

It's hard to say. The Panfrost driver is under heavy development, so it will probably keep getting better. That should fix at least the video playback issues. Many of the remaining issues seem to be on the CPU and disk side, though. It is unknown whether there are enough optimization gains to be had to make the experience fully smooth and, more importantly, whether there are people doing that work. It seems feasible that the next generation of hardware will be fast enough for daily usage.

Bonus content: compiling Clang

Just to see what would happen, I tried whether it would be possible to compile Clang from source (it being the heaviest fairly-easy-to-build program that I know of). It turns out that you can, here are the steps for those who want to try it themselves:
  • Checkout Clang sources
  • Create an 8 GB swap file and enable it
  • Configure Clang,  add -fuse-ld=gold to linker flags (according to Clang docs there should be a builtin option for this but in reality there isn't) and set max parallel link jobs to 1
  • Start compilation with ninja -j 4 (any more and the compilation jobs cause a swap storm)
  • If one of the linker jobs causes a swap storm, kill the processes and build the problematic library by hand with ninja bin/
  • Start parallel compilation again and if it freezes, repeat as above
After about 7-8 hours you should be done.
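The steps above roughly translate to the following commands. The paths, the swap file location, and the CMake cache variable names are my assumptions; adjust them to your checkout:

```shell
# Rough sketch of the build steps described above.
git clone https://github.com/llvm/llvm-project.git

# Create and enable an 8 GB swap file.
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Configure: use gold via linker flags, limit to one parallel link job.
cmake -G Ninja -S llvm-project/llvm -B build \
    -DLLVM_ENABLE_PROJECTS=clang \
    -DCMAKE_EXE_LINKER_FLAGS=-fuse-ld=gold \
    -DCMAKE_SHARED_LINKER_FLAGS=-fuse-ld=gold \
    -DLLVM_PARALLEL_LINK_JOBS=1

# Compile with at most four jobs to avoid a swap storm.
ninja -C build -j 4
```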

GUADEC 2020: Newcomers Workshop

I trust you are all enjoying GUADEC 2020! It’s been going well thanks to the organising team’s efforts and everyone’s love 😀

We are hosting the Newcomers workshop BoF on Monday 27 July 2020 (15:00 → 17:00 UTC). This is a great place to be if you are someone who’s looking to explore how to contribute to GNOME. We will go through the projects practically and share the information that gives you a head start on your journey at GNOME!

For your and our convenience we have prepared a wiki post which helps you go through the initial setup for participating in this workshop. Kindly go through it and set up your systems as per the instructions.

We are excited to see what you do next on your GNOME adventure!

Stay Tuned and Enjoy GUADEC 😇

GUADEC 2020: Intern lightning talks

Hi, I hope you are all enjoying GUADEC! I am just passing by to let you know that on Monday 27th, 18:00 UTC, we will have our traditional Intern lightning talks where you will get to see our Outreachy and Google Summer of Code  interns present their projects.

This year we will also have a few past interns sharing their stories about their experiences as interns and how GNOME has helped their professional careers. (Track 1)

Stay tuned!

July 24, 2020

Tying knots, writing functions implemented

Welcome everyone! Two weeks ago I wrote a post about finishing the authentication functionality, so now users can add their EteSync account and retrieve their data.

Today I’d like to say you can also add/modify/delete data in journals, as I implemented these writing functionalities. There were also other issue fixes and coding style adjustments. I also removed code duplicates, so the code is now cleaner.

As you can see the work:

The adding functionality:

The deleting functionality:

The modifying functionality:

So we have made great progress until now, which is a good thing, as it gives more time for testing and fixing issues, but there is still work to be done.

What I am going to work on next is extending the writing functionality to be able to create/delete journals (address books, calendars).

July 23, 2020

Using Red Hat Satellite with the LVFS

A few weeks ago I alluded that you could run the LVFS in an offline mode, where updates can be synced to a remote location and installed on desktops and servers in a corporate setting without internet access. A few big companies asked for more details, and so we created some official documentation which should help. It works, but you need to script things manually and set up your system in a custom way.

For those companies using Red Hat Satellite there’s now an even easier way. Marc Richter has created a public Red Hat knowledge base article on how to configure Satellite 6 so that firmware updates can be deployed on client systems without access to the public internet. This is something that’s been requested by Red Hat customers for some time, and I really appreciate all the research and debugging Marc had to do so that this kind of content could be written. Feel free to share the link to this content as it’s available without a Red Hat subscription. Comments welcome!

2020-07-23 Thursday

  • Mail chew, sales & marketing call, estate agent photography over lunch; back to work.

July 22, 2020

2020-07-22 Wednesday

  • Mail, Collabora Productivity all-hands call at length, what a team! more mail & calls.
  • Got Quest USB3 video connection working: apparently the software loosened up: nice, great response, and Windlands 2. Frantic house cleaning.

Fractal: Update progress

It’s been a busy month, but productive nonetheless!

Since the last update about how things were going, most of the error handling stuff has been reworked, as announced. There are a few bits remaining, but they are in very specific places that require prior work in other areas. The approach chosen was to have a common trait that handles the error; each backend function now has a (mostly) specific error type that implements said trait. Managing errors for new requests is as easy as creating a new type for the error that covers all possible cases, composing over foreign error types if required, and implementing the trait HandleError to manage how the error should be shown in the GUI and/or logged, or just marking the trait if the default implementation is good enough. The merge requests that led to this work are:

This allowed me to further cut down the error module, which had become too broad to give useful diagnostics.
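As a minimal sketch of that error-handling pattern (the type and method names here are illustrative; Fractal’s real trait and error types differ):

```rust
use std::fmt::Debug;

// A common trait with a default way of handling an error; each backend
// request defines its own error type and opts in.
trait HandleError: Debug {
    fn user_message(&self) -> String {
        format!("error: {:?}", self) // default is often good enough
    }
    fn handle(&self) {
        eprintln!("{}", self.user_message()); // log and/or show in the GUI
    }
}

// A hypothetical request-specific error type, enumerating the cases
// that particular request can fail with.
#[derive(Debug)]
enum JoinRoomError {
    MalformedUrl(String),
    Denied,
}

impl HandleError for JoinRoomError {
    fn user_message(&self) -> String {
        match self {
            JoinRoomError::MalformedUrl(url) => format!("Malformed URL: {}", url),
            JoinRoomError::Denied => "Unable to join the room".to_string(),
        }
    }
}

fn main() {
    JoinRoomError::Denied.handle();
}
```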

There have been other changes as well: replacing the custom clone! macro with the one in glib-rs, with its ability to avoid manual management of weak references (in the process I spotted a bug in the glib macro related to modules and namespaces, which I fixed); fixing an old bug that messed with the account settings; and making the types in the code more strict by separating URLs from local paths within the same field of some structs and arguments in functions (intermingling different kinds of data in the same place and type is the perfect recipe for hard-to-uncover bugs).

There has been some experimentation along the way, like trying to remove AppOp as a global static variable and weak reference, but there were many blockers to achieving that. Moreover, as I progressed further, I discovered that the codebase needs too much restructuring to be able to add multi-account support within the summer. I talked with my mentor and we settled on replacing all our custom (de)serialization of the API with the ruma or matrix-rust-sdk library, paving the way for encrypted message support and implying the removal of fractal-matrix-api. The original plan remains for further work after GSoC, although I’ve opened a WIP merge request for the long term where I’ll be sending changes to rework the UI management.

Some (late) video and audio advice for GUADEC online

GUADEC 2020 has moved online because of the pandemic, and that means that many of us will be streaming our voice and faces.

Seeing as I have a fancy B.A. in Communication Studies & Film, I thought I might share some guerrilla film-making tips for our new online reality.

Here are some tips on how to sound and look good online.

Bright pink sunset from the window seat of a plane in Lima
Pictured: Imaginary GUADEC travel (Lima, December 2019)

The executive summary

  • From most important to least: Clear voice, good framing, good big soft lights
  • Sit facing a big window, or on a light colored room with good lighting that hits your face as much as possible
  • Frame yourself in the center, and try to raise your camera to your eye level with books or boxes
  • Use your earbuds’ microphone, but keep the cable away from your body by resting it on the desk, or using some masking tape to secure it to your clothes to avoid rubbing noises
  • Tune your microphone volume to the lowest possible sensitivity that gives you clear voice and low noise (hums and static). Record yourself, then test playback at 20-25% of your maximum playback volume on earbuds.

The best lighting

The ideal situation is to sit facing a big window. With light coming into the room, on a cloudy day. You can put the window slightly in front and to your left/right if you can’t sit directly in front.

If you have no usable window, you can use lamps. If you have tall lamps that you can point to a white wall, or white ceiling, that’s perfect. That will give you a big soft source of light.

You have lamps but no white wall or white ceiling. Try to “make” a wall with white reflective cardboard, a lot of white printer paper, polystyrene (like a pro), or aluminum wrapping if you are desperate.

You only have your ceiling lights. Sit below and behind your ceiling light. You should be able to lightly tilt your head up and see your bulb or fluorescent. White rooms with many ceiling lights help bounce light to your face.

Remember you can borrow bulbs from other rooms or lamps if you find yourself with a light that is in the perfect spot, but too weak, or too blue (“cool light”) or too orange (“warm”).

The best framing

Put yourself in the center of the frame. Sit front and center of your camera, and try not to slouch to the sides, or worse, point the camera to your ceiling or desk.

Line your camera lens with your eyes. You can use books or boxes to raise your camera. You probably know this pitfall as “nostrils cam” and “receding hair line cam”.

Cleanup your background, and some padding around it. Make sure that whatever ends up in frame is flattering, or at least not distracting, plus some “padding” around, in case you accidentally rotate your camera.

Don’t put bright lights on your background. Bright light sources like a TV, monitor, or window will confuse your camera and might make you look washed out.

Put some distance between you and the background. Otherwise it will look like you are in a cell, a dungeon, or an unmarked CIA location.

The best audio

Microphones are “simple” but full of small technical details that are out of scope for this guide. The good news is that you likely already have a good enough mic that can be made to sound slightly better or in some cases much better.

For a more in depth look into how to properly record your voice, covering mics and voice use itself, see this “How to be an online voice actor” playlist by SBN3.

I only have my laptop mic, or my phone’s cheap earbuds+mic. This is fine. You can still get good audio from this. See below.

How to set up your cheap laptop/earbuds (or any) mic

  • Set yourself up following the previous lighting and framing advice. Get comfortable
  • Pick the mic that makes you the most comfortable
  • Open your (system) sound settings, and a sound recording app
  • Set your system sound settings microphone volume to 50%. Record yourself as if you were talking across the table, plus some seconds of silence
  • Play back your recording at around 20-25% playback volume. Can you hear yourself clearly without having to raise the playback volume much more?
  • If you can’t hear yourself, raise your sound settings microphone volume some more. Rinse and repeat until you hear yourself properly

You want the lowest possible microphone volume (sensitivity) where you have clear voice, without background “hums”, “static” or “refrigerator noise”, and without having to raise playback volume above 40% or so on speakers, or 30% on headphones.

Now repeat the test with your next best option. Keep going until you are satisfied. Remember that price or looks can be deceiving. Your earbuds might as well be your best microphone.

Earbuds troubleshooting

My wired earbuds sound better, but they make “bottle noises” when rubbing with my clothes. Keep the cable as far from your clothes as possible. Any part of the cable can transmit these “bottle noises”. Let the excess cable rest on your desk, not your clothes or laptop.

I still can’t avoid the noises. Get a clothespin, or some good quality masking or gaffers tape, and secure the mic to your desk, or your shirt. You can do fine with two or three “fix points” on your clothes. Don’t overdo it or you’ll end up increasing rubs.

Also avoid “loose” clothes, or noisy accessories like necklaces or long complex earrings. And needless to say, don’t play with the cable during the meeting or call.

(Also don’t chew your mic or cable while talking. Yes, people do this)

I have a fancy mic

If you have a USB mic, like a “podcaster” mic, make sure you compare it against your earbuds or laptop mic. Especially your earbuds. While there are many good external mics, earbuds usually trump them because they are almost automatically set up at the perfect position and distance to make a good recording of your voice.

That’s not to dismiss your fancy mic, but make sure you test it after doing some tuning as suggested above.

A note on polar patterns

The ideal position of your fancy mic will depend on its particular polar pattern (the shape of the “net” it throws to catch your voice). Most “desk” mics sold for calls or podcasting are usually designed to work the best on close range.

Here’s a good visual explanation and examples of polar patterns for Shure microphones, but these are standard across brands.

Assuming you have a desk/podcast marketed external mic, that actually sounds better than your other options, here’s some advice to make the most out of it.

Put your fancy mic “one fist away” from your mouth. Most of these mics are designed for voice, so you want them as close as possible, with the lowest possible sensitivity in your sound settings.

Use books or boxes if you need to raise the mic to be at a comfortable level that also gives you good audio. Your mic should ideally be “one fist away” from your mouth, and slightly off center (slight diagonal from your mouth) to avoid most breathing and wording “pops”.

Remember this slightly depends on the polar pattern of your mic. For example, Blue Yeti mics “look” like you should “point them” to your mouth, like a karaoke mic, but actually they have to be “standing” (fully vertical) in front of your mouth (like an “old timey radio” mic).

Disable automatic gain in your fancy mic. Automatic volume/gain is a blunt tool and you don’t need it if you properly adjust your base volume as explained above.

Make sure your mic has fresh batteries, if it uses batteries. Some mics run on batteries and you’ll have weird bugs or noises if you have low enough batteries.

Check that your microphone does not “ground” with other devices. Depending on your setup, sometimes electricity can “loop” through your microphone causing the classic “ground” noise (that permanent electrical buzz). Try unplugging other devices, using a different USB port, or even unplugging your AC adapter. Also check that your sensitivity is not excessive.

If your mic has a “hardware” boost, it’s worth trying it out. If you have raised the sound settings levels and you are getting noise or poor voice quality, turn on the mic “boost”, lower the sound settings mic volume and try again. Some mics have very good “boosters” that produce much better recordings. But this is very hit and miss. Try it without, and then with. Compare the best two settings.

There’s an almost infinite list of tech nitpicks and pet peeves for the above, but this should put you in the right direction at least.

Let me know if I omitted a good hack, or a better explanation for any of the above. I’ll be in #guadec if you need help or have comments.

See you at GUADEC (online)!

July 21, 2020

Automatic retry on error and fallback stream handling for GStreamer sources

A very common problem in GStreamer, especially when working with live network streams, is that the source might just fail at some point. Your own network might have problems, the source of the stream might have problems, …

Without any special handling of such situations, the default behaviour in GStreamer is to simply report an error and let the application worry about handling it. The application might for example want to restart the stream, or it might simply want to show an error to the user, or it might want to show a fallback stream instead, telling the user that the stream is currently not available and then seamlessly switch back to the stream once it comes back.

Implementing all of the aforementioned is quite some effort, especially to do it in a robust way. To make it easier for applications I implemented a new plugin called fallbackswitch that contains two elements to automate this.

It is part of the GStreamer Rust plugins and also included in the recent 0.6.0 release, which can also be found on the Rust package (“crate”) repository, crates.io.


For using the plugin you most likely first need to compile it yourself, unless you’re lucky enough that e.g. your Linux distribution includes it already.

Compiling it requires a Rust toolchain and GStreamer 1.14 or newer. The former you can get via rustup, for example, if you don’t have it yet; the latter either from your Linux distribution or by using the macOS, Windows, etc. binaries provided by the GStreamer project. Once that is done, compiling is mostly a matter of running cargo build in the utils/fallbackswitch directory and copying the resulting shared library (.so, .dll or .dylib) into one of the GStreamer plugin directories, for example ~/.local/share/gstreamer-1.0/plugins.


The first of the two elements is fallbackswitch. It acts as a filter that can be placed into any kind of live stream. It consumes one main stream (which must be live) and outputs this stream as-is if everything works well. Based on the timeout property it detects if this main stream didn’t have any activity for the configured amount of time, or everything arrived too late for that long, and then seamlessly switches to a fallback stream. The fallback stream is the second input of the element and does not have to be live (but it can be).
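The timeout-based switching described above can be sketched as a toy state machine in plain Python (an illustrative model of the idea only, not the element’s actual Rust implementation; the real element exposes this as a `timeout` property):

```python
class FallbackSwitchModel:
    """Toy model of fallbackswitch's timeout behaviour (illustrative only)."""

    def __init__(self, timeout):
        self.timeout = timeout          # seconds without main-stream activity
        self.last_main_activity = None  # timestamp of the last main buffer
        self.active = "main"

    def on_main_buffer(self, now):
        # Any activity on the main stream switches (back) to it.
        self.last_main_activity = now
        self.active = "main"

    def select(self, now):
        # No main buffer yet, or none within the timeout window: use fallback.
        if self.last_main_activity is None or now - self.last_main_activity > self.timeout:
            self.active = "fallback"
        return self.active

switch = FallbackSwitchModel(timeout=5.0)
switch.on_main_buffer(now=0.0)
print(switch.select(now=3.0))   # → main (stream still healthy)
print(switch.select(now=9.0))   # → fallback (more than 5 s of silence)
```

The real element makes the same decision on the GStreamer clock, which is why the main stream has to be live.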

Switching between main stream and fallback stream doesn’t only work for raw audio and video streams but also works for compressed formats. The element will take constraints like keyframes into account when switching, and if necessary/possible also request new keyframes from the sources.

For example, to play the Sintel trailer over the network and display a test pattern if it doesn’t produce any data, the following pipeline can be constructed:

gst-launch-1.0 souphttpsrc location= ! \
    decodebin ! identity sync=true ! fallbackswitch name=s ! videoconvert ! autovideosink \
    videotestsrc ! s.fallback_sink

Note the identity sync=true in the main stream here as we have to convert it to an actual live stream.

Now when running the above command and disconnecting from the network, the video should freeze at some point and after 5 seconds a test pattern should be displayed.

However, when using fallbackswitch the application will still have to take care of handling actual errors from the main source and possibly restarting it. Waiting a bit longer after disconnecting the network with the above command will report an error, which then stops the pipeline.

To make that part easier there is the second element.


The second element is fallbacksrc and, as the name suggests, it is an actual source element. When using it, the main source can be configured via a URI or by providing a custom source element. Internally it then takes care of buffering the source, converting non-live streams into live streams and restarting the source transparently on errors. The various timeouts for this can be configured via properties.
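The restart-on-error behaviour can be illustrated with a small retry wrapper (a sketch of the idea in plain Python, not fallbacksrc’s actual code; the names and the retry limit are made up for the example):

```python
def run_with_restart(create_source, max_retries=3):
    """Toy sketch of fallbacksrc-style restarting: recreate and rerun the
    source on error, up to max_retries times, then give up (illustrative)."""
    attempts = 0
    while True:
        source = create_source()
        try:
            return source()          # "run" the source until it ends
        except ConnectionError:
            attempts += 1
            if attempts > max_retries:
                raise                # the real element would post an error message

# A flaky "source" that fails twice before succeeding:
failures = [ConnectionError, ConnectionError]
def flaky():
    if failures:
        raise failures.pop(0)()
    return "stream finished"

print(run_with_restart(lambda: flaky))  # → stream finished
```

The real element additionally applies the configured timeouts before each restart attempt instead of retrying immediately.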

Unlike fallbackswitch, it also handles audio and video at the same time and demuxes/decodes the streams.

Currently the only fallback streams that can be configured are still images for video. For audio the element will always output silence for now, and if no fallback image is configured for video it outputs black instead. In the future I would like to add support for arbitrary fallback streams, which hopefully shouldn’t be too hard. The basic infrastructure for it is already there.

To revisit our previous example, having a JPEG image displayed whenever the source does not produce any new data, the following can be done:

gst-launch-1.0 fallbacksrc uri= \
    fallback-uri=file:///path/to/some/jpg ! videoconvert ! autovideosink

Now when disconnecting the network, after a while (longer than before, because fallbacksrc does additional buffering for non-live network streams) the fallback image should be shown. Unlike before, waiting longer will not lead to an error, and reconnecting the network causes the video to reappear. However, as this is not an actual live stream, right now playback would restart from the beginning. Seeking back to the previous position would be another potential feature that could be added in the future.

Overall these two elements should make it easier for applications to handle errors in live network sources. While the two elements are still relatively minimal feature-wise, they should already be usable in various real scenarios and are already used in production.

As usual, if you run into any problems or are missing some features, please create an issue in the GStreamer bug tracker.

July 20, 2020

I finished my master’s degree \o/

In the last couple of months, I was busy writing my thesis to conclude my master’s degree in computer science at the University of Bologna; therefore, I wasn’t very active in the GNOME community. I hope that now I have much more time to dedicate to writing software ;).
The title of the thesis is “Blockchain-based end-to-end encryption for Matrix instant messaging”. I researched an interesting experiment that uses an Ethereum-based system to fully end-to-end encrypt a Matrix conversation.


Privacy and security in online communication is an important topic today, especially in the context of instant messaging. A lot of progress has been made in recent years to ensure that conversations are secure against attacks by third parties, but privacy from the service provider itself remains difficult. There are a number of solutions offering end-to-end encryption, but most of them rely on a centralized server, proprietary clients, or both.
In order to have fully secure instant messaging conversations, a decentralized and end-to-end encrypted communication protocol is needed. This means there is no single point of control, and each message is encrypted directly on the user’s device such that only the recipient can decrypt it.
This work proposes an end-to-end encryption system for the Matrix protocol based on blockchain technology. Matrix is a decentralized protocol and network for real-time communication that is currently mostly used for instant messaging. This protocol was selected because of its versatility and extensibility.
Using the Secret Store feature in OpenEthereum, the proposed system encrypts data using keys stored on the Ethereum blockchain. Access control to the keys is also handled by the Secret Store via a smart contract.
The proposed encryption system has multiple advantages over alternative schemes: The underlying blockchain technology reduces the risk of data loss because of its decentralized and distributed nature. Thanks to the use of smart contracts this system also allows for the creation of an advanced access control system to decryption keys.
In order to test and analyze the proposed design, a reference implementation was created in the form of a library. This library can be used for future research, but also as a building block for different applications to easily implement end-to-end encryption based on blockchain technology.
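As a toy illustration of the key-store idea described in the abstract, here is a deliberately simplified Python sketch: a key is held in a store with an access-control list, and messages are encrypted with that key so only permitted users can decrypt. Everything here (the XOR cipher, the `ToySecretStore` class) is invented for illustration; the real system uses OpenEthereum’s Secret Store and smart contracts, not this code:

```python
import hashlib
from itertools import count

def keystream(key, length):
    # Derive a byte stream from the key (toy construction, not real crypto).
    out = b""
    for i in count():
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor(data, key):
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class ToySecretStore:
    """Stand-in for the Secret Store: holds keys plus an access-control list."""
    def __init__(self):
        self.keys, self.acl = {}, {}
    def put(self, key_id, key, allowed_users):
        self.keys[key_id] = key
        self.acl[key_id] = set(allowed_users)
    def get(self, key_id, user):
        if user not in self.acl[key_id]:
            raise PermissionError(user)
        return self.keys[key_id]

store = ToySecretStore()
store.put("room-1", b"shared secret", allowed_users={"alice", "bob"})
ciphertext = xor(b"hello bob", store.get("room-1", "alice"))
print(xor(ciphertext, store.get("room-1", "bob")))  # → b'hello bob'
```

In the thesis’s design, the access-control check in `get` is performed by a smart contract on the Ethereum blockchain rather than by a local ACL.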

If you’re interested you can read the full thesis: Blockchain-based end-to-end encryption for Matrix instant messaging.

Tracker at GUADEC 2020

GNOME’s conference is online this year, for obvious reasons. I spent the last 3 months teaching online classes, so hopefully I’m prepared! I’m sad that there’s no Euro-trip this year and we can’t hang out in the pub, but it’s nice that we’re saving hundreds of plane journeys.

There will be two talks related to Tracker: Carlos and I speaking about Tracker 3 (Friday 23rd July, 16.45 UTC), and myself on how to deal with the challenges of working on GNOME’s session-wide daemons (Thursday 22nd July, 16.45 UTC). There are plenty of other fascinating talks, including inevitably one scheduled the same time as ours which you should, of course, watch as a replay during the break 🙂

Self-contained Tracker 3 apps

Let’s go back one year. The plan for Tracker 3 emerged when I spoke to Carlos Garnacho at GUADEC 2019 in Thessaloniki probably over a Freddo coffee like this one…

5 people drinking coffee in Thessaloniki

We had lots of improvements we wanted to make, but we knew we were at the limit of what we could do to Tracker while keeping compatibility with the 10+ year old API. Changing a system service isn’t easy though (hence the talk). I’m a fan of the ‘Flatpak model’ of app deployment, and one benefit is that it can allow the latest apps to run on older LTS distributions. But there’s no magic there – this only works if the system and session-wide services follow strict compatibility rules.

Anything that wants to be running as a system service in combination with any kind of sandboxing system must have a protocol that is ABI stable and backwards compatible. (From

Tracker 3.0 adds important features for apps and users, but these changes require apps to use a new D-Bus API which won’t be available on older operating systems such as Ubuntu 20.04.

We’re considering various ways around this, and one that I prototyped recently is to bundle Tracker3 inside the sandbox. The downside is that some folders will be double indexed on systems where we can’t use the host’s Tracker, but the upside is the app actually works on all systems.

I created a branch of gnome-music demoing this approach. GNOME’s CI is so cool now that you can just go to that page, click ‘View exposed artifact’, then download and install a Flatpak bundle of gnome-music using Tracker 3! If you do, please comment on the MR about whether it works for you 🙂 Next on my list is GNOME Photos, but this is more complex for various reasons.

Blocklists and Allowlists

The world needs major changes to stamp out racism, and renaming variables in code isn’t a major change. That said, the terms ‘blacklist’ and ‘whitelist’ rely on and reinforce an association of ‘black bad, white good’. I’m happy to see a trend to replace these terms including Google, Linux, the IETF, and more.

It was simple to switch Tracker 3 to use the more accurate terms ‘blocklist’ and ‘allowlist’. I also learned something about stable releases — I merged a change to the 2.3 branch, but I didn’t realise that we consider the stable branch to be in ‘string freeze’ forever. (It’s obvious in hindsight 🙂 We’ve now reverted that but a few translation teams already updated their translations, so to the Spanish, Brazilian Portuguese and Romanian translators – sorry for creating extra work for you!

Acknowledging merge requests

I’ve noticed while working on app porting that some GNOME projects are quite unresponsive to merge requests. I’ve been volunteering my time as a GNOME contributor for longer than I want to remember, but it still impacts my motivation if I send a merge request and nobody comments. Part of the fun of contributing to GNOME is being part of such a huge and talented community. How many potential contributors have we lost simply by ignoring their contributions?

Video of paper aeroplanes falling to the street

This started me thinking about how to improve the situation. Being a GNOME maintainer is not easy and is in most cases unpaid, so it’s not constructive to simply complain about the situation. Better if we can mobilise people with free time to look at whatever uncommented merge requests need attention! In many cases you can give useful feedback even if you don’t know the details of the project in question – if there’s a problem then it doesn’t need a maintainer to raise it.

So my idea, which I intend to raise somewhere other than my blog when I have the time, is that we could have a bot that posts every Friday with a list of merge requests that are over a week old and haven’t received any comments. If you’re bored on a Friday afternoon or during the weekend you’ll be able to pick a merge request from the list and give some feedback to the contributor – a simple “Thanks for the patch, it looks fine to me” is better than silence.
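The bot’s core filter could be as simple as this sketch (the dict field names only loosely mirror what a GitLab API client would return; they are assumptions for the example, not a real API contract):

```python
from datetime import datetime, timedelta, timezone

def stale_merge_requests(merge_requests, now=None, max_age_days=7):
    """Pick open MRs older than a week that never received a comment.
    `merge_requests` is a list of dicts with hypothetical field names."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        mr for mr in merge_requests
        if mr["state"] == "opened"
        and mr["created_at"] < cutoff
        and mr["comment_count"] == 0
    ]

now = datetime(2020, 7, 24, tzinfo=timezone.utc)
mrs = [
    {"title": "Fix crash", "state": "opened",
     "created_at": datetime(2020, 7, 10, tzinfo=timezone.utc), "comment_count": 0},
    {"title": "New icon", "state": "opened",
     "created_at": datetime(2020, 7, 23, tzinfo=timezone.utc), "comment_count": 0},
]
print([mr["title"] for mr in stale_merge_requests(mrs, now=now)])  # → ['Fix crash']
```

The rest of the bot would just be fetching the MR list from the GitLab API and posting the result somewhere visible.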

Let me know what you think of the idea! Can you think of a better way to make sure we have speedy responses to merge requests?

Badge: I'm presenting at GUADEC 2020

See you there!

July 19, 2020

The Second Milestone

Credits: UNIX and Linux System Administration Handbook (Nemeth, Snyder, Hein, Whaley)

This is the second post about my GSoC progress. I know this is a late post, but I hope that the content coming your way suffices for the delay. So, cheers readers! Let’s dive into it!

Getting the permissions right

Last post was about porting the Basic tab of the Properties window to use GtkBuilder, where we got the basic process of porting right. This time we come with a similar goal for the permissions tab, but with a tad bit more content than the basics.

Like before, the task started with porting the outer containers of the permissions page, i.e. defining them in XML and packing their inner contents inside a GtkBox in code. Commit 1c978477 achieves this.

Structure of the permissions page

The outermost container of the permissions page packs a grid which delivers the basic use case of the permissions page; in addition, the box also contains two labels at the top and bottom which respectively show up when the dialog is launched under exceptional ownership and file type circumstances.

Naturally, the next step in our course of action is to port these labels along with their separators to the template definition, and to make sure they appear only when needed using gtk_widget_show () in code. Commit 5953986c achieves this. So far this has been a pretty monotonous story; moving on to the interesting parts.

The Permissions Grid

Interestingly, the permissions grid is not as simple as it sounds (or looks!). It packs a total of 9 GtkLabel:GtkComboBox pairs which hide or show based on the selection for which the properties window has been launched, morphing the appearance of the permissions grid.

The first image corresponds to permissions for a directory, and the second one is permissions for a composite selection of files and directories. Porting each row of the grid involves adding a GtkLabel:GtkComboBox pair to the .ui file and then gluing their data flow mechanisms to the existing logic in code.

Wait, did we miss something? Where are the options in the ComboBoxes like we know them to be?

GtkComboBoxes have Menu items too

Clearly we owe you an explanation for adding empty combo boxes in Glade and populating them in code instead of doing so in Glade; there is one in the next section of this post. Commit bd8f590a ports all the GtkLabel:GtkComboBox rows to the .ui template.

The remaining rows of the grid, like the execute label, the security context, and the change-permissions button, are ported to the .ui template in commit 13cff098.

GEEKY TEXT WARNING: The next section gets technical! Proceed Accordingly!

GtkComboBox: The Explanation as Promised

If you expected it to be simple, well it’s not! When you see many boxes sliding out of a single one, you know it’s not simple! So, let’s put our nerd-glasses on and start dismantling the GtkComboBox.

According to the documentation :

> A GtkComboBox is a widget that allows the user to choose from a list of valid choices. The GtkComboBox displays the selected choice. When activated, the GtkComboBox displays a popup which allows the user to make a new choice.

> The GtkComboBox uses the model-view pattern: the list of valid choices is specified in the form of a tree model, and the display of the choices can be adapted to the data in the model by using cell renderers, as you would in a tree view. This is possible since GtkComboBox implements the GtkCellLayout interface. The tree model holding the valid choices is not restricted to a flat list, it can be a real tree, and the popup will reflect the tree structure.

All in all, this is what it says:

Clearly, Glade supports the creation of empty GtkComboBoxes! The question arises: does it also support tree models and cell renderer widgets? The answer is yes, it does! Glade supports the creation of data-holding widgets like GtkTreeStore and GtkListStore. The options can be found neatly tucked away in the overflow menu.

So a GtkListStore can be initialized in Glade in the form of a table where column types can be defined and then data rows can be populated. Now,

  • Create GtkListStore – done
  • Connect the GtkListStore with a GtkComboBox (option found in the General tab)
  • Assign the cell renderer of the ComboBox a column from the tree model (open the Combo Editor by right-clicking on the ComboBox and selecting Edit)

And this is how a complete ComboBox along with all its data members can be constructed in Glade.
Why did we not populate permission-comboBoxes using Glade ?
  1. Permissions list stores have a column for the permissions enum. While we can use the enum value symbol in C, we would have to use a raw numeric value in Glade, which is going to be quite obscure and hard to document
  2. Porting the models to GtkBuilder UI definition isn’t really improving hackability / design-ability much because of how Glade handles it
  3. The comboBox itself is already defined in the .ui file, so it’s possible to tweak the design even if the comboBoxes look empty
  4. If it would make things more complicated, then it’s more reasonable to let the code handle it.
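The first point above can be made concrete with a small sketch (the enum and labels below are hypothetical stand-ins, not Nautilus’s actual C code): in code the combo rows can reference the enum symbolically, while a GtkListStore defined in Glade would have to hard-code the opaque numeric values.

```python
from enum import IntEnum

class PermissionsType(IntEnum):
    # Hypothetical stand-in for a permissions enum; the values are the
    # numbers Glade would force us to hard-code.
    NONE = 0
    READ = 4
    READ_WRITE = 6

# Rows for the combo box model: (display label, enum value).
# In a Glade-defined list store, only the raw 0 / 4 / 6 would appear.
rows = [
    ("None", PermissionsType.NONE),
    ("Read-only", PermissionsType.READ),
    ("Read and write", PermissionsType.READ_WRITE),
]

for label, value in rows:
    print(f"{label}: {value.name} = {int(value)}")
```

Keeping the rows in code means the symbolic names document themselves, which is exactly the readability argument made above.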

Credits: @antoniof. As a result, @antoniof opened an issue in Glade about this: Combo-Editor Discoverability

Huh! That was tedious! The next blog post will talk about revisiting the Basic and Permissions pages and completing them once and for all.

July 16, 2020

Social Events at GUADEC 2020

Part of the magic of GUADEC is going out to amazing dinners with your new and old friends; exploring the beautiful parts of somewhere new; and maybe even staying up to watch the sun rise.

This year is a little bit different – but there are still lots of ways to get to know each other, try out new things, and hopefully have a little fun.

Four smiling people at a restaurant
Photo courtesy of Sriram Ramkrishna. Licensed CC-BY-SA

Social Events

Wednesday (22 July) at 21:10 UTC you can join Melissa Wu for drinks. She’ll be teaching us some fun cocktail and mocktail recipes. Thank you Woodlyn Travel for making this happen! (See Notes below.)

Sriram Ramkrishna, every GNOMEie’s fun uncle, is also quite the cook. Join him Thursday (23 July) at 21:00 UTC to learn some of his kitchen secrets. I recommend getting the ingredients ahead of time so you can cook along and then we can all snack together. (See Notes below.)

You might know Sumana Harihareswara from her work with Python, GNOME, Zulip, Mailman, MediaWiki, or many other places in the free software world. She’s also hilarious. If you like to laugh, tune in on Friday (24 July) at 21:00 UTC to hear Sumana’s stand-up comedy.

There might not be a Museum BoF this year, but Ayanna Dozier will be bringing the museum experience to us on Monday (27 July) at 21:00 UTC. Ayanna Dozier is a scholar, filmmaker, and performance artist, and the Joan Tisch Teaching Fellow at the Whitney Museum and an Adjunct Professor at Fordham University. She’ll be introducing us to modern art (1930 – 1965) through key artists and important historical events.

Social Hours

Every night after the evening social events, you can attend or host a Social Hour. Social hours are a time to get together around any topic you’re interested in. These will include a Tea Party, GNOME Beers, and a GLBTQ+ social time. We especially encourage Social Hours based on non-English languages. If you want to host a social hour, please sign up for an account and add it to the wiki.


Ingredients for Drinks

Cocktail 1

  • Beer (ideally Mexican beer)
  • Tomato Juice
  • Lime Juice
  • Optional: Worcestershire Sauce, Hot Sauce, Tajin Seasoning, Lime wedge
    Non-alcoholic version – all of the above minus the beer!

Cocktail 2

  • Tequila
  • Grapefruit Juice
  • Lime Juice
  • Agave Nectar, Simple Syrup, or Honey
  • Salt
  • Ice

Cocktail 3

  • Whiskey
  • Lemon or limes
  • Honey, Simple Syrup, or Maple Syrup
  • Ginger beer or soda water
  • Mint or Basil
  • Optional: Berries for flavor
  • Ice

Cocktail 4

  • Vodka
  • Coconut Water
  • Pineapple Juice
  • Fresh Lime Juice
  • Agave Nectar, Simple Syrup, or Honey
  • Club Soda
  • Ice

Ingredients for Cooking

Recipe 1

  • 1 cup mayonnaise (vegan works too)
  • 1 cup Parmesan cheese, shredded
  • 14 oz can artichoke hearts (in brine, not oil), drained and finely chopped

Recipe 2

  • 2 slices of bread
  • Various veggies – mushrooms, carrots, onions, beets, cabbage, green or red peppers – thinly sliced
  • 1-2 TB Mayonnaise (vegan works too)
  • 1-2 TB Cranberry pickle (recipe included)
  • 1 slice of cheese (muenster, cheddar, etc)
  • 1 TB of butter

Cranberry Pickle

  • 2 TB Oil
  • 3 tsp mustard seeds
  • 1 tsp fenugreek seeds
  • 1 cup cranberries
  • 1/2 tsp salt (to taste)
  • 1/4 tsp turmeric
  • 1/4 tsp asafoetida
  • 1 tsp kashmiri chilli powder
  • 1/4 tsp sugar (optional)

July 15, 2020

GStreamer Rust Bindings & Plugins New Releases

It has been quite a while since the last status update for the GStreamer Rust bindings and the GStreamer Rust plugins, so the new releases last week make for a good opportunity to do so now.


I won’t write too much about the bindings this time. The latest version as of now is 0.16.1, which means that since I started working on the bindings there were 8 major releases. In that same time there were 45 contributors working on the bindings, which seems quite a lot and really makes me happy.

Just as before, I don’t think any major APIs are missing from the bindings anymore, even for implementing subclasses of the various GStreamer types. The wide usage of the bindings in Free Software projects and commercial products also shows both the interest in writing GStreamer applications and plugins in Rust as well as that the bindings are complete enough and production-ready.

Most of the changes since the last status update involve API cleanups, usability improvements, various bugfixes and addition of minor API that was not included before. The details of all changes can be read in the changelog.

The bindings work with any GStreamer version since 1.8 (released more than 4 years ago), support APIs up to GStreamer 1.18 (to be released soon) and work with Rust 1.40 or newer.


The biggest progress probably happened with the GStreamer Rust plugins.

There also was a new release last week, 0.6.0, which was the first release where selected plugins were also uploaded to the Rust package (“crate”) database, crates.io. This makes it easy for Rust applications to embed any of these plugins statically instead of depending on them to be available on the system.

Overall there are now 40 GStreamer elements in 18 plugins by 28 contributors available as part of the gst-plugins-rs repository, one tutorial plugin with 4 elements and various plugins in external locations.

These 40 GStreamer elements are the following:

  • rsaudioecho: Port of the audioecho element from gst-plugins-good
  • rsaudioloudnorm: Live audio loudness normalization element based on the FFmpeg af_loudnorm filter
  • claxondec: FLAC lossless audio codec decoder element based on the pure-Rust claxon implementation
  • csoundfilter: Audio filter that can use any filter defined via the Csound audio programming language
  • lewtondec: Vorbis audio decoder element based on the pure-Rust lewton implementation
  • cdgdec/cdgparse: Decoder and parser for the CD+G video codec based on a pure-Rust CD+G implementation, used for example by karaoke CDs
  • cea608overlay: CEA-608 Closed Captions overlay element
  • cea608tott: CEA-608 Closed Captions to timed-text (e.g. VTT or SRT subtitles) converter
  • tttocea608: CEA-608 Closed Captions from timed-text converter
  • mccenc/mccparse: MacCaption Closed Caption format encoder and parser
  • sccenc/sccparse: Scenarist Closed Caption format encoder and parser
  • dav1dec: AV1 video decoder based on the dav1d decoder implementation by the VLC project
  • rav1enc: AV1 video encoder based on the fast and pure-Rust rav1e encoder implementation
  • rsflvdemux: Alternative to the flvdemux FLV demuxer element from gst-plugins-good, not feature-equivalent yet
  • rsgifenc/rspngenc: GIF/PNG encoder elements based on the pure-Rust implementations by the image-rs project
  • textwrap: Element for line-wrapping timed text (e.g. subtitles) for better screen-fitting, including hyphenation support for some languages
  • reqwesthttpsrc: HTTP(S) source element based on the Rust reqwest/hyper HTTP implementations and almost feature-equivalent with the main GStreamer HTTP source souphttpsrc
  • s3src/s3sink: Source/sink element for the Amazon S3 cloud storage
  • awstranscriber: Live audio to timed text transcription element using the Amazon AWS Transcribe API
  • sodiumencrypter/sodiumdecrypter: Encryption/decryption element based on libsodium/NaCl
  • togglerecord: Recording element that allows pausing/resuming recordings easily and considers keyframe boundaries
  • fallbackswitch/fallbacksrc: Elements for handling potentially failing (network) sources, restarting them on errors/timeout and showing a fallback stream instead
  • threadshare: Set of elements that provide alternatives for various existing GStreamer elements but allow sharing the streaming threads between each other to reduce the number of threads
  • rsfilesrc/rsfilesink: File source/sink elements as replacements for the existing filesrc/filesink elements

July 14, 2020

Refactoring Pitivi's Media Library

Since my GSoC project is about improving Pitivi’s Media Library and introducing new features to it, the first task was to clean it up.

To display assets the Media Library used a Gtk.TreeView widget to show a detailed list view and a Gtk.IconView widget to show a simpler icon view. Some major drawbacks with the previous implementation using two separate widgets are:

  • We had two widgets in memory (we toggled their visibility to switch between the list view and the icon view)
  • We had redundant code and callbacks for the same signals (one to handle Gtk.TreeView and the other for Gtk.IconView)
  • Any new feature would require us to implement it in two widgets; using two widgets also brought their native bugs with them

What we needed was a single widget that could support both views, saving us from all the aforementioned drawbacks. Luckily such a widget called Gtk.FlowBox was introduced in GTK 3.12, six years ago.

The plan was to only use Gtk.FlowBox under the hood to display both list view and icon view, hence refactoring the already present logic into Gtk.FlowBox was the only task.

Now let us look at some technical details around how we’re using Gtk.FlowBox and how we refactored the code.

First leap:

One idea was to first refactor listview and then iconview into Gtk.FlowBox. This seemed a good approach at first as we were breaking a big task into smaller steps, but soon an issue came up.

Gtk.FlowBox uses Gio.ListStore, while Gtk.TreeView and Gtk.IconView rely on Gtk.TreeModel. If we kept one of the previous widgets we would have to keep the TreeModel along with it too. This would just add unnecessary tasks to the project; we would have to add some temporary methods to parse and sync both models. So after a brief discussion it was decided to simply go all in. There was no actual need of breaking the task in two parts in the first place.

Setting up Gtk.FlowBox for the first time:

I started by introducing a minimal Gtk.FlowBox widget and commenting out all the code that powered the previous views. Initially it took some time to get used to FlowBox and to figure out how to actually use Gio.ListStore with Gtk.FlowBox.

The way we toggle between views:

In the previous approach we were switching between the Gtk.TreeView and Gtk.IconView widgets having one of them visible at a time. Initially I replicated a similar approach for FlowBox, keeping two widgets per item.

We preferred to keep the logic simpler and instead recreate the widgets when the view mode is changed. This approach was easily possible with FlowBox by re-binding the model and specifying a different create_widget_func. This also saved us from wasting memory in storing two sets of child widgets.
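The re-binding approach can be mimicked with a minimal stand-in (plain Python with no GTK, so the class and factory names below are illustrative rather than Pitivi’s actual code): one model, two widget factories, and a `bind_model` that rebuilds the children whenever the view mode changes.

```python
def make_list_row(asset):
    # Detailed list-view representation of an asset.
    return f"[row] {asset['name']} ({asset['duration']}s)"

def make_icon_tile(asset):
    # Simpler icon-view representation of the same asset.
    return f"[icon] {asset['name']}"

class FakeFlowBox:
    """Minimal stand-in for Gtk.FlowBox.bind_model(): rebuilds all children
    from the model using the given widget factory."""
    def __init__(self):
        self.children = []
    def bind_model(self, model, create_widget_func):
        self.children = [create_widget_func(item) for item in model]

model = [{"name": "intro.ogv", "duration": 12}, {"name": "beep.wav", "duration": 1}]
box = FakeFlowBox()
box.bind_model(model, make_icon_tile)      # icon view
print(box.children[0])                     # → [icon] intro.ogv
box.bind_model(model, make_list_row)       # switch to list view: just re-bind
print(box.children[0])                     # → [row] intro.ogv (12s)
```

With the real Gtk.FlowBox the same toggle is a single bind_model call with a different create_widget_func, and only one set of child widgets exists at a time.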

Setting up Drag and Drop along with Rubber Band selection:

This task took a significant amount of time. In my first implementation of Drag and Drop the dragging wouldn’t work at all; instead, multiple items would get selected in the direction of dragging. At first I didn’t look much into the cause behind this behaviour; my initial stance was that maybe I was not implementing drag and drop correctly in Pitivi. After a lot of debugging, when nothing worked, I realized I should try implementing it in a minimal Python script first. This is when I noticed that by default in FlowBox, if we drag over its children, they come under rubberband selection.

The first subtask was to stop this behaviour, because in Pitivi users would prefer dragging an asset to the timeline by default over selecting multiple assets. There was no clear way to differentiate between dragging as in drag and drop and dragging in the case of rubberband selection. To get a better idea of what was going on, I looked into the codebase of Gtk.FlowBox, where I found it used Gtk.Gestures to govern the rubberband behaviour, but I still couldn’t find anything on how to distinguish it from a drag and drop operation.

In Pitivi’s previous implementation Gtk.IconView had both of these operations perfectly working together. A major point I came across while playing with it was that one could only initiate the rubberband selection from an empty space. Dragging over an asset would simply initiate a drag and drop operation. I then decided to read the codebase of Gtk.IconView to better understand how to implement/replicate it in Pitivi.

Now we differentiate between drag & drop and the rubberband selection by controlling the propagation of the “button-press-event” signal. If we let the signal propagate further by returning False in the button-press-event handler, we say yes to rubberband selection, which means everything you drag over is selected. If we stop the signal by returning True, we block the default behaviour and drag and drop comes into play.

How do we decide whether to propagate the signal or not? This is inspired by Gtk.IconView: we simply check if there is any asset under the cursor when we start dragging. If yes, the dragging is meant for a drag and drop operation, so we block the propagation of the “button-press-event” signal. Otherwise we propagate the signal, which results in rubberband selection.
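The decision logic boils down to a tiny function. This sketch uses plain booleans in place of GTK's event machinery and a made-up handler name, so treat it as an illustration of the rule rather than Pitivi’s exact code:

```python
# Gdk event-handler return values: True stops further handling,
# False lets the default handler (FlowBox's rubberband gesture) run.
EVENT_STOP = True
EVENT_PROPAGATE = False

def on_button_press(flowbox_child_under_cursor):
    """Sketch of the decision described above: a press on an asset starts
    drag-and-drop, so we block FlowBox's default rubberband handler; a
    press on empty space lets the default handler through."""
    if flowbox_child_under_cursor is not None:
        return EVENT_STOP        # drag-and-drop takes over
    return EVENT_PROPAGATE       # rubberband selection proceeds

print(on_button_press("some-asset"))  # → True
print(on_button_press(None))          # → False
```

In the real handler, the child under the cursor would come from something like Gtk.FlowBox.get_child_at_pos() with the event coordinates.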

P.S. A shout out to the previous implementation: the codebase was very readable, and I never struggled to understand what was going on. That was a major booster!

What’s next in Pitivi?

Now, with the upgraded Media Library, we are all set to introduce new features! The first one we are working on is adding tagging functionality to clips, which will allow us to display the assets by tag in a Folder View.

This feature empowers Pitivi’s user base to categorize their assets in the Media Library. An asset can have multiple tags. Multiple assets can have common tags. Assets can be filtered based on their tags by searching.
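The tag model described above can be sketched in a few lines. This is a minimal illustration, not Pitivi’s actual API: assets carry a set of tags, multiple assets can share tags, and the library can be filtered by tag or grouped for a folder view.

```python
# Hypothetical asset/tag model: names are illustrative only.

class Asset:
    def __init__(self, name, tags=()):
        self.name = name
        self.tags = set(tags)   # an asset can have multiple tags

def filter_by_tag(assets, tag):
    """Filter assets by a tag, as a search would."""
    return [a for a in assets if tag in a.tags]

def group_by_tag(assets):
    """Group assets into per-tag folders for a folder view."""
    folders = {}
    for asset in assets:
        for tag in asset.tags:
            folders.setdefault(tag, []).append(asset)
    return folders

clips = [Asset("beach.mp4", {"holiday", "2020"}),
         Asset("city.mp4", {"2020"})]
```

With this model, filtering `clips` by "holiday" returns only `beach.mp4`, while the "2020" folder contains both assets.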

The first step is to extend the current asset properties dialog to include a Tags property and a proper way to add and store new tags.

The second step is to extend it to manage possible operations on tags when multiple assets are under selection!

The last step would be to introduce the Folder View, which groups the assets into folders based on their tags.


There’s been a lot of discussion on this proposed Fedora change for Workstation to use BTRFS.

First off, some background: I reprovision my workstation about every 2-3 months to avoid it becoming too much of a "pet". I took the opportunity for this reprovision to try out BTRFS again (it’d been years).

Executive summary

BTRFS should be an option, even an emphasized one. It probably shouldn’t be the default for Workstation, and shouldn’t be a default beyond that for server use cases (e.g. Fedora CoreOS).

Why are there multiple Linux filesystems?

There are multiple filesystems in the Linux kernel for good reasons. It’s basically impossible to optimize for all use cases at once, and there are fundamental tradeoffs to make. BTRFS in particular has a lot of features…and those features have costs. Not every use case needs those features, and the costs can be close to prohibitive for things like databases.

BTRFS is good for "pet" systems

There is this terminology in the industry of pets vs cattle – I once saw a talk that proposed "elephants vs ants" instead which is more appealing. Lately I tend to use "disposable" or "reprovisionable" for the second term.

I mentioned above I reprovision my workstation periodically, but it’s still somewhat of a "pet". I don’t have everything in config management yet (and probably never will); I change things often enough that it’s hard to commit to 100% discipline to record every change in git instead of just running a CLI or writing a file. But I have all the important stuff. (And I take backups of data separately of course.)

For people who don’t have much in configuration management – the server or desktop system that has years of individually built-up changes (whether from people doing things manually over ssh or interactively via a GUI like Cockpit) – being able to take a filesystem snapshot of things is an extremely compelling feature.

Another great BTRFS-style use case is storing data like your photos on a local drive instead of uploading them to the cloud, etc.

The BTRFS cost

Those features, though, come at a cost. And this goes back to the "pets" vs "disposable" systems distinction and where the "source of truth" is. For users managing disposable systems, the source of truth isn’t the Unix filesystem – it’s most likely a form of GitOps. Or take the case of Kubernetes – it’s a cluster, with the primary source being etcd.

And of course people are using storage systems like PostgreSQL or Ceph for data, or an object storage system.

The important thing to see here is that in these cases, the "source of truth" isn’t a single computer (a single Unix filesystem) – it’s a distributed cluster.

For all these databases, performance is absolutely critical. They don’t need the underlying filesystem to do much other than pass through writes to disk, because they are already managing things like duplication/checksumming/consistency at a higher level.

As most BTRFS users know (or have discovered the hard way) you really need to use nodatacow for these – effectively "turning off" a lot of BTRFS features.

Another example is virtual machine images, which is an interesting one because the "pet" vs "disposable" discussion here becomes recursive – is the VM a pet or disposable, etc.

Not worth paying for reprovisionable systems

For people who manage "reprovisionable" systems, there’s usually not much value using BTRFS for things like operating system data or /etc (they can just blow it away and reprovision), and a clear cost where they need to either use nodatacow on the things that do matter (losing a lot of the BTRFS features for that data), or explicitly use e.g. xfs/ext4 for them, going back into a world of managing "mixed" storage.

In particular, I would strongly argue against defaulting to BTRFS for Fedora CoreOS because we are explicitly pushing people away from creating these types of "pet" systems.

To say this another way, I’ve seen some Internet discussion about this read the proposed change as applying beyond Fedora Workstation, and that’s wrong.

But if you e.g. want to use BTRFS anyway for Fedora CoreOS (perhaps using a separate subvolume for /var, where persistent container data is stored, mounted with nodatacow for things like etcd), that could make sense! We are quite close to finishing root filesystem reprovisioning in Ignition.

But a great option if you know you want/need it!

As I mentioned above, my workstation (FWIW a customized Silverblue-style system) seems like a nearly ideal use case for BTRFS. I’m not alone in that! I’m likely going to roll with it for a few months until the next reprovisioning time, unless I hit some stumbling blocks.

However, I am already noticing the Firefox UI periodically lock up for seconds at a time, which wasn’t happening before. Since I happen to know Firefox uses SQLite (which like the other databases mentioned above, conflicts with btrfs), I tried this and yep:

walters@toolbox> find ~/.mozilla/ -type f -exec filefrag {} \; | grep -Ee '[0-9][0-9][0-9]+ extents found'
firefox/xxxx.default-release/storage/.../xxxx.sqlite: 1825 extents found

And that’s only a few days old! (I didn’t definitively tie the UI lockups to that, but I wouldn’t be surprised. I’d also hope Firefox isn’t writing to the database on the main thread, but I’m sure it’s hard for the UI to avoid blocking on some queries).

I just found this stackoverflow post with some useful tips around manually or automatically defragmenting but…it’s really difficult to say that all Fedora/Firefox users should need to discover this and make the difficult choice of whether they want BTRFS features or performance for individual files after the fact. Firefox upstream probably can’t unilaterally set the nodatacow option on their databases because some users might reasonably want consistent snapshots for their home directory. A lot of others though might use a separate backup system (or Firefox Sync) and much prefer performance, because they can just restore their browser state like bookmarks/history from backup if need be.

Random other aside: sqlite performance and f2fs

In a tangentially related "Linux filesystems are optimized for different things" thread, the f2fs filesystem mostly used by Android (AFAIK) has special APIs designed specifically for SQLite, because SQLite is so important to Android.


All Fedora variants are generic to a degree; I don’t think there will ever be just one Linux filesystem that’s the only sane choice. It makes total sense to have BTRFS as a prominent option for people creating desktops (and laptops and to a lesser degree servers).

The default however is an extremely consequential decision. It implies many years of dealing with the choice in later bug reports, etc. It really requires a true commitment to that choice for the long term.

I’m not sure it makes sense to push even Linux workstation users towards a system that’s more "pet" oriented by default. How people create disposable systems (particularly workstations) is a complex topic with a lot of tradeoffs; I’d love for the Fedora community to have more blog entries about this in the Magazine. One of those solutions might be using a BTRFS root and using send/receive to a USB drive for backups!

But others would be about the things I and others do to manage "disposable" systems: managing data in /home in git, using image systems like rpm-ostree for the base OS to replicate well-known state instead of letting the package database be a "pet", storing development environments as container images, etc. Those work on any Unix filesystem without imposing any runtime cost. And that’s what I think most people provisioning new systems in 2020 should be doing.

OpenHMD and the Oculus Rift

For some time now, I’ve been involved in the OpenHMD project, working on building an open driver for the Oculus Rift CV1, and more recently the newer Rift S VR headsets.

This post is a bit of an overview of how the 2 devices work from a high level for people who might have used them or seen them, but not know much about the implementation. I also want to talk about OpenHMD and how it fits into the evolving Linux VR/AR API stack.


In short, OpenHMD is a project providing open drivers for various VR headsets through a single simple API. I don’t know of any other project that provides support for as many different headsets as OpenHMD, so it’s the logical place to contribute for the largest effect.

OpenHMD is supported as a backend in Monado, and in SteamVR via the SteamVR-OpenHMD plugin. Working drivers in OpenHMD open up a range of VR games – as well as non-gaming applications like Blender. I think it’s important that Linux and friends not get left behind in what is basically a Windows-only activity right now.

One downside is that it does come with the usual disadvantages of an abstraction API, in that it doesn’t fully expose the varied capabilities of each device, but instead the common denominator. I hope we can fix that in time by extending the OpenHMD API, without losing its simplicity.

Oculus Rift S

I bought an Oculus Rift S in April, to supplement my original consumer Oculus Rift (the CV1) from 2017. At that point, the only way to use it was in Windows via the official Oculus driver as there was no open source driver yet. Since then, I’ve largely reverse engineered the USB protocol for it, and have implemented a basic driver that’s upstream in OpenHMD now.

I find the Rift S a somewhat interesting device. It’s not entirely an upgrade over the older CV1. The build quality, and some of the specifications are actually worse than the original device – but one area that it is a clear improvement is in the tracking system.

CV1 Tracking

The Rift CV1 uses what is called an outside-in tracking system, which has 2 major components. The first is input from Inertial Measurement Units (IMU) on each device – the headset and the 2 hand controllers. The 2nd component is infrared cameras (Rift Sensors) that you space around the room and then run a calibration procedure that lets the driver software calculate their positions relative to the play area.

IMUs provide readings of linear acceleration and angular velocity, which can be used to determine the orientation of a device, but don’t provide absolute position information. You can derive relative motion from a starting point using an IMU, but only over a short time frame as the integration of the readings is quite noisy.
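To see why IMU-only position estimates degrade so quickly, here is a toy illustration (not real tracking code, and the numbers are made up): integrating acceleration twice turns even a tiny constant sensor bias into a position error that grows with the square of time.

```python
def dead_reckon(accel_samples, dt):
    """Double-integrate acceleration readings into a position estimate."""
    velocity = 0.0
    position = 0.0
    for a in accel_samples:
        velocity += a * dt
        position += velocity * dt
    return position

DT = 0.001    # a hypothetical 1 kHz IMU
BIAS = 0.02   # 0.02 m/s^2 accelerometer bias; the true acceleration is zero

drift_1s = dead_reckon([BIAS] * 1000, DT)     # ~1 cm of error after 1 s
drift_10s = dead_reckon([BIAS] * 10000, DT)   # ~1 m of error after 10 s
```

The error grows roughly as 0.5 · bias · t², which is why an IMU alone is only trustworthy over short time frames and needs the cameras to correct it.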

This is where the Rift Sensors get involved. The cameras observe constellations of infrared LEDs on the headset and hand controllers, and use those in concert with the IMU readings to position the devices within the playing space – so that as you move, the virtual world accurately reflects your movements. The cameras and LEDs synchronise to a radio pulse from the headset, and the camera exposure time is kept very short. That means the picture from the camera is completely black, except for very bright IR sources. Hopefully that means only the LEDs are visible, although light bulbs and open windows can inject noise and make the tracking harder.

Rift Sensor view of the CV1 headset and 2 controllers.
Rift Sensor view of the CV1 headset and 2 controllers.

If you have both IMU and camera data, you can build what we call a 6 Degree of Freedom (6DOF) driver. With only IMUs, a driver is limited to providing 3 DOF – allowing you to stand in one place and look around, but not to move.

OpenHMD provides a 3DOF driver for the CV1 at this point, with experimental 6DOF work in a branch in my fork. Getting to a working 6DOF driver is a real challenge. The official drivers from Oculus still receive regular updates to tweak the tracking algorithms.

I have given several presentations about the progress on implementing positional tracking for the CV1. Most recently at 2020 in January. There’s a recording at if you’re interested, and I plan to talk more about that in a future post.

Rift S Tracking

The Rift S uses Inside Out tracking, which inverts the tracking process by putting the cameras on the headset instead of around the room. With the cameras in fixed positions on the headset, the cameras and their view of the world moves as the user’s head moves. For the Rift S, there are 5 individual cameras pointing outward in different directions to provide (overall) a very wide-angle view of the surroundings.

The role of the tracking algorithm in the driver in this scenario is to use the cameras to look for visual landmarks in the play area, and to combine that information with the IMU readings to find the position of the headset. This is called Visual Inertial Odometry.

There is then a 2nd part to the tracking – finding the position of the hand controllers. This part works the same as on the CV1 – looking for constellations of LED lights on the controllers and matching what you see to a model of the controllers.

This is where I think the tracking gets particularly interesting. The requirements for finding where the headset is in the room, and the goal of finding the controllers require 2 different types of camera view!

To find the landmarks in the room, the vision algorithm needs to be able to see everything clearly and you want a balanced exposure from the cameras. To identify the controllers, you want a very fast exposure synchronised with the bright flashes from the hand controller LEDs – the same as when doing CV1 tracking.

The Rift S satisfies both requirements by capturing alternating video frames with fast and normal exposures. Each time, it captures the 5 cameras simultaneously and stitches them together into 1 video frame to deliver over USB to the host computer. The driver then needs to split each frame according to whether it is a normal or fast exposure and dispatch it to the appropriate part of the tracking algorithm.
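The dispatch step described above can be sketched like this. How the real driver classifies each frame (for instance, from the per-frame header data) is glossed over here; this sketch simply assumes the exposures alternate, and all names are illustrative.

```python
def dispatch_frames(frames, start_with_fast=True):
    """Route alternating stitched frames to the two tracking pipelines."""
    controller_frames = []   # fast exposure -> LED/controller tracking
    vio_frames = []          # normal exposure -> visual inertial odometry
    fast = start_with_fast
    for frame in frames:
        (controller_frames if fast else vio_frames).append(frame)
        fast = not fast      # exposures alternate frame by frame
    return controller_frames, vio_frames
```

Given four captured frames, the first and third would go to controller tracking and the second and fourth to the room-tracking algorithm.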

Rift S – normal room exposure for Visual Inertial Odometry.
Rift S – fast exposure with IR LEDs for controller tracking.

There are a bunch of interesting things to notice in these camera captures:

  • Each camera view is inserted into the frame in some native orientation, and requires external information to make use of the image data in it.
  • The cameras have a lot of fisheye distortion that will need correcting.
  • In the fast exposure frame, the light bulbs on my ceiling are hard to tell apart from the hand controller LEDs – another challenge for the computer vision algorithm.
  • The cameras are Infrared only, which is why the Rift S passthrough view (if you’ve ever seen it) is in grey-scale.
  • The top 16-pixels of each frame contain some binary data to help with frame identification. I don’t know how to interpret the contents of that data yet.


This blog post is already too long, so I’ll stop here. In part 2, I’ll talk more about deciphering the Rift S protocol.

Thanks for reading! If you have any questions, hit me up at or @thaytan on Twitter

Startup time profiling of gnome-software

Following on from the heap profiling I did on gnome-software to try and speed it up for Endless, the next step was to try profiling the computation done when starting up gnome-software — which bits of code are taking time to run?

tl;dr: There is new tooling in sysprof and GLib from git which makes profiling the performance of high-level tasks simpler. Some fixes have landed in gnome-software as a result.

Approaches which don’t work

The two traditional tools for this – callgrind and print statements – aren’t entirely suitable for gnome-software.

I tried running valgrind --tool=callgrind gnome-software, and then viewing the results in KCachegrind, but it slowed gnome-software down so much that it was unusable, and the test/retry cycle of building and testing changes would have been soul destroyingly slow.

callgrind works by simulating the CPU’s cache and looking at cache reads/writes/hits/misses, and then attributing costs for those back up the call graph. This makes it really good at looking at the cost of a certain function, or the total cost of all the calls to a utility function; but it’s not good at attributing the costs of higher-level dynamic tasks. gnome-software uses a lot of tasks like this (GsPluginJob), where the task to be executed is decided at runtime with some function arguments, rather than at compile time by the function name/call. For example “get all the software categories” or “look up and refine the details of these three GsApp instances”.

That said, it was possible to find and fix a couple of bits of low-hanging optimisation fruit using callgrind.

Print statements are the traditional approach to profiling higher-level dynamic tasks: print one line at the start of a high-level task with the task details and a timestamp, and print another line at the end with another timestamp. The problem comes from the fact that gnome-software runs so many high-level tasks (there are a lot of apps to query, categorise, and display, using tens of plugins) that reading the output is quite hard. And it’s even harder to compare the timings and output between two runs to see if a code change is effective.

Enter sysprof

Having looked at sysprof briefly for the heap profiling work, and discounted it, I thought it might make sense to come back to it for this speed profiling work. Christian had mentioned at GUADEC in Thessaloniki that the design of sysprof means apps and libraries can send their own profiling events down a socket, and those events will end up in the sysprof capture.

It turns out that’s remarkably easy: link against libsysprof-capture-4.a and call sysprof_capture_writer_add_mark() every time a high-level task ends, passing the task duration and details to it. There’s even an example app in the sysprof repository.
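sysprof_capture_writer_add_mark() is a C API; as a loose Python analogue (this is not the sysprof API, just the same shape of data), here is a context manager that records a start timestamp, duration, group, and name for each high-level task.

```python
import time
from contextlib import contextmanager

marks = []  # each entry mimics a sysprof mark: group, name, start, duration

@contextmanager
def profiled_task(group, name):
    """Record a mark for the enclosed high-level task."""
    start = time.monotonic_ns()
    try:
        yield
    finally:
        duration = time.monotonic_ns() - start
        marks.append({"group": group, "name": name,
                      "start": start, "duration": duration})

# Hypothetical example task name, standing in for a GsPluginJob:
with profiled_task("gnome-software", "get-updates"):
    time.sleep(0.01)  # stand-in for real work
```

The C version passes the same task duration and details to the capture writer, so the marks show up on the timeline in the sysprof UI instead of in a Python list.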

So I played around with this newly-instrumented version of gnome-software for a bit, but found that there were still empty regions in the profiling trace, where time passed and computation was happening, but nothing useful was logged in the sysprof capture. More instrumentation was needed.

sysprof + GLib

gnome-software does a lot of its computation in threads, bringing the results back into the main thread to be rendered in the UI using idle callbacks.

For example, the task to list the apps in a particular category in gnome-software will run in a thread, and then schedule an idle callback in the main thread with the list of apps. The idle callback will then iterate over those apps and add them to (for example) a GtkFlowBox to be displayed.

Adding items to a GtkFlowBox takes some time, and if there are a couple of hundred of apps to be added in a single idle callback, that can take several hundred milliseconds — a long enough time to block the main UI from being redrawn that the user will notice.
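One standard mitigation for this pattern (sketched here in plain Python, independent of GTK, and not necessarily what gnome-software does) is to split the work across several idle callbacks, adding a small batch of items per callback so each dispatch stays well under a frame’s budget. In GTK the callback would be scheduled with GLib.idle_add, which keeps calling it while it returns True.

```python
def make_batch_processor(items, batch_size, add_to_ui):
    """Build an idle-style callback that adds `batch_size` items per call."""
    items = iter(items)

    def process_batch():
        # Return True to stay scheduled (GLib.SOURCE_CONTINUE),
        # False once everything has been added (GLib.SOURCE_REMOVE).
        for _ in range(batch_size):
            try:
                add_to_ui(next(items))
            except StopIteration:
                return False
        return True

    return process_batch

added = []
step = make_batch_processor(range(10), 4, added.append)
while step():        # the main loop would do this, one call per idle dispatch
    pass
```

After the loop, all ten items have been added, but no single callback processed more than four of them.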

How do you find out which idle callback is taking too long? sysprof again! I added sysprof support to GLib so that GSource.dispatch events are logged (along with a few others), and now the long-running idle callbacks are displayed in the sysprof graphs. Thanks to Christian and Richard for their reviews and contributions to those changes.

This capture file was generated using sysprof-cli --gtk --use-trace-fd -- gnome-software, and the ‘gnome-software’ and ‘GLib’ lines in the ‘Timings’ row need to be made visible using the drop-down menu in the ‘Timings’ row.

It’s important to call g_task_set_source_tag() or g_task_set_name() on all the GTasks in your code, and to call g_source_set_name() on the GSources (like this), so that the marks in the capture file have helpful names.

In it, you can see the ‘get-updates’ plugin job on gnome-software’s flatpak plugin is taking 1.5 seconds (in a thread), and then 175ms to process the results in the main thread.

The selected row above that is showing it’s taking 110ms to process the results from a call to gs_plugin_loader_job_get_categories_async() in the main thread.

What’s next?

With the right tooling in place, it should be easier for me and others to find and fix performance issues like these, in gnome-software and in other projects.

I’ve submitted a few fixes, but there are more to do, and I need to shift my focus onto other things for now.

Please try out the new sysprof features, and add libsysprof-capture-4.a support to your project (if it would help you debug high-level performance problems). Ask questions on Discourse (and @ me).

To try out the new features, you’ll need the latest versions of sysprof and GLib from git.

Sound Recorder to modern HIG II

Yay, new changes have landed in Sound Recorder. Another blog about the recent changes.

This snapshot of the application is almost stable and usable for daily life, because we added back the delete and rename buttons that had been temporarily removed.

In this snapshot, two new cool features are added: “Pause Recording” and “Cancel Recording”.

Pause recording: Before this was added, people had to record one long single take even if they didn’t want to record some parts, because they couldn’t pause the recording. They were doing extra steps and using other apps to remove the unwanted parts of a recording. Now it’s a big relief for them.


Pause/Cancel recording

Cancel recording: This option is added for the convenience of a user who thinks “oops, I recorded something accidentally”. Before, they had to end the recording and then go back to the list to remove it from there. Now the option to cancel the recording is shown next to the stop recording button. So easy…


Rename recording

About Rename Recording: this feature was already there, but it wasn’t designed for small (mobile) screens. Well, now it’s fixed.

We changed back to the default recording naming pattern. Now it’s back to “Recording 1, 2, 3…”. No more confusing long names with date and time.
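The naming pattern above can be sketched as a tiny function that picks the next free default name; this mirrors the behaviour described, not Sound Recorder’s actual code.

```python
def next_default_name(existing):
    """Return the first unused "Recording N" name, counting from 1."""
    n = 1
    while f"Recording {n}" in existing:
        n += 1
    return f"Recording {n}"
```

Given an empty library this yields "Recording 1", and with "Recording 1" and "Recording 2" already present it yields "Recording 3".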

Now the application directly asks what you want to name the recording as soon as you press the stop recording button. Don’t worry, you can skip it and rename it later. The whole process is so smooth, as you can see above.

All these things were not single-line code fixes. I don’t want to make this blog long and boring, so I’m choosing not to include the code changes; you can find them in the commits if you want.

And look at the new recording list-row with its cool expander.

I have epic mentors, Bilal and Felipe, supporting epic things.

That’s all for now.

I’m also contributing to other GNOME projects in my free time, plus there are a lot of things going on IRL, but I can still promise new things coming soon.

Nowadays I’m using Twitter as @kavanmevada; feel free to share anything anytime.

July 13, 2020

(Some) Highlights from GUADEC

There are so many exciting things happening at GUADEC this year, it would be impossible to highlight everything I’m looking forward to. What really excited me about the schedule this year is how diverse the topics are. I really do think this year’s GUADEC has something for everyone, from people just getting to know free and open source software (FOSS) to people who are hardcore GNOME contributors. Please note that all sessions will be captioned in English.

A woman stands behind a desk, looking up at slides. She has a microphone in her hand and is giving a presentation.
“Guadec 2013: Interns lightning talks” by Ana _Rey is licensed under CC BY-SA 2.0

Staff Sessions

I positively adore my coworkers. I’ll spare you how great they are, and instead focus on some of the talks they’ll be giving.

GTK Core Developer Emmanuele Bassi will be giving two talks: Being a GNOME Maintainer: Best Practices and Known Traps and Archaeology of Accessibility. Being a GNOME Maintainer will discuss what it means to be a GNOME maintainer, and Archaeology of Accessibility will be a technical deep dive into the work Emmanuele and others have been doing around accessibility. (Note: “Accessibility” refers to the ability of technology to accommodate the needs of users who have disabilities, visual impairments, etc.)

Melissa Wu, who is organizing the Community Engagement Challenge, will give two sessions as well. In her first, Remember What It’s Like to Be New to GNOME, she’ll talk about her experience coming to the GNOME community only a few months ago, getting to know people, and making things happen.

Melissa will also join me for A Year of Strategic Initiatives at GNOME, during which we’ll talk about a range of things that have happened at GNOME over the past year (and some future plans), with a focus on organizational sustainability and the initiatives that make us excited to work here.

Executive Director Neil McGovern will lead the Annual General Meeting, to provide everyone with an overview of what we’ve been doing and what we will do, and answer your questions.

Welcome to FOSS, Welcome to GNOME

New to FOSS? New to GNOME? Not sure what I’m talking about? Check out these sessions!

Building Better Community

In these talks you’ll learn about how to build better, stronger communities and be a better community member. A lot of this is applicable to your life outside of FOSS as well!

Building Better Software

These talks cover ways to build software better. Some of them are focused on GNOME, but all of them will be applicable to whatever you’re working on.

Join us!

Registration for GUADEC is free, but we encourage you to register anyway. Knowing how many people are attending, and learning about who you are, helps us make GUADEC better every year. Register today!

July 11, 2020

Introducing Minuit

It all started with a sketch on paper (beware of my poor penmanship).

Sketch of a UI

I was thinking about how I could build a musical instrument in software. Nothing new here, actually; there are even applications like VMPK that fit the bill. But this is also about learning, as I have started to be interested in music software.

This is how Minuit was born. The name comes from a play on words: in French, minuit is midnight and MIDI is midday. MIDI, of course, is the acronym for Musical Instrument Digital Interface, the technology at the heart of computer-aided music. Also, minuit sounds a bit like minuet, which is a dance of social origin.

I have several goals here:

  1. learn about MIDI: The application can be controlled using MIDI.
  2. learn about audio: Of course you have to output audio, so now is the best time to learn about it.
  3. learn about music synthesis: for now I use existing software to do so. The first instrument was ripped off Qwertone, to produce a simple tone, and the second is just a Rhodes toy piano using soundfonts.
  4. learn about audio plugins: the best way to bring new instruments is to use existing plugins, and there is a large selection of them that are libre.
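The “simple tone” idea from point 3 boils down to a little math: convert a MIDI note number to a frequency and synthesise sine samples. Minuit is written in Rust; this is just that math, sketched in Python with illustrative names.

```python
import math

def midi_to_freq(note):
    """MIDI note 69 is A4 = 440 Hz; each semitone is a factor of 2**(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def sine_tone(note, sample_rate=44100, duration=1.0, amplitude=0.5):
    """Generate `duration` seconds of a sine wave at the note's pitch."""
    freq = midi_to_freq(note)
    n = int(sample_rate * duration)
    return [amplitude * math.sin(2 * math.pi * freq * t / sample_rate)
            for t in range(n)]
```

Middle C (MIDI note 60) comes out at about 261.63 Hz, and the sample buffer would then be handed to whatever audio output the app uses.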

Of course, I will use Rust for that, and that means I will have to deal with the gaps found in the Rust ecosystem, notably by interfacing libraries that are meant to be used from C or C++.

I also had to create a custom widget for the piano input, and for this I basically rewrote in Rust a widget I found in libgtkmusic that was written in Vala. That allowed me to refine my tutorial on subclassing Gtk widgets in Rust.

To add to the learning, I decided to do everything in Builder instead of my usual Emacs + Terminal combo, and I have to say it is awesome! It is good to start using tools you are not used to (ch, ch, changes!).

In the end, the first working version looked like this:

Basic UI for Minuit showing the piano widget and instrument selection

But I didn't stop there. After some weeks off doing other things, I resumed. I was wondering, since I have a soundfont player, whether I could turn this into a generic soundfont player. So I got to this:

Phase 1 of UI change for fluidlite

Then I iterated on the idea, learning about banks and presets for soundfonts, which come straight from MIDI, and made the UI look like this:

Phase 2 of the UI change for fluidlite

Non configurable instruments have a placeholder:

UI for generic instrument placeholder

There is no release yet, but it is getting close to the MVP. I am still missing an icon.

My focus forward will be:

  1. focus on user experience. Make this a versatile musical instrument, for which the technology is a means, not an end.
  2. focus on versatility by bringing more tones. I'd like to avoid free-form plugins and instead integrate plugins into the app, while leveraging the plugin architecture whenever possible.
  3. explore other ideas in musical creativity.

The repository is hosted on GNOME gitlab.

Flatpak extensions

Linux Audio

When I started packaging music applications in Flatpak, I was confronted with the issue of audio plugins: a lot of music software implements effects, software instruments and more as plugins, colloquially called VSTs. When packaging Ardour, it became clear that supporting these plugins was a necessity, as Ardour includes very little in terms of instruments and effects.

On Linux there are 5 different formats: while LADSPA, DSSI and LV2 are open, VST2 and VST3 are proprietary standards created by Steinberg; they are very popular in non-libre desktops and applications. Fortunately, somewhat open implementations are available, so they exist on Linux in open source form. These 5 audio plugin formats work in whichever application can host them. In general LV2 is preferred.

Now the problem is that "one doesn't simply drop a binary in a flatpak". I'm sure there are some tricks to install them, since a lot of plugins are standalone, but in general it's not sanctioned. So I came up with a proposal and an implementation to support and build Linux Audio plugins in Flatpak. I'll skip the details for now, as I'm working on a comprehensive guide, but the result is that several audio applications now support plugins in flatpak, and a good number of plugins are available on Flathub.

The music applications that support plugins in Flatpak are Muse3 (not to be confused with MuseScore), LMMS, Hydrogen, Ardour (this is not supported by the upstream project), Mixxx, and gsequencer. Audacity is still pending. There are also a few video editors (kdenlive, Flowblade, and Shotcut) that support LADSPA audio effects and can now use the packaged plugins.

Sadly there doesn't seem to be a way to find the plugins on the Flathub web site, nor in GNOME Software (as found in Fedora). So to find the available plugins, you have to use the command line:

$ flatpak search LinuxAudio


In the same vein, GIMP was lacking plugins. GMic has been the most requested plugin. So I took similar steps and submitted plugin support for GIMP as a Flatpak. This was much less complicated, as we don't have the problem of multiple apps. Then I packaged a few plugins, GMic being the most complex (we need to build Qt5). Now GIMP users have a few important third-party plugins available, including LensFun, Resynthesizer, Liquid Rescale and BIMP.

Thanks to all the flathub contributors and reviewers for the feedback, to the Ardour team for creating such an amazing application, and to Jehan for his patience as the GIMP flatpak maintainer.

This week in GNOME Builder #2

This week we fixed some specific topics which were planned for the previous cycle. If anyone wants to contribute to some of our “Builder wishlist”, go there: Builder/ThreePointThirtyfive

Last time I had forgotten to mention the great work of our translation team, which contributed various translations to Builder. Thank you!

New Features

For several releases now, Builder has allowed debugging applications with gdb. However, it was not possible to interact with gdb directly.

July 10, 2020

Implementing Gtk based Container-Widget: Part — 2

Working on a single row implementation

Some background

This write-up is in continuation of its previous part — setting up basic container functionality.

In the past couple of weeks, we moved on from just adding children to actually repositioning them (child widgets of the container, NewWidget) when enough space is not available for all widget to fit in the given width. Though the grid structure is yet to put in place, the widget could be seen taking shape already (look at below gif).

In the past couple of weeks, we moved on from just adding children to actually repositioning them (the child widgets of the container, NewWidget) when enough space is not available for all widgets to fit in the given width. Though the grid structure is yet to be put in place, the widget can already be seen taking shape (see the gif below).

Work so far

In the last part of this blog, we set up the basic container functionality, which involved overriding some virtual functions in order to display some widgets on the screen. In this part, we cover how to handle a single row of widgets, and how to reposition the widgets (determined by some properties/variables) when the container’s width is not enough to fit all widgets in one horizontal line.

Note: For now, we are trying to work with only one row of widgets.

We introduced two new properties for children of the NewWidget container, namely weight and position. Using these properties, we determine which widgets need to be repositioned and in what order the repositioning should happen.

In the above gif, you can see how the higher-weighted widgets are positioned below the lower-weighted widgets when we shrink the window’s width.

The repositioning behaviour can be modified by changing the weights assigned to the widgets, as shown below.

Weights: GtkComboBoxText = 0, GtkEntry = 1, GtkImage = 0
Weights: GtkComboBoxText = 0, GtkEntry = 1, GtkImage = 1

Implementation detail

In the previous blog, the basic methods were already introduced, and the good thing is that no new functions had to be introduced during these two weeks of work.

Over the past weeks our main focus was on implementing two functions, namely measure and size_allocate. The former is very crucial, and the latter is the heart of our widget. The entire allocation logic goes inside the size_allocate function, which is where we put our magical code to reposition the widgets whenever necessary.

The measure function

The measure happens in the size-requisition phase where a parent calls on child widgets asking for their preferred sizes.

  • The first point of interest is that the measure function performs size calculation for the following cases:
    1. preferred_height:
    this is pretty straightforward: we add up the preferred heights of all the widgets using gtk_widget_get_preferred_height.
    2. preferred_width:
    this again is similar to the above; the only difference is that here we use gtk_widget_get_preferred_width to get the widths of the widgets.
    3. preferred_height_for_width:
    in this case, a widget is supposed to return its height requirement for the given width value.
    4. preferred_width_for_height:
    similar to the third case, a widget is supposed to return its width requirement for the given height value.
  • All four cases have their related virtual function in the GtkWidget class which we already overrode in our basic implementation.
  • Of the four, two cases concern us here: case 2 and case 3.

Measure — Preferred width

  • It is implemented as follows:
    - Group widgets by their weights
    - Sum preferred sizes of widgets belonging to the same group
    - Take the maximum sum value from the groups

Measure — Preferred height for width

  • Its implementation goes as follows:
    - Group the widgets by their weights
    - Starting from the group with the lowest weight, fit as many groups as possible into one line's width (a row has multiple lines)
    - At the end of the above process, every group should be assigned a line to go into
    - For each line, bring widgets to their natural sizes using gtk_distribute_natural_allocation()
    - For each line, the line height is equal to the height of the tallest widget in the line
    - Lastly, the preferred height is the sum of the heights of all the lines

The allocation function

Now let’s talk about the actual positioning of child widgets in the container space. The logic inside the size_allocate function is our special ingredient for the NewWidget.

The allocation obeys the following rules:
  • If a row can’t fit its widgets in the container’s width, split the row into multiple lines
  • Widgets are grouped according to their weights
  • A line can have multiple groups
  • All widgets of a group should be in the same line at all times
  • The original order of children (when all placed in one row) should be maintained all the time.
  • Also, in a line, the relative order of the children should be maintained.

The allocation algorithm then goes as follows:
  • Sort the list of children based on their weights.
  • Get the preferred widths for the widgets, using gtk_widget_get_preferred_width().
  • Starting from the group with the lowest weight, fit as many groups as possible into one line's width (a row has multiple lines)
  • At the end of the above step, every group should be assigned a line to go into
  • For each line, bring widgets to their natural sizes using gtk_distribute_natural_allocation(), and then distribute the extra space among all widgets
  • Next, iterate over each line and sort the widgets of each line based on their position value.
  • Lastly, for each child do the following:
    - Get the preferred height for the width of the widget
    - Adjust the allocation for this child widget
    - Allocate space to the child widget

And this is how we achieved what is shown here :).

An Appointment Up the Hill

Hey everyone, it has been a while since I wrote my last post, but during this period I’ve been working on and tackling problems in implementing the authentication functionality. The user can now successfully enter his/her EteSync account in Evolution and see his/her data (address books, calendars and tasks) \o/.

What have I been doing during this period?

First, if you want to skip to the results and see the module in action, click here 😀

In my last post I showed screenshots of contacts appearing in Evolution, and explained that the .source file was created manually and that the credentials were hard-coded for retrieving a specific journal from a specific EteSync account.

After finishing this, I extended it so that I can also retrieve calendars and tasks in the same manner, which was quite easy as I already understood what should be done. Then I created an etesync-backend file, which generally handles the user’s collection account in Evolution, retrieving/creating/deleting journals, which are address-book or calendar .source files.

The next step was to let the user enter their credentials, so they aren’t hard-coded. At this stage I faced some implementation issues and asked for my mentors’ help. One of the problems was that I needed to create a new dialog that asks the user for their credentials and retrieves the data from EteSync, which I found difficult to implement at first. Other issues appeared while integrating, and I had to change some pieces.

So just to sum up what have I been doing during this period:

  1. Extended the reading functionality to calendars and tasks.
  2. Added an etesync-backend file which handles creating/deleting/retrieving journals.
  3. Added a lookup file for looking up your account before adding it.
  4. Added a credential prompter for taking in the user’s Encryption Password.
  5. Integrated the collection account authentication so that it isn’t hard-coded any more.
  6. Also did the same with the address-book and calendar back-ends.

Module in action

First you need to add your account in Evolution.

  • Open Evolution and then press the little down arrow next to “New” on the top-left.
  • Then Choose “Collection Account”
This is the collection account lookup window.
Click on “Advanced Options” to enter your local server.
After looking up your account, press Next to add it.
This window will appear right after you add your account, asking you to enter the Encryption Password
(username and password will be preloaded as entered before).
  • OK So at this point you should’ve entered your correct credentials, now evolution will load your journals so you can see your data.
  • Now I made some changes in my contacts journal “Default”, went back to Evolution and refreshed the address book, and it loaded the changes.

So as you see, there has been great progress, but more is still to be made so that EteSync and Evolution users can fully use the module without issues for reading and writing their data.

What Else?

  1. While the reading functionality is working, it may need some small fixes for some cases, and testing.
  2. Preparing the module to be easily used by users for testing it (some adjustments in the CMakeLists files).
  3. Adding the writing functionality for calendars, tasks and address-books.
  4. Adding the ability to create or delete journals from Evolution.
  5. Small tweaks to the UI of the authentication dialog.

These are the things that I can think of now, during the progress other things might appear along the way. So, stay tuned for more posts in the future, sharing with you my progress throughout the journey 😀