GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

August 16, 2018

libinput's "new" trackpoint acceleration method

This is mostly a request for testing, because I've received zero feedback on the patches that I merged a month ago and libinput 1.12 is due to be out. No comments so far on the RC1 and RC2 either, so... well, maybe this gets a bit broader attention so we can address some things before the release. One can hope.

Required reading for this article: Observations on trackpoint input data and X server pointer acceleration analysis - part 5.

As the blog posts linked above explain, trackpoint input data is difficult and largely arbitrary between different devices. The pointer acceleration method libinput previously used relied on a fixed reporting rate, an assumption that doesn't hold at low speeds, so the new acceleration method switches back to velocity-based acceleration, i.e. we convert the input deltas to a speed and then apply the acceleration curve to that. Strictly speaking it's not speed, it's pressure, but that doesn't really matter unless you're a stickler for technicalities.
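
In code terms, the conversion is roughly the following; this is only an illustration of the idea, not libinput's actual code, curve or units:

# Rough illustration only -- not libinput's actual code, curve or units.
def accelerate(dx, dy, dt_ms, accel_curve):
    speed = (dx * dx + dy * dy) ** 0.5 / dt_ms   # deltas converted to a speed
    factor = accel_curve(speed)                  # look up the acceleration curve
    return dx * factor, dy * factor              # scale the deltas by the factor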

Because basically every trackpoint has different random data ranges not linked to anything easily measurable, libinput's device quirks now support a magic multiplier to scale the trackpoint range into something resembling a sane range. This is basically what we did before with the systemd POINTINGSTICK_CONST_ACCEL property except that we're handling this in libinput now (which is where acceleration is handled, so it kinda makes sense to move it here). There is no good conversion from the previous trackpoint range property to the new multiplier because the range didn't really have any relation to the physical input users expected.
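
For reference, a local quirks override (e.g. in /etc/libinput/local-overrides.quirks) looks roughly like the snippet below; the match rules depend on your specific device and the multiplier value is something you have to find by testing, as described on the Trackpoint Configuration page mentioned below.

[My Trackpoint Override]
MatchUdevType=pointingstick
MatchName=*TPPS/2 IBM TrackPoint*
AttrTrackpointMultiplier=1.25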

So what does this mean for you? Test the libinput RCs or, better, libinput from master (because it's stable anyway), or from the Fedora COPR and check if the trackpoint works. If not, check the Trackpoint Configuration page and follow the instructions there.

August 15, 2018

Builder Session Restore

People have asked for more advanced session restore for quite some time, and now Builder can do it. Builder will now restore your previous session, and in particular, horizontal and vertical splits.

Like previously, you can disable session restore in preferences if that’s not your jam.

You can write plugins which hook into session save/restore by implementing IdeSessionAddin.

August 14, 2018

GSoC’18 - Final Report

As GSoC’18 ends, I’m writing this final report as a summary of the current state of the project.

Summary

My project can be broadly divided into the following subtasks. Details about my project and subtasks can be found here: 1, 2, 3.

Fetching metadata to Games

Previously, Games never made use of any metadata provided by thegamesdb. My first task was to enable Games to fetch this metadata. This is the core of my project, as all the other tasks depend on it. My changes for this have already been pushed to master.

Allow grilo to set values for non-registered keys

My next task was to add missing Grl KeyIDs to grilo in order to successfully fetch them into Games, but adding too many KeyIDs to core grilo was not suitable. Instead I wrote a new grilo API to register and set data for non-registered keys, which took a lot more time than expected. The first goal to be met in order to achieve that was allowing grilo to set values for non-registered keys. My patches for this have already been pushed to master.

Allow Lua plugins to self register keys

In order to fetch all of the metadata into Games, the only task remaining was allowing Lua sources to register keys by themselves. This allows anyone to use any key for Lua-based sources without having to worry about adding those keys as core system keys. This has already been pushed to master.

Segregating-view

Here I make the first use of the metadata I added to Games. Two new views, a developers view and a platforms view, were added to Games; they display games grouped by a particular developer or platform respectively. This was the most enjoyable task to work on, as it provided a much-needed UI upgrade to Games. You can try it out, as it has already been merged to master.

Description-view

This made use of all the remaining metadata that I was fetching into Games. The description view displays all of the game metadata, such as:

  • Description
  • Cover
  • Title
  • Rating
  • Developer
  • Publisher
  • Genre
  • Coop
  • Release Date
  • Players

This is almost complete; it can be built and used through my patches. Some minor fixes and a review are required to get this merged to master.

Bugs

I fixed a few bugs I encountered along the way of my project. All of these bug fixes were pushed to master.

Tasks left

Some of the tasks I had originally planned took a lot more time than expected. My last task was to add stats to Games that track and store your overall game statistics. I’ve already begun working on this and will get it merged after it has been thoroughly reviewed by my mentors.

Work Product

I’ve uploaded all my current patches to a drive folder for easy availability. Additionally, all my patches are also available on Gitlab.

A few words

I had a wonderful time contributing to GNOME since I started this February. The amazing community and even more amazing mentors helped me learn new things and guided me all along the way, and I would like to thank them for that. I will surely keep contributing to GNOME.

Special thanks to Adrien, theawless and exalm. Thank you for making my summers super exciting.

August 13, 2018

GTK+ 4 and Nautilus </GSoC>

Another summer here at GNOME HQ comes to an end. While certainly eventful, it unfortunately did not result in a production-ready Nautilus port to GTK+ 4 (unless you don’t intend to use the location entry or any other entry, but more on that later).

It works, and you can try it if you really (really) want to, like so:

flatpak install https://gitlab.gnome.org/GNOME/gnome-apps-nightly/raw/master/NautilusGtk4.flatpakref

You will notice that DnD works differently as compared to the current stable version:

  • the default action isn’t set by Nautilus to “move” when dragging a file in its parent directory, but rather all possible actions (“copy”, “move”, “link”, “ask”) – that is due to changes in the DnD API in GTK+ in an effort to consolidate all the different action types (actions, suggested actions, selected actions);
  • “link” cannot be accessed with ctrl-shift when using Wayland, “ask” does not work in X;
  • “ask” does not result in a context menu being popped up, likely due to a bug in GTK+ and not in Nautilus;
  • sidebar drops are rejected and cause the “New bookmark” row to remain visible, which is caused by another reported GTK+ bug.

Another big issue is actions and their (global) accelerators. Due to changes in how these are activated in GTK+, Nautilus can no longer do the trick where it prevents that from happening if the currently-focused widget is an editable. Not being able to do that means that pressing ctrl-a while, say, renaming a file will select the files in the view instead of the text. Pressing delete immediately after will result in you fainting, as you’ve just trashed all the files (as had happened to me several times, and thank goodness it’s not permanent deletion). Given the shortcuts reworking effort in GTK+ that is currently in progress, it might be a breeze to fix the issues in the near future.

Aside from all that, there are some other small-ish and annoying issues, like drag icons not being offset in Wayland, the operations button losing its reveal animation, and probably something else I’m forgetting at the moment. So, if you’re interested in turning your computer into a jet engine or benchmarking GitLab, feel free to take a look at https://gitlab.gnome.org/GNOME/nautilus/merge_requests/266.

GSoC 2018 Final Evaluation

As GSoC is coming to an end, I am required to put all my work together in order for it to be easily available and hopefully help fellow/potential contributors work on their own projects. 🙂

All of my work (during the GSoC period!) can be found at this link.

GSoC has been an amazing experience and opportunity. I have learned lots, made a lot of friends and found something I’d really love to keep on doing further. 🙂

Our initial plan can be found in its own issue right here, issue 224.

First off, a few goals have unfortunately been left out, as there has been a lot of going back and forth regarding how we want to do certain things, which ones to prioritize and how to best set an example for the subsequent work.

In the end, through this project we have tests both for the most critical and most used operations of Nautilus, and for the search engines we use. Further on, I’ll provide links for all of my merge requests and dwell a bit on their ins and outs while posting links to my commits:

Add dir-has-files test, the code can be found here: a fairly small and simple function I started to work on in order to get used to working with the GTest framework (although I had some experience from the file-utilities test, which was merged before the start of GSoC) and to lay the cornerstone for how we want files and folders to be managed.

Add file-operations-move test, the code available here: the first large operation whose tests we merged. With this we learned that we needed to create several different hierarchies and that we’d need synchronous alternatives to the operations we were about to test.

Copy, trash-or-delete file operations tests, whose code is in this folder: after the move operation, things must have gone smoother, right? Copy was more or less straightforward, yet trashing and deleting posed a few difficulties, as they involve a lot of interaction with the user (e.g. confirmation for deletion, confirmation for permanent deletion and the likes of these dialogs, which we wanted to avoid while testing). Once these were added, we realized (as the tests are run in parallel) that we wanted to prefix the file hierarchies in order to prevent multiple tests from deleting (or trying to access) the same file, which might have already been deleted by another test.

Test library, undo/redo of operations and search engines, which can be found here. This is still a work in progress at the moment of writing, although the final pushes are mostly about polishing up the code and its documentation and making sure the commits are consistent and make sense in the work tree. Undo and redo operations took quite a bit of time, as they have several callbacks in the call chain and proved cumbersome to debug and work out. With a little help, we decided it would be best not to go with synchronous alternatives, but rather to isolate the calls in their own main loop, which signals when it reaches the final callback. Although undo and redo have mostly been merged, some follow-up work will be needed, as our test environment currently does not allow trashing (which means undoing copying a file, for instance, would work, but not its redo). The search tests were not as difficult, as we already had a small test for the main engine which we only needed to update and make sure runs fine before proceeding with the other engines. The one which proved most difficult was definitely the tracker engine, as we need to index a virtual file in Tracker before running the engine, in order for it to be found. After all this work, I decided to reduce the lines of code by a good chunk using a test library, which allows undoing/redoing operations in a blocking fashion on the main thread and creating and deleting file hierarchies of different and varying sizes for testing purposes (a library which will hopefully make future contributions to Nautilus a bit easier and maybe set an example for other GNOME apps).
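
The main-loop trick mentioned above boils down to the pattern sketched below. This is not the actual Nautilus test code (which is written in C); it is just the shape of the idea, illustrated with PyGObject:

from gi.repository import GLib

def run_blocking(start_async):
    # Run an asynchronous operation to completion by spinning a private main
    # loop and quitting it from the operation's final callback.
    loop = GLib.MainLoop()
    result = {}

    def on_done(value):
        result["value"] = value
        loop.quit()

    start_async(on_done)   # kick off the async call, passing our callback
    loop.run()             # blocks until on_done() calls loop.quit()
    return result["value"]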

 

Bottom line, there’s still a bit of work ahead of us, but I’m confident I’ll manage to do some of it myself, while also trying to make it more approachable for other (or newer) Nautilus contributors.

August 12, 2018

[GSoC’18] Pitivi’s UI Polishing – Final Report

As part of Google Summer of Code 2018, I worked on the project Pitivi: UI Polishing. This is my final report to showcase the work that I have done during the program.

Work Done

I worked on issue #1302 to integrate Pitivi’s welcome dialog box into its main window and display projects in a more informative, discoverable, and user-friendly layout. This is how Pitivi greeted users before I started working on the issue:

[Screenshot: Pitivi’s welcome dialog before the changes]

My first task was to remove the welcome dialog from Pitivi and integrate a simple greeter perspective that displays the names of recent projects in a listbox, with all the relevant buttons placed in the headerbar. This was a big task and took nearly 5 weeks to complete. The main challenge was understanding the large main window codebase and refactoring it. The refactoring led to the following architectural changes:

[Diagram: Pitivi’s new perspective-based architecture]

In the new architecture, I have developed a perspective-based system which allows easily adding more perspectives to Pitivi. Every class has a single responsibility in the new architecture, as opposed to everything being managed by the main window in the previous architecture.

!33 is the merge request for this change and 330d62a4 is the committed code. This is how the greeter perspective looked after this change:

[Screenshot: the greeter perspective after this change]

I also did some code clean up to conclude my greeter perspective’s integration:

  • Remove unused “ignore_saved_changes” param from newBlankProject() function in project class – b2f19ea3
  • Make newly added shortcuts available when users upgrade Pitivi – 8b3a691a
  • Move dialog for locating missing asset to dialogs package – c08007a8
  • Remove directly called callback methods – 7a8d23ba
  • Remove project title functionality because we only care about project file name – 4ea9d89f

My next task was to display meta info about a project, such as its directory and last-updated timestamp, in the greeter perspective. !43 is the merge request for this change and ff672460 is the committed code. This is how the greeter perspective looked after this change:

[Screenshot: the greeter perspective showing project meta info]

My next task was to show a welcome message in the greeter perspective when there are no recent projects. !44 is the merge request for this change and d9c8f6c8 is the committed code. This is how the empty (no recent projects) greeter screen looks:

[Screenshot: the empty greeter screen]

Finally, I added the following to greeter perspective:

  1. “Search” feature for easy browsing of recent projects, and
  2. “Selection” feature for removing project(s) from recent projects list.

!46 is the merge request for search feature and c72ebce7 is the committed code. !48 is the merge request for selection feature and 809d7599 is the committed code.

Search feature
Selection feature

After resolving issue #1302, I worked on issue #1462 to integrate project thumbnails in the greeter perspective. The main idea behind these thumbnails is to give the user a hint of what a certain project is about. The main challenge of this task was to come up with a simple and robust algorithm for thumbnail generation (link). More details about our project thumbnail generation approach can be found here.

!51 is the merge request for this change and a6511c1e is the committed code.

I also made the following optimizations to project thumbnails (!54):

  • Use assets thumbnails if no thumbnails are present in XDG cache – cb47d926
  • While opening greeter perspective, defer loading of project thumbnails until the main thread is idle – 37c973e1

So this is how the greeter perspective finally looks after all these changes:

[Screenshot: the final greeter perspective]

After wrapping up the integration of project thumbnails, I added the following features to the greeter perspective:

  • Right click on a recent project to select it and start selection mode – !67 is the merge request and 6db06472 is the committed code
  • Drag & drop a project file from Nautilus onto greeter to open it – !68 is the merge request and 817d885a is the committed  code

Next, I worked on the issue #1816 to allow video viewer resizing.

The first task was to integrate a marker at the bottom-left corner of the viewer container to allow for easy resizing of the viewer. !60 is the merge request for this change and 25609f3a is the committed code.

Viewer resizing using corner marker

Work in Progress…

Two more tasks are left under issue #1816:

  1. Displaying resize status (in %) over slightly dimmed video, and
  2. Snapping of the viewer to 100% and 50% if the current resize status is between [100%, 100% + Δ] and [50%, 50% + Δ] respectively.

I have submitted a merge request !70 for review for both these tasks.

Displaying resize status over dimmed video

Final Words

I would like to thank my mentor Alexandru Băluț (aleb) for his help and guidance throughout this project. I would also like to thank all the other members of the GNOME community who helped me along the way. This turned out to be an amazing summer for me, with lots of learning and fun. I will continue to contribute to Pitivi, as I feel this is just the beginning of many exciting things yet to be done in Pitivi to make it the best video editing application.

Until next time 🙂

Implementing a distributed compilation cluster

Slow compilation times are a perennial problem. There have been many attempts at caching and distributing compilation, such as distcc and Icecream. The main bottleneck in both of these is that some work must be done on the "user's desktop" machine, which is then transferred over the network. Depending on the implementation this may include things such as fully preprocessing the source file and then sending the result over the net (so it can be compiled on the worker machine without needing any system headers).

This means that the user machine can easily become the bottleneck. In order to remove this slowdown all the work would need to be done on worker machines. Thus the architecture we need would be something like this:


In this configuration the entire source tree is on a shared network drive (such as NFS). It is mounted in the same path on all build workers as well as the user's desktop machine. All workers and the desktop machine also must have an identical setup, that is, same compilers and installed dependencies. This is fairly easy to achieve with Docker or any similar container technology.

The main change needed to distribute the work is to create a compiler wrapper script, much like distcc or icecc, that sends the compilation request to the work distributor. It consists only of a command line to execute and the path to run it in. The distributor looks up the machine with the smallest load, sends the command, waits for the result and then returns the result to the developer machine.

Note that the input or output files do not need to be transferred between the developer machine and the workers. It is taken care of automatically by NFS. This includes any changes made by the user on their local checkout which are not in revision control. The code that implements all of this (in an extremely simple, quick, dirty and unreliable way) can be found in this Github repo. The implementation is under 300 lines of Python.
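
To make the wrapper's job concrete, a toy version of it could look like the sketch below; the wire format and names here are invented for illustration and the actual implementation in the repo differs:

#!/usr/bin/env python3
# Toy compiler wrapper: forward the command line and working directory to the
# work distributor, then relay the captured output and exit code back to the
# build system. The JSON-over-TCP protocol is made up for this example.
import json, os, socket, sys

def main():
    server, command = sys.argv[1], sys.argv[2:]   # e.g. buildhost:7777 g++ -c foo.cpp
    host, port = server.rsplit(":", 1)
    request = {"cwd": os.getcwd(), "command": command}
    with socket.create_connection((host, int(port))) as conn:
        conn.sendall(json.dumps(request).encode() + b"\n")
        reply = json.loads(conn.makefile().readline())
    sys.stdout.write(reply.get("stdout", ""))
    sys.stderr.write(reply.get("stderr", ""))
    sys.exit(reply.get("returncode", 1))

if __name__ == "__main__":
    main()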

Experimental results

Since I don't have a data center to spare I tested this on a single 8 core i7 computer. The "native OS" ran the NFS server and work distributor. The workers were two cloned Virtualbox images each having 2 cores. For testing I compiled LLVM, which is a fairly big C++ code base.

Using the wrapper is straightforward and consists of setting up the original build directory with this:

FORCE_INLINE=1 CXX='/path/to/wrapper workserver_address g++' cmake <options>

Force inline is needed so that configuration tests are run on the local machine. They write to /tmp, which is not shared, and the executables might be run on a different machine than the one they were compiled on, leading to failures. This could also be solved by having a shared temporary folder, but that would increase the complexity of this simple experiment.

Compiling the source just over NFS in a single machine using 2 cores took about an hour. Compiling it with two workers took about 47 minutes. This is not particularly close to the optimal time of 30 minutes so there is a fair bit of overhead in the implementation. Most of this is probably due to NFS and the fact that absolutely everything ran on the same physical machine. NFS also had coherency problems. Sometimes some process invocations could not see files created by their dependency tasks. The most common case was linker invocations, which were missing one or more object files. Restarting the build always made it pass. I tried to add sync commands as necessary but could not make it 100% reliable.

Miscellaneous things of note

In this test only compilation was parallelised. However this same approach works with every executable that is standalone, that is, it does not need to talk to any other ongoing process via IPC. Every build system that supports setting the compiler manually can be used with this scheme. It also works for parallelising tests for build systems that support invoking tests with an arbitrary runner. For example, in Meson you could do this:

meson test --wrapper='/path/to/wrapper workserver_address'

The system also works (in theory) identically on other operating systems such as macOS and Windows. Setting up the environment is even easier because most projects do not use "system dependencies" on those platforms, only the compiler. Thus on Windows you could mount a smb drive with the code on, say, D:\code on all machines and, assuming they have the same version of Visual Studio, it should just work (not actually tested).

Adding caching support is fairly easy. All machines need to have a common directory mounted, point CCACHE_DIR to that and set the wrapper command on the desktop machine to:

CXX='/path/to/wrapper workserver_address ccache g++'

August 11, 2018

Five-or-More Modernisation: It's a Wrap

As probably most of you already know, or recently found out, at the beginning of this week the GSoC coding period officially ended, and it is time for us, GSoC students, to submit our final evaluations and the results we achieved thus far. This blog post, as you can probably tell from the title, will be a summary of all of the work I put into modernising Five or More throughout the summer months.

My main task was rewriting Five or More in Vala, since this simple and fun game had not found its way onto the list of those included in the Games Modernisation Initiative. This fun strategy game consists of aligning, as often as possible, five or more objects of the same shape and colour to make them disappear and score points.

Besides the Vala rewrite, there were also some other tasks included, such as migrating to Meson and dropping autotools, as well as keeping the view and logic separated and updating the UI to make this game more relatable for the public and more fresh-looking. However, after thoroughly discussing the details with my mentor, Robert Roth (IRC: evfool), more emphasis was placed upon rewriting the code to Vala, since the GSoC program is specifically designed for software development. However, slight UI modifications were integrated as to match the visual layout guidelines.

Some of the tasks, namely porting to gettext, porting to Meson and dropping autotools happened earlier on, during the pre-GSoC time frame, in the attempt to familiarise myself with the project and the tasks that would come with it.

Afterward, I started with the easier tasks and advanced towards the more complex ones. At first, I ported the application window and the preferences menu and added callbacks for each of the preferences buttons. I then continued with porting the preview widget displaying the next objects to be rendered on the game board.

Next, it was time to channel my attention towards porting the game board, handling board colour changes and resizing the board alongside the window resize, by using the grid frame functionality inside the libgnome-games-support library.

The following target was implementing the actual gameplay logic, which consisted of a pathfinding algorithm based on A*, erasing from the board all objects of the same shape and colour aligned in a row, column or diagonal whenever there were five or more of them, and adding to the score whenever that happened. I also made object movement possible with both mouse clicks and keyboard keys, for more ease of use.

Finishing touches included adding the high scores tables via the libgnome-games-support library, displaying an option to change the theme of the game, adding a grid to make it easier to make out the actual cells in which the differently shaped and coloured objects reside, as well as updating some information contained in the help pages.

Some features, however, could not be done during the GSoC period. These included handling raster game themes, making significant UI updates, as well as some other extra features I wanted to add which were not part of the original project description, such as gamepad support or sound effects. The first feature mentioned in this list, handling raster images, was skipped at my mentor's suggestion, as the existing raster themes were low-resolution and did not scale well to large and HiDPI displays.

For easier reading, I decided to also include this list of the tasks successfully done during GSoC:

  • Ported from autotools to Meson
  • Ported application window and preferences menu
  • Added callbacks for all buttons
  • Ported widget of next objects to be rendered on the game board
  • Handled the resize and colour change of the game board
  • Implemented the gameplay logic (pathfinding, matching objects and calculating scores)
  • Made gameplay accessible for both mouse clicks and keyboard controls
  • Added high scores via the libgnome-games-support library
  • Handled vector image theme changing
  • Made slight modifications to the UI to serve as a template and changed the spacing as recommended in the visual layout guidelines
  • Updated documentation

All code is available by accessing this merge request link.

The end of GSoC

After three months of hard work and a lot of coding, the Google Summer of Code is over. I learned a lot and had a lot of fun. GSoC was an amazing experience and I encourage everybody to participate in future editions. At this point I’ve been a contributor to GNOME for nearly a year, and I plan on sticking around for a long time. I really hope that other GSoC students also found it as enjoyable, and keep contributing to GNOME or other Free Software projects.

This will be my final GSoC post, so I will summarize what I did for Fractal in the last couple of months, and what I didn’t finish before the deadline. I had already completed some of the tasks I proposed for GSoC before the Coding Phase. It was really nice that we had a meeting with Daniel (my mentor) and Eisha (my co-GSoC student) every week, and some other core members working on Fractal sometimes joined as well. During these meetings I could get feedback from Daniel and discuss tasks for the next week. We had a really good workflow. I guess that was also a reason why I started working on some important tasks and not only on things I initially proposed.

Main Tasks

These are the main tasks I completed during GSoC. They are all already merged and included in the current release:

These are the tasks I could not finish in time:

  • Spell Check: Add spellcheck to Fractal, based on gspell. I implemented this (though not with the UI we’ll eventually want) and it was merged, but since then we made some big changes to the message input widget and therefore it doesn’t work anymore at the moment. We also decided that other tasks were more important. I will work on this in the next couple of months.
  • refactor room history: A big change to how the message history is displayed, loaded, and stored. This will lay the groundwork for a lot of new features we want, and the backend refactor we need to split the app. This is one of the biggest tasks I worked on during the summer, and I will try to finish it as soon as possible. Since we started restructuring the code base I had a lot of stuff to figure out and therefore it was delayed.
  • Use libhandy to have adaptive columns: We want Fractal to work on big screens as well as on small screens, and therefore we use a widget from libhandy to center columns and limit the width of, mostly, GtkListBoxes.

During the summer I also made some other contributions which were tangentially related to Fractal and my GSoC tasks, e.g. I fixed the GTK emoji picker layout and styling (this has been merged now, so look forward to a much nicer emoji picker in GTK 3.24).

My Code

Most of my work is upstream and already integrated into Fractal. This is a list of all commits merged into master:

* b9d7a5a - message_menu: remove appOP dependecy from message_menu 
* 6897ad0 - roomsettings: update .po file and add i18n 
* f96eeef - roomsettings: fix member list disappearing when closing 
* c772d9f - roomsettings: hide confirm button when the topic/name didn"t change 
* cf7dffd - roomsettings: clean up code from old memberslist 
* 44b2d28 - roomsettings: reload room settings when members are avaible 
* 089b001 - headerbar: remove avatar form the headerbar 
* 20c9ebd - roomsettings: move room settings panel to a custom widget 
* 692045f - roomsettings: make room description label a dim-label 
* 78c5be1 - roomsettings: set max with for centerd column 
* 0c99e0e - roomheaderbar: move room menu to new settings button 
* a2de6ac - roomsettings: hide not implemented widgets 
* 5097931 - roomsettings: request avatar when entering the room settings 
* 0244a23 - roomsettings: hide settings not needed for a room type 
* dc83201 - roomsettings: add invite button 
* 544fc52 - roomsettings: show members in the room settings 
* 4e6b802 - room: add room settings panel 
* 8993f5f - avatar: refactor avatar loading and caching 
* b81f1ec - accountsettings: add closures to address button callback 
* 200440d - accountsettings: remove seperator in user settings menu 
* e88663e - accountsettings: make animation for destruction revealer slower 
* 0068fde - accountsettings: fix valignment of label name/email/phone 
* 140da51 - accountsettings: scroll to advanced/destruction section when open 
* f1e86bd - accountsettings: add placeholder text 
* c4d608b - accountsettings: show spinner will updating the avatar 
* 41a07a8 - accountsettings: generate client secret for each request 
* 7c459d3 - accountsettings: use stored identity server 
* 708c5c0 - accountsettings: add button to submit new display name 
* e019e48 - accountsettings: move account settings to the main window 
* 6ee5c99 - accountsettings: show confirm dialog when deleting account 
* 9a2dada - accountsettings: remove loading spinner once password is set 
* 835ab15 - accountsettings: password change and account destruction 
* ba74610 - accountsettings: managment for phone and email threePIDs 
* 345cb8e - accountsettings: add buttons to address entries 
* 04ed27f - accountsettings: load threePIDs (email and phone addresses) 
* a214504 - accountsettings: password validation 
* 8711c01 - accountsettings: add password change dialog 
* 1d31eab - accountsettings: add api for changing account avatar 
* de83098 - accountsettings: add api for changing account name 
* 96bcb94 - accountsettings: add UI for account settings 
* 1fec2e5 - login: use global constant for homeserver and identity server 
* ee1ad9c - login: store identity server 
* 206035c - gspell: add basic spell check 
* f92387c - mention: highlight own username in mentions using pango attributes 
* 3d3a873 - fix: add joining members to the there own room instate to the active room 
* d6145e8 - message-history: set space after last message to 18px 
* 972d44a - mention: blue highlight for messages with mentions and cleanup css file 
* 0571245 - autoscroll: add ease-out when scrolling to last message 
* b35dec6 - autoscroll: add button to scroll down to last message 
* 789048f - autoscroll: move to new message when the user send a message 
* 55dcaf1 - autoscroll: no delay before autoscroll 
* 77051f6 - center inapp notification 
* 5c1cd22 - fix autoscroll, disable autoscroll when not at the end of message history, fix #137 
* f59c3d2 - autocomplete: fix change usernames font color to white when selection ends at the same position as the username 
* f6a3088 - move autocomplete popover to a seperate file 
* 0f2d8a8 - messages-history: add spacing after last message 
* b548fc9 - messages-history: remove blue selection and add more space inside each row 
* dc0f1e1 - add Julian Sparber to the author list 
* 20beb87 -add spacing bethween avatar and msg body 
* b721c96 - add padding to the message history 
* 533a736 - implement redesin of the autocomplete popover, fix #146 
* 9a9f215 - remove " (IRC)" from the end of suggested username for mentions, fix #126 
* 101c7cd - use only @ with tab-compleation 
* 798c6d3 - match also uid for mentions 
* 3caf5a0 - limit user name suggests to 5 and allow usage of @ for mention 
* b2c5a17 - fix spacing in user menu popover and in room settings popover 
* 8fa9b39 - reoder options in add room menu, add separator and change some lables 
* 5f15909 - fix title of dialog for joining a room by ID 
* b475b98 - center all dialog title on dialog width and refactor glade file 
* d9059f0 - fix the text of some lables 
* e5827c5 - spawn popover from room name and remove room-menu-button 
* 7b72f59 - add image to no room view and add new style #132 
* e92b20a - fix alignment of no room selected message by splitting the text to two label 
* 6f2fa3b - fix alignment for the text when no room is selected 
* fcb8b41 - set focus for each stack view, fix #118 
* c2a2816 - increase avatar size in the sidebar to 24px 
* d7dc175 - make user menu button avatar 24px 
* f94c558 - make room avatar in header 24px if no description 
* 7b4fe55 - show user info in popover 
* 9a0d36b - split user menu in two menus and remove the title in the left headerbar 
* 3c80382 - add spinner to initial sync notification 
* 1db7847 - [refactor] use headerbars in the titlebar instate of boxes 
* 2538993 - adjust spacing on first message in group 
* 6db14b6 - set timestamp fontsize to small 
* 3a92de6 - fix spacing around messages 
* f355fdc - add spacing to load more button and center it 

Some of my work went into libraries so it can be used by other applications, and one big merge request is still work in progress. I will continue to work on these git repositories, therefore I added the last commit for each repo done during GSoC:

The Gspell rust bindings:
https://gitlab.gnome.org/jsparber/gspell-rs
(last commit: 1501c997e2450378ea7e8291c94fa8189bb360df)
https://gitlab.gnome.org/jsparber/gspell-sys-rs
(last commit: 158e8ebbe5aeafc96663e3e76fc9a25715198264)

The libhandy rust bindings:
https://gitlab.gnome.org/jsparber/libhandy-rs
(last commit: f174e76882896c32959d0d16fe11eaadfce1c674)
https://gitlab.gnome.org/jsparber/libhandy-sys-rs
(last commit: 2caa9f6e0b68d391a6cdb45123f85c73b9852553)

I also created a libhandy test branch in Fractal, but I never got around to merging it because of a bug in libhandy (though it is fixed now, so it could be merged soon):
https://gitlab.gnome.org/jsparber/fractal/tree/handy
(last commit: b28da9a7443c34989dc8703402e6bc154a962f57)

The code to generate an avatar based on usernames:
https://gitlab.gnome.org/jsparber/letter-avatar
(last commit: 083e65e6ae8f63dc4071d3fac47adaa81e99d607)

There’s already a work in progress MR for the room history refactor:
https://gitlab.gnome.org/World/fractal/merge_requests/184
(last commit: 38abd3ceb3eb6d0ef1e70f66c7b0139ad65413b4)

August 10, 2018

The birth of a new runtime

Runtimes are a core part of the flatpak design. They are a way to make bundling feasible, while still fully isolating from the host system. Application authors can bundle the libraries specific to the application, but don’t have to care about the low-level dependencies that are uninteresting (yet important) for the application.

Many people think of runtimes primarily as a way to avoid duplication (and thus bloat). However, they play two other important roles. First of all they allow an independent stream of updates for core libraries, so even dead apps get fixes. And secondly, they allow the work of the bundling to be shared between all application authors.

There are some runtimes based on pre-existing distribution packages, such as Fedora and Debian. These are very useful if you want to produce flatpaks of the packages from these distributions. However, most of the “native” Flatpaks these days are based on the Freedesktop 1.6 runtime or one of its derivatives (like the Gnome and KDE runtimes).

Unfortunately this runtime is starting to show its age.

The freedesktop runtime is built in two steps. The first is based on Yocto, which is a cross-compilation system maintained by the Linux Foundation. An image is created from this build, which is then further extended using flatpak-builder. This was a great way to get something going initially. However, Yocto focuses mainly on cross compilation and embedded systems, which isn't a great fit, and the weird two-layer split and the complex Yocto build files led to very few people being able to build or do any work on the runtime. It also didn't help that the build system was a bunch of crappy scripts that needed a lot of handholding by me.

Fortunately this is now getting much better, because today the new Freedesktop runtime, version 18.08, was released!

This runtime has the same name, and its content is very similar, but it is really a complete re-implementation. It is based on a new build system called BuildStream, which is much nicer and a great fit for flatpak. So, no more Yocto, no more BitBake, no more multi-layer builds!

Additionally, it has an entire group of people working on it, including support from Codethink. It's already using GitLab, with automatic builds, CI, etc. There is also a new release model (year.month) with a well-defined support time. Also, all the packages are much newer!

Gnome is also looking at using this as the basis for its releases, its CI system and eventually the Gnome runtime.

The old freedesktop runtime is dead, long live the freedesktop runtime!

GSoC 2018: Overview

Introduction

Throughout the summer I was working on librsvg, a GNOME library for rendering SVG files to Cairo surfaces. This post is an overview of the work I did with relevant links.

My Results

For the project I was to port the SVG filter infrastructure of librsvg from C to Rust, adding all missing filter tests from the SVG test suite along the way. I was also expected to implement abstractions to make the filter implementation more convenient, including Rust iterators over the surface pixels.

Here’s a list of all merge requests accepted into librsvg as part of my GSoC project:

Here’s a convenient link to see all of these merge requests in GitLab: https://gitlab.gnome.org/GNOME/librsvg/merge_requests?scope=all&utf8=%E2%9C%93&state=all&author_username=YaLTeR&label_name[]=GSoC%202018

All of this code was accepted into the mainline and will appear in the next stable release of librsvg.

I also wrote the following blog posts detailing some interesting things I worked on as part of the GSoC project:

Further Work

There are a couple of fixes which still need to be done for filters to be feature-complete:

  • Fixing filters operating on off-screen nodes. Currently all intermediate surfaces are limited to the original SVG view area so anything off-screen is inaccessible to filters even when it should be. This is blocked on some considerable refactoring in the main librsvg node drawing code which is currently underway.
  • Implementing the filterRes property. This property allows setting the pixel resolution for filter operations and is one of the ways of achieving more resolution-independent rendering results. While it can be implemented with the current code as is, it will be much more convenient to account for it while refactoring the code to fix the previous issue.
  • Implementing the enable-background property. The BackgroundImage filter input should adhere to this property when picking which nodes to include in the background image, whereas it currently doesn’t.

GSoC 2018: Parallelizing Filters with Rayon

Introduction

I’m working on SVG filter effects in librsvg, a GNOME library for rendering SVG files to Cairo surfaces. After finishing porting all filters from C to Rust and adding tests, I started investigating the filter performance. With the codebase converted to Rust, I am able to confidently apply important optimizations such as parallelization. In this post I’ll show how I parallelized two computation-intensive filter primitives.

Rayon

Rayon is a Rust crate for introducing parallelism into existing code. It utilizes Rust’s type system to guarantee memory safety and data-race free execution. Rayon mimics the standard Rust iterators, so it’s possible to convert the existing iterator-using code with next to no changes. Compare the following single-threaded and parallel code:

// Single-threaded.
fn sum_of_squares(input: &[i32]) -> i32 {
    input.iter()
         .map(|i| i * i)
         .sum()
}

// Parallelized with rayon.
use rayon::prelude::*;
fn sum_of_squares(input: &[i32]) -> i32 {
    input.par_iter()
         .map(|i| i * i)
         .sum()
}

By merely using .par_iter() instead of .iter(), the computation becomes parallelized.

Parallelizing Lighting Filters

Going forward with analyzing and improving the performance of the infamous mobile phone case, the two biggest time sinks were the lighting and the Gaussian blur filter primitives. It can be easily seen on the callgrind graph from KCachegrind:

KCachegrind graph for filter rendering

Since I was working on optimizing the lighting filters, I decided to try out parallelization there first.

The lighting filter primitives in SVG (feDiffuseLighting and feSpecularLighting) can be used to cast light onto the canvas using a render of an existing SVG node as a bump map. The computation is quite involved, but it boils down to performing a number of arithmetic operations for each pixel of the input surface independently of the others—a perfect target for parallelization.

This is what the code initially looked like:

let mut output_surface = ImageSurface::create(
    cairo::Format::ARgb32,
    input_surface.width(),
    input_surface.height(),
)?;

let output_stride = output_surface.get_stride() as usize;
let mut output_data = output_surface.get_data().unwrap();

let mut compute_output_pixel = |x, y, normal: Normal| {
    let output_pixel = /* expensive computations */;

    output_data.set_pixel(output_stride, output_pixel, x, y);
};

// Compute the edge pixels
// <...>

// Compute the interior pixels
for y in bounds.y0 as u32 + 1..bounds.y1 as u32 - 1 {
    for x in bounds.x0 as u32 + 1..bounds.x1 as u32 - 1 {
        compute_output_pixel(
            x,
            y,
            interior_normal(&input_surface, bounds, x, y),
        );
    }
}

The edge pixel computation is separated out for optimization reasons and it’s not important. We want to focus on the main loop over the interior pixels: it takes up the most time.

What we’d like to do is to take the outer loop over the image rows and run it in parallel on a thread pool. Since each row (well, each pixel in this case) is computed independently of the others, we should be able to do it without much hassle. However, we cannot do it right away: the compute_output_pixel closure mutably borrows output_data, so sharing it over multiple threads would mean multiple threads get concurrent mutable access to all image pixels, which could result in a data race and so won’t pass the borrow checker of the Rust compiler.

Instead, we can split the output_data slice into row-sized non-overlapping chunks and feed each thread only those chunks that it needs to process. This way no thread can access the data of another thread.

Let’s change the closure to accept the target slice (as opposed to borrowing it from the enclosing scope). Since the slices will start from the beginning of each row rather than the beginning of the whole image, we’ll also add an additional base_y argument to correct the offsets.

let compute_output_pixel =
    |mut output_slice: &mut [u8],
     base_y,
     x,
     y,
     normal: Normal| {
        let output_pixel = /* expensive computations */;

        output_slice.set_pixel(
            output_stride,
            output_pixel,
            x,
            y - base_y,
        );
    };

// Compute the interior pixels
for y in bounds.y0 as u32 + 1..bounds.y1 as u32 - 1 {
    for x in bounds.x0 as u32 + 1..bounds.x1 as u32 - 1 {
        compute_output_pixel(
            output_data,
            0,
            x,
            y,
            interior_normal(&input_surface, bounds, x, y),
        );
    }
}

Now we can convert the outer loop to operate through iterators using the chunks_mut() method of a slice which does exactly what we want: returns the slice in evenly sized non-overlapping mutable chunks.

let compute_output_pixel =
    |mut output_slice: &mut [u8],
     base_y,
     x,
     y,
     normal: Normal| {
        let output_pixel = /* expensive computations */;

        output_slice.set_pixel(
            output_stride,
            output_pixel,
            x,
            y - base_y,
        );
    };

// Compute the interior pixels
let first_row = bounds.y0 as u32 + 1;
let one_past_last_row = bounds.y1 as u32 - 1;
let first_pixel = (first_row as usize) * output_stride;
let one_past_last_pixel =
    (one_past_last_row as usize) * output_stride;

output_data[first_pixel..one_past_last_pixel]
    .chunks_mut(output_stride)
    .zip(first_row..one_past_last_row)
    .for_each(|(slice, y)| {
        for x in bounds.x0 as u32 + 1..bounds.x1 as u32 - 1 {
            compute_output_pixel(
                slice,
                y,
                x,
                y,
                interior_normal(
                    &input_surface,
                    bounds,
                    x,
                    y,
                ),
            );
        }
    });

And finally, parallelize by simply changing chunks_mut() to par_chunks_mut():

use rayon::prelude::*;

output_data[first_pixel..one_past_last_pixel]
    .par_chunks_mut(output_stride)
    .zip(first_row..one_past_last_row)
    .for_each(|(slice, y)| {
        for x in bounds.x0 as u32 + 1..bounds.x1 as u32 - 1 {
            compute_output_pixel(
                slice,
                y,
                x,
                y,
                interior_normal(
                    &input_surface,
                    bounds,
                    x,
                    y,
                ),
            );
        }
    });

Let’s see if the parallelization worked! Here I’m using time to measure how long it takes to render the mobile phone SVG.

Before parallelization:

└─ time ./rsvg-convert -o temp.png mobile_phone_01.svg
6.95user 0.66system 0:07.62elapsed 99%CPU (0avgtext+0avgdata 270904maxresident)k
0inputs+2432outputs (0major+714373minor)pagefaults 0swaps

After parallelization:

└─ time ./rsvg-convert -o temp.png mobile_phone_01.svg
7.47user 0.63system 0:06.04elapsed 134%CPU (0avgtext+0avgdata 271328maxresident)k
0inputs+2432outputs (0major+714460minor)pagefaults 0swaps

Note that even though the user time went up, the elapsed time went down by 1.5 seconds, and the CPU utilization increased past 100%. Success!

Parallelizing Gaussian Blur

Next, I set out to parallelize Gaussian blur, the biggest timesink for the phone and arguably one of the most used SVG filters altogether.

The SVG specification hints that for all reasonable values of the standard deviation parameter the blur can be implemented as three box blurs (taking the average value of the pixels) instead of the much more costly Gaussian kernel convolution. Pretty much every SVG rendering agent implements it this way since it’s much faster and librsvg is no exception. This is why you’ll see functions called box_blur and not gaussian_blur.

Both box blur and Gaussian blur are separable convolutions, which means it’s possible to implement them as two passes, one of which is a loop blurring each row of the image and another is a loop blurring each column of the image independently of the others. For box blur specifically it allows for a much more optimized convolution implementation.
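
The usual trick behind that faster box blur is a running sum: as the window slides along a row or column, add the sample entering the window and subtract the one leaving it, so the cost per output sample is constant regardless of the blur radius. A language-agnostic sketch of the idea (in Python, not librsvg's actual Rust code):

def box_blur_row(row, radius):
    # Running-sum box blur of a single row of samples; out-of-range indices
    # are clamped to the edges to keep the sketch short.
    n = len(row)
    clamp = lambda i: min(max(i, 0), n - 1)
    window = 2 * radius + 1
    acc = sum(row[clamp(i)] for i in range(-radius, radius + 1))
    out = []
    for i in range(n):
        out.append(acc / window)
        # Slide the window one sample to the right.
        acc += row[clamp(i + radius + 1)] - row[clamp(i - radius)]
    return out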

In librsvg, the box blur function contains an outer loop over the rows or the columns of the input image, depending on the vertical argument and an inner loop over the columns or the rows, respectively. It uses i and j for the outer and inner loop indices and has some helper functions to convert those to the actual coordinates, depending on the direction.

// Helper functions for getting and setting the pixels.
let pixel = |i, j| {
    let (x, y) = if vertical { (i, j) } else { (j, i) };

    input_surface.get_pixel_or_transparent(bounds, x, y)
};

let mut set_pixel = |i, j, pixel| {
    let (x, y) = if vertical { (i, j) } else { (j, i) };

    output_data.set_pixel(output_stride, pixel, x, y);
};

// Main loop
for i in other_axis_min..other_axis_max {
    // Processing the first pixel
    // <...>

    // Inner loop
    for j in main_axis_min + 1..main_axis_max {
        // <...>
    }
}

Trying to convert this code to use chunks_mut() just like the lighting filters, we stumble on an issue: if the outer loop is iterating over columns, rather than rows, the output slices for all individual columns overlap (because the pixels are stored in row-major order). We need some abstraction, like a matrix slice, that can be split into non-overlapping mutable subslices by rows or by columns.

The first thing that comes to mind is to try using the Matrix type from the nalgebra crate which does have that functionality. However, it turns out that nalgebra doesn’t currently support rayon or even have by-row or by-column iterators. I tried implementing my own iterators but that required some very non-obvious unsafe code with odd trait bound restrictions which I really wasn’t sure were correct. Thus, I scrapped that code and made my own wrapper for the ImageSurface which only contains things needed for this particular use case.

To reiterate, we need a wrapper that:

  • provides write access to the image pixels,
  • can be split by row or by column into non-overlapping chunks,
  • is Send, i.e. can be safely sent between threads (for parallelizing).

Here’s what I came up with:

struct UnsafeSendPixelData<'a> {
    width: u32,
    height: u32,
    stride: isize,
    ptr: NonNull<u8>,
    _marker: PhantomData<&'a mut ()>,
}

unsafe impl<'a> Send for UnsafeSendPixelData<'a> {}

impl<'a> UnsafeSendPixelData<'a> {
    /// Creates a new `UnsafeSendPixelData`.
    ///
    /// # Safety
    /// You must call `cairo_surface_mark_dirty()` on the
    /// surface once all instances of `UnsafeSendPixelData`
    /// are dropped to make sure the pixel changes are
    /// committed to Cairo.
    #[inline]
    unsafe fn new(
        surface: &mut cairo::ImageSurface,
    ) -> Self {
        assert_eq!(
            surface.get_format(),
            cairo::Format::ARgb32
        );
        let ptr = surface.get_data().unwrap().as_mut_ptr();

        Self {
            width: surface.get_width() as u32,
            height: surface.get_height() as u32,
            stride: surface.get_stride() as isize,
            ptr: NonNull::new(ptr).unwrap(),
            _marker: PhantomData,
        }
    }

    /// Sets a pixel value at the given coordinates.
    #[inline]
    fn set_pixel(&mut self, pixel: Pixel, x: u32, y: u32) {
        assert!(x < self.width);
        assert!(y < self.height);

        let value = pixel.to_u32();

        unsafe {
            let ptr = self.ptr.as_ptr().offset(
                y as isize * self.stride + x as isize * 4,
            ) as *mut u32;
            *ptr = value;
        }
    }

    /// Splits this `UnsafeSendPixelData` into two at the
    /// given row.
    ///
    /// The first one contains rows `0..index` (index not
    /// included) and the second one contains rows
    /// `index..height`.
    #[inline]
    fn split_at_row(self, index: u32) -> (Self, Self) {
        assert!(index <= self.height);

        (
            UnsafeSendPixelData {
                width: self.width,
                height: index,
                stride: self.stride,
                ptr: self.ptr,
                _marker: PhantomData,
            },
            UnsafeSendPixelData {
                width: self.width,
                height: self.height - index,
                stride: self.stride,
                ptr: NonNull::new(unsafe {
                    self.ptr
                        .as_ptr()
                        .offset(index as isize * self.stride)
                }).unwrap(),
                _marker: PhantomData,
            },
        )
    }

    /// Splits this `UnsafeSendPixelData` into two at the
    /// given column.
    ///
    /// The first one contains columns `0..index` (index not
    /// included) and the second one contains columns
    /// `index..width`.
    #[inline]
    fn split_at_column(self, index: u32) -> (Self, Self) {
        assert!(index <= self.width);

        (
            UnsafeSendPixelData {
                width: index,
                height: self.height,
                stride: self.stride,
                ptr: self.ptr,
                _marker: PhantomData,
            },
            UnsafeSendPixelData {
                width: self.width - index,
                height: self.height,
                stride: self.stride,
                ptr: NonNull::new(unsafe {
                    self.ptr
                        .as_ptr()
                        .offset(index as isize * 4)
                }).unwrap(),
                _marker: PhantomData,
            },
        )
    }
}

The wrapper contains a pointer to the data rather than a mutable slice of the data, so the intermediate pixels (which cannot be accessed through set_pixel()) are not mutably aliased between different instances of UnsafeSendPixelData.

Now it’s possible to implement an iterator over the rows or the columns for this wrapper, however I went with a different, simpler approach: using rayon‘s scope functionality which allows spawning worker threads directly into rayon‘s thread pool.

First, let’s change the existing code to operate on individual rows or columns, just like we did with the lighting filters:

// The following loop assumes the first row or column of
// `output_data` is the first row or column inside `bounds`.
let mut output_data = if vertical {
    output_data.split_at_column(bounds.x0 as u32).1
} else {
    output_data.split_at_row(bounds.y0 as u32).1
};

for i in other_axis_min..other_axis_max {
    // Split off one row or column and launch its processing
    // on another thread. Thanks to the initial split before
    // the loop, there's no special case for the very first
    // split.
    let (mut current, remaining) = if vertical {
        output_data.split_at_column(1)
    } else {
        output_data.split_at_row(1)
    };

    output_data = remaining;

    // Helper function for setting the pixels.
    let mut set_pixel = |j, pixel| {
        // We're processing rows or columns one-by-one, so
        // the other coordinate is always 0.
        let (x, y) = if vertical { (0, j) } else { (j, 0) };
        current.set_pixel(pixel, x, y);
    };

    // Processing the first pixel
    // <...>

    // Inner loop
    for j in main_axis_min + 1..main_axis_max {
        // <...>
    }
}

I could avoid the current and the base_i arguments to the set_pixel closure because I can declare the closure from within the loop, whereas in the lighting filters code the compute_output_pixel closure had to be used and so declared outside of the main loop.

Now it’s a simple change to split the work across rayon‘s threads:

// The following loop assumes the first row or column of
// `output_data` is the first row or column inside `bounds`.
let mut output_data = if vertical {
    output_data.split_at_column(bounds.x0 as u32).1
} else {
    output_data.split_at_row(bounds.y0 as u32).1
};

// Establish a scope for the threads.
rayon::scope(|s| {
    for i in other_axis_min..other_axis_max {
        // Split off one row or column and launch its
        // processing on another thread. Thanks to the
        // initial split before the loop, there's no special
        // case for the very first split.
        let (mut current, remaining) = if vertical {
            output_data.split_at_column(1)
        } else {
            output_data.split_at_row(1)
        };

        output_data = remaining;

        // Spawn the thread for this row or column.
        s.spawn(move |_| {
            // Helper function for setting the pixels.
            let mut set_pixel = |j, pixel| {
                // We're processing rows or columns
                // one-by-one, so the other coordinate is
                // always 0.
                let (x, y) =
                    if vertical { (0, j) } else { (j, 0) };
                current.set_pixel(pixel, x, y);
            };

            // Processing the first pixel
            // <...>

            // Inner loop
            for j in main_axis_min + 1..main_axis_max {
                // <...>
            }
        });
    }
});

Let’s measure the performance. At the end of the previous section we had:

└─ time ./rsvg-convert -o temp.png mobile_phone_01.svg
7.47user 0.63system 0:06.04elapsed 134%CPU (0avgtext+0avgdata 271328maxresident)k
0inputs+2432outputs (0major+714460minor)pagefaults 0swaps

And now after the parallelization:

└─ time ./rsvg-convert -o temp.png mobile_phone_01.svg
10.32user 1.10system 0:04.57elapsed 250%CPU (0avgtext+0avgdata 272588maxresident)k
0inputs+2432outputs (0major+1009498minor)pagefaults 0swaps

We cut another 1.5 seconds and further increased the CPU utilization!

Conclusion

Rayon is an excellent crate which provides a multitude of ways to safely parallelize Rust code. It builds on idiomatic concepts such as iterators, but also offers other convenient tools for parallelizing code when standard iterators aren’t sufficient or would be awkward to use. It uses Rust’s type system to statically guarantee the absence of data races, and it manages the low-level details on its own, allowing the programmer to focus on the actual computation.
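
As a generic illustration (not code from librsvg), the iterator-based approach is as simple as swapping a standard iterator for its parallel counterpart:

use rayon::prelude::*;

fn main() {
    // Sum of squares computed on rayon's thread pool; into_par_iter() is the
    // drop-in parallel counterpart of the standard range iterator.
    let total: u64 = (0..1_000_000u64).into_par_iter().map(|x| x * x).sum();
    println!("{}", total);
}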

The parallelized filters are now included in the latest development release of librsvg, for testing whether using multiple threads leads to any issues downstream.

AbiWord: It's alive

No, AbiWord is not dead. The project has been moving very slowly due to very few contributions. But sometimes a spark ignites things.

Martin needed to fix a major issue we had related to how AbiWord was using Gtk incorrectly.

Then I took some time to perform two long overdue tasks:

1. Moving to git. We have had a complete mirror of the Subversion repository for a few years on Github. But now that GNOME moved to their own Gitlab, I felt that this was a more appropriate place for AbiWord. So we did. Thanks to Carlos Soriano, AbiWord is now hosted in GNOME World.

2. Making AbiWord available as a Flatpak. This will alleviate all the issues plaguing us where distributions never update the app, even on the stable branch. For a while I had a manifest to build it; all it needed were a few adjustments before reaching the hosting stage. Thanks to the Flathub team, AbiWord 3.0.2+ is now available on Flathub. Check it out.

The next steps:

1. Release 3.0.3 (the current stable branch) and update the Flatpak.

2. Release 3.next (I think it will be 3.2) as the next stable branch. This one includes a large set of fixes that should address a lot of crashes and memory corruption.

3. Work on more incremental releases. With Flatpak this will be easier.

4. Gtk4 port: we might hit some roadblocks with that.

August 09, 2018

Developer Center Initiative – Meeting Summary 8th August

Yesterday we had the second meeting about the Developer Center Initiative. We had 14 attendees with participation from HotDoc, GJS, Purism, Builder and more.

The primary topic for this week was to let developers review and demonstrate website prototypes based on different technologies. This is where we would like your opinion too! We are particularly interested in choosing the solution which..

  • ..can unify our documentation and API reference.
  • ..is maintainable.
  • ..is easy to contribute to for writers/developers.

I would like to thank everyone for working on the presented solutions! The test instances are not fully functional of course, but they prove that it’s a possible direction we can take if we choose to.

Django

The Django instance is based on the software stack previously used for the Ubuntu Phone developer portal (an example can be seen on the Wayback Machine). It is a dynamic CMS which uses minimal Bootstrap CSS, with the possibility to extend its functionality using plug-ins. Users registered in the Django instance can edit GNOME documentation using the built-in WYSIWYG editor, and the documentation can be searched across all programming languages through a unified search. It can be set up for multiple human languages with the possibility to set fallback languages. It is also possible to create redirects and to stage changes before they are published.

The Ubuntu Portal used language-specific scripts to import API documentation by reading documents generated for each language. For GObject documentation, an importer would need to be written. One possibility could be to modify HotDoc to generate the documentation into Django’s database.

One of the things discussed in relation to Django was whether a dynamic website would be necessary (versus static HTML). It creates a heavier load but could offer some extra possibilities like comments/code attachments or generated code packages.

HotDoc

HotDoc generates a static website which can become a replacement for the whole Developer Center. It can generate the API reference itself from GObject and has unified search. It links languages together so you can easily switch from one programming language to another. HotDoc allows us to specify subprojects, meaning that library maintainers can generate documentation on their own and the documentation can then be positioned in HotDoc’s sitemap. It is meant to replace GtkDoc this way. Documentation is written in Markdown and is editable through “Edit in Gitlab” links.

The project has not seen much development lately, since there are not many users and the current users of HotDoc already have most of the features they want. Mathieu and Thiblahute gave examples such as the PiTiVi documentation and the GStreamer documentation (work in progress). Mathieu assessed that to use HotDoc for the developer center, the existing documentation would need to be ported to Markdown. He has implemented a Pandoc reader which could be usable for this purpose. Furthermore, libraries would need to be ported from GtkDoc to HotDoc.

VuePress

Evan has built a GJS Guide for his internship based on VuePress, which is used by VueJS. It supports multiple human languages via directory overrides and has search. Documentation is written in Markdown, which VuePress compiles into tutorials with tables of contents, editable through “Edit in Gitlab” links. This could also be usable for the API reference, since GObject-Introspection can now generate Markdown and HTML from GtkDoc and create links between documentation.

Currently there is no example of API reference ported to VuePress – the test instance currently has it in DevDocs. To generate the API documentation, a CI runner would need to have GObject-introspection generate each language into a directory and create a dropdown component in VuePress to switch between directories. Some extra CSS styling would need to be done to make it conform to Allan’s theme. It currently does not live in GNOME’s Gitlab due to slow CI for Gitlab Pages.

Sphinx

Sphinx was also on the radar, but we did not have time to discuss it in detail. Currently no test instance exists either, except for the PyGObject documentation, which is hosted on Read the Docs. Christoph Reiter, who is one of the documentation maintainers, has shared his thoughts on using Sphinx for the Developer Center in the Gitlab discussion linked above.

Next Steps

With the demonstrated instances and detailed discussion we hope to make a decision on a specific technology at the next meeting. If you have questions or comments on the presented technologies, please let your voice be heard in the linked Gitlab discussion threads! It will help inform the final decision-making.

Finally, check out the framadate we have running for arranging the next meeting in 2 weeks. See you there!

libgepub + rust

In 2010 I was working on evince, the GNOME PDF document viewer, trying to add some accessibility to PDF files. That was really hard, not because of the GTK+ or ATK technology, but because of the PDF format itself. The PDF format is really cool for printing, because you know that the piece of paper will look the same as the PDF document, and because it's vector it scales without losing quality and the files are smaller than image files. But almost no PDF files have any metadata for sections, headings, tables and so on, this depends on the creation tool, and it's really hard to deal with the PDF text content, because you don't even know whether the text you're reading is really in the same order that you would read it from the PDF.

After my fight against the PDF format hell and poppler, I discovered the epub format, which is a really simple format for electronic books. An epub is a zip with some XML files describing the book index, and every chapter is an xhtml file. xhtml is a good format compared to PDF because you can parse it easily with any XML lib and the content is tagged and well structured, so you know what's a heading, what's a paragraph, etc.

So I started to write a simple C library to read epub files, thinking about adding epub support to evince. That's how libgepub was born. I tried to integrate libgepub into evince and got something working, rendering with WebKit, but nothing really useful, because evince needs pages and it's not easy to split an xhtml file into pages of the same height: xhtml is continuous text and it adapts to the page width. So I gave up and left that branch.

gnome-books

After some time, I discovered gnome-books, and the idea of epub support came to me again. Books had initial epub support that consisted only of showing the books in the collection, but when you clicked on a book, an error message was shown because there was no epub preview support.

libgepub has GIR support and Books is written in gjs, so it was easy for me to add initial support for epub documents using libgepub and rendering with WebKit.

So libgepub is used in a GNOME app to render epub documents, and for that reason GNOME now depends on libgepub.

Rustifying everything

In 2017 I discovered Rust and fell in love with this language, so I wanted to write some code with Rust and I started porting this epub library. I ended up creating the epub crate, which is basically the same implementation but in Rust instead of C; the API is really similar.

After that, I read somewhere that Federico was porting the librsvg code to Rust, so I thought that I could do the same with libgepub and replace almost all the C code with this new Rust crate.

I copied all the autotools code to build using cargo, and I copied the glue code from librsvg and adapted it to my own lib. That worked really well and I was able to remove the core libgepub C code and replace it with the new Rust code.

But Rust is really new and it was not fully supported in all distributions, so a release depending on Rust would make the release people's lives harder. That's the reason I decided to keep the C version for now and wait a little to make the Rust migration.

And here we are: libgepub has changed a little, with some fixes and small changes, but the main change is that we're now using meson to build instead of autotools, and it's not easy to integrate a Rust+cargo lib in the meson description. There's a PR in meson to support this, but it seems that the meson devs don't like the idea.

So yesterday I was playing around with meson to get the Rust code working in libgepub again, and it was not easy, but now I have a working build configuration. It's a hack with a custom script that builds with cargo, but it's working so I'm really happy.

Show me the code

To integrate the Rust epub crate into the libgepub C lib, I needed to write a simple cargo project with the glue code that converts the Rust types to glib/C types and exposes them in a C lib. That's libgepub_internals, which is full of Rust+glib glue code like:

#[no_mangle]
pub extern "C" fn epub_new(path: *const libc::c_char) -> *mut EpubDoc {
    // Convert the C string coming from glib into a Rust String.
    let my_path = unsafe { &String::from_glib_none(path) };
    let doc = EpubDoc::new(my_path);
    let doc = doc.unwrap();
    // Hand ownership of the heap-allocated EpubDoc over to the C side.
    Box::into_raw(Box::new(doc))
}

#[no_mangle]
pub unsafe extern "C" fn epub_destroy(raw_doc: *mut EpubDoc) {
    assert!(!raw_doc.is_null());
    // Re-box the raw pointer so the EpubDoc is dropped and freed here.
    let _ = Box::from_raw(raw_doc);
}

Then, in gepub-doc.c, which is the GObject that exposes the GIR API, I added calls to these functions, declaring each function prototype that will be resolved from the Rust static lib at link time:

// Rust
void      *epub_new(char *path);
void       epub_destroy(void *doc);
void      *epub_get_resource(void *doc, const char *path, int *size);
void      *epub_get_resource_by_id(void *doc, const char *id, int *size);
void      *epub_get_metadata(void *doc, const char *mdata);
void      *epub_get_resource_mime(void *doc, const char *path);
void      *epub_get_resource_mime_by_id(void *doc, const char *id);
void      *epub_get_current_mime(void *doc);
void      *epub_get_current(void *doc, int *size);
void      *epub_get_current_with_epub_uris(void *doc, int *size);
void       epub_set_page(void *doc, guint page);
guint      epub_get_num_pages(void *doc);
guint      epub_get_page(void *doc);
gboolean   epub_next_page(void *doc);
gboolean   epub_prev_page(void *doc);
void      *epub_get_cover(void *doc);
void      *epub_resource_path(void *doc, const char *id);
void      *epub_current_path(void *doc);
void      *epub_current_id(void *doc);
void      *epub_get_resources(void *doc);
guint      epub_resources_get_length(void *er);

gchar     *epub_resources_get_id(void *er, gint i);
gchar     *epub_resources_get_mime(void *er, gint i);
gchar     *epub_resources_get_path(void *er, gint i);

And then calling these functions:

static gboolean
gepub_doc_initable_init (GInitable     *initable,
                         GCancellable  *cancellable,
                         GError       **error)
{
    GepubDoc *doc = GEPUB_DOC (initable);

    g_assert (doc->path != NULL);
    doc->rust_epub_doc = epub_new (doc->path);
    if (!doc->rust_epub_doc) {
        if (error != NULL) {
            g_set_error (error, gepub_error_quark (), GEPUB_ERROR_INVALID,
                         "Invalid epub file: %s", doc->path);
        }
        return FALSE;
    }

    return TRUE;
}

To make this work, I needed to add the libgepub_internals dependency to the libgepub meson.build like this:

gepub_deps = [
  dependency('gepub_internals', fallback: ['libgepub_internals', 'libgepub_internals_dep']),
  dependency('webkit2gtk-4.0'),
  dependency('libsoup-2.4'),
  dependency('glib-2.0'),
  dependency('gobject-2.0'),
  dependency('gio-2.0'),
  dependency('libxml-2.0'),
  dependency('libarchive')
]

This definition looks for the gepub_internals lib in the libgepub_internals subproject with this meson.build:

project(
  'libgepub_internals', 'rust',
  version: '3.29.6',
  license: 'GPLv3',
)

libgepub_internals_version = meson.project_version()
version_array = libgepub_internals_version.split('.')
libgepub_internals_major_version = version_array[0].to_int()
libgepub_internals_minor_version = version_array[1].to_int()
libgepub_internals_version_micro = version_array[2].to_int()

libgepub_internals_prefix = get_option('prefix')

cargo = find_program('cargo', required: true)
cargo_vendor = find_program('cargo-vendor', required: false)
cargo_script = find_program('scripts/cargo.sh')
grabber = find_program('scripts/grabber.sh')
cargo_release = find_program('scripts/release.sh')

c = run_command(grabber)
sources = c.stdout().strip().split('\n')

cargo_build = custom_target('cargo-build',
                        build_by_default: true,
                        input: sources,
                        output: ['libgepub_internals'],
                        install: false,
                        command: [cargo_script, '@CURRENT_SOURCE_DIR@', '@OUTPUT@'])

libgepub_internals_lib = static_library('gepub_internals', cargo_build)

cc = meson.get_compiler('c')
th = dependency('threads')
libdl = cc.find_library('dl')

libgepub_internals_dep = declare_dependency(
  link_with: libgepub_internals_lib,
  dependencies: [th, libdl],
  sources: cargo_build,
)

Here I'm using a custom_target to build the lib with a custom script that simply calls cargo and then copies the resulting lib to the correct place:

#!/bin/bash
if [[ $DEBUG = true ]]
then
    echo "DEBUG MODE"
    cargo build --manifest-path $1/Cargo.toml && cp $1/target/debug/libgepub_internals.a $2.a
else
    echo "RELEASE MODE"
    cargo build --manifest-path $1/Cargo.toml --release && cp $1/target/release/libgepub_internals.a $2.a
fi

Then I declared the static_library and the dependency with declare_dependency. I needed to add threads and dl because the epub crate depends on them, and this works!

I'll need to vendor all the dependency crates with cargo-vendor for releases, but I think that this is working and it's the way to go with libgepub.
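
The vendoring step could look roughly like this (a sketch; cargo-vendor is a separately installed cargo subcommand at the time of writing, and the paths are illustrative):

# Run inside the libgepub_internals subproject: this downloads all the
# dependency crates into vendor/ and prints a .cargo/config snippet that
# makes offline release builds pick them up from there.
cargo vendor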

The future of libgepub + rust

Currently, with the libgepub_internals lib, the gepub-doc.c code basically just provides a GObject and GIR information to be able to work with epub docs, but the real work is done in Rust. gnome-class provides a simple way to build this GObject with Rust code, but currently it's not complete and there's no way to generate GIR. In the future, it could be cool to remove the gepub-doc.c code and generate everything with gnome-class. I can wait until Federico writes the piece of code that I need for this, or maybe I should contribute to gnome-class to be able to do it.

libgepub also provides a widget that inherits from WebKitWebView to render the book. That widget is written in C to provide GIR data as well, and we can try to do the same and use gnome-class to write this widget.

But for now we're really far from this; we need to spend some time improving gnome-class to be able to write all the code in Rust and expose the GObject GIR. In the meantime we can start to use Rust with this glue code, and that's great, because if you have a GObject library and you want to migrate it to Rust, you don't need to migrate all the code at once: you can do the same thing that librsvg is doing and migrate function by function, and that's really cool.

GUADEC 2018

A few weeks ago I attended GUADEC in Almeria, Spain. The travel was a bit of an adventure, because Julian and I went there and back from Italy by train. It was great though, because we had lots of time to hack on Fractal on the train.

We also met Bastian on the train from Madrid to Almeria

Conference Days

The conference days were great, though I didn’t manage to see many talks because I kept getting tied up in interesting discussions (first world problem, I know :D). I did give a talk of my own though, about my work at Purism on UI patterns for making GNOME apps work on mobile. There is a video recording of the talk, and here are my slides.

The main thing I tried to get across is that Purism isn’t trying to create a separate ecosystem or platform, but to make the GNOME platform better upstream. We ship vanilla GNOME on our laptops, and we want to do the same on phones. It’s of course early days, and it will take a while for everything to get into place, but it feels great to work for a company that has upstream-first as a core principle.

The biggest area where our efforts will make an impact upstream in the short term are the Libhandy widgets Adrien and Guido have been working on. These widgets allow regular GNOME/GTK apps to scale to smaller sizes using adaptive UI patterns. The patterns are extending GNOME’s existing HIG in a few small details only, and can be used to make many existing GNOME apps adaptive without requiring major UI changes. We’re still experimenting with them, but once the patterns are solid and the widgets stable, we will work to upstream them into GTK.

Using Libhandy widgets will not only enable apps to run on phones, but also yield benefits on the desktop. For example, HdyColumn solves a very old problem many GTK apps have: Lists that need to grow with the window’s width, but also need a maximum width to ensure legibility. By enabling this, HdyColumn allows apps to work better on both very small and very large screens.

BoF Days

The BoF days were packed with interesting sessions, but sadly many of them happened simultaneously, so I was only able to attend a few of them. However, the ones I did attend were all incredibly productive and interesting, and I’m excited about the things we worked on and planned for the future.

Monday: Librem 5

On Monday I attended the all-day Librem 5 BoF, together with my colleagues from Purism, and some community members, such as Jordan and Julian from Fractal.

We talked about apps, particularly the messaging situation and Fractal. We discussed what will be needed in order to split the app, make the UI adaptive, and get end-to-end encryption. Daniel’s work on the database and Julian’s message history refactor are currently laying the groundwork for these.

On the shell side we talked through the design of various parts of the shell, such as keyboard, notifications, multitasking, and gestures. Though many of those things won’t be implemented in the near future, we have a plan for where we’re going with these, and getting designers and developers in one room was very productive for working out some of the details.

We also discussed a number of exciting new widgets to make it easier to get GNOME apps to work at smaller sizes, such as a new adaptive preferences window, and a way to allow modal windows to take up the entire screen at small sizes.

Multi-Monitor & Theming

On Tuesday we had a Multi-Monitor BoF, where we discussed multi-monitor behaviors with people from System76 and Ubuntu, among others. The most interesting parts to me were the discussions about adding some usecase-driven modes, such as a presentation mode, and a potential new keyboard-driven app switching interface (think Alt-Tab, but good). All of this will require a lot of work, but it’s great to see downstreams like System76 interested in driving initiatives like this.

In the afternoon we had the Theming & Ecosystem BoF, where we got designers, upstream GNOME developers, and people from various downstreams together to talk about the state of theming, and its impact on our ecosystem.

The basic problem we were discussing is that app developers want stable APIs and control over what their app looks like on user’s systems, while some distributions want to apply their own branding to everything. The current situation is pretty bad, because users end up with broken apps, developers constantly need to fix bugs for setups they didn’t want to support in the first place, and distributions need to invest lots of resources into building forever-slightly-broken custom themes. We discussed a number of possible approaches to tackle this problem, in order to make our platform easier to target. I’ll blog about this in more detail, but I’m excited about the possibility of finally solving this long-standing problem.

We also talked about some of our future plans with regard to icons. This includes a new “library” of symbolics, and a push for app developers to ship symbolic icons with their apps, rather than linking to random strings which have to be maintained forever. We also introduced the new app icon style initiative, which will make it drastically easier to make icons because they are simpler, more geometric, and there are fewer sizes to draw. All of this will still develop over the next cycle, since it’s not going to ship until 3.32.

Jakub presenting the new icon style

Wednesday: What is a GNOME app?

On Wednesday we had a small but very productive BoF to work on a proposal for a policy for including new apps as part of GNOME, and more generally getting a clearer definition of what it means for a project to be part of GNOME. There is currently no clear process for the inclusion of new projects as part of GNOME, so it doesn’t happen very often, and usually in a very disorganized fashion. This is a problem, because it leaves people who are excited about making new GNOME apps without a clear path to do so.

For example, both Fractal and Podcasts were built from the ground up to be GNOME apps, but still haven’t made it to the GNOME/ group on Gitlab officially, because there’s no clear policy. The new Calls app Bob is building for the Librem 5 is in the same limbo, just waiting around for someone to say if/how it can become a part of GNOME officially.

At the BoF we drafted a proposal for an explicit inclusion policy. The idea is for apps that follow a set of criteria (e.g. following the HIG, using our tech stack) to be able to apply for inclusion, and to have some kind of committee that could review these requests.

This is only a very rough proposal for now, but I’m excited about the potential it has to bring in more developers from the wider ecosystem. And of course, as the designer of a half dozen semi-official GNOME apps I’m very selfishly interested in getting this in place ;)

Thanks everyone!

Some of the Fractal core team meeting for the first time at the pre-registration event (Daniel, Jordan, Julian, and me)

In addition to all of the above, it was great to meet and hang out with so many of the awesome people in our community at social events, beach BoFs, and ad-hoc hacking sessions on the corridor. It’s hard to believe that one year ago I came to GUADEC as a newcomer. This year it felt like coming home.

Thanks everyone, and see you next year o/

My final report for GSoC 2018

The Google Summer of Code 2018 is coming to an end for me, so it means that it’s time for the final report!

The tasks

The tasks I’ve actually done are quite different from my initial proposal. The reason is that Julian (the other GSoC student working on Fractal) and I applied for the same project and we were both accepted for it. However, there were some tasks only one of us had proposed (we have each done those), and our mentor reassigned the tasks we had in common. This was especially true for me, as he proposed many other tasks, so I may have ended up doing even more than I initially planned! So I haven’t done all the tasks I proposed, but the ones I haven’t done were done by Julian, and I still made a lot of other improvements. All of my merge requests have been merged into the master branch of the Fractal repository. So now, let’s talk about the tasks in more detail.

Localization support

My first task was to bring internationalization to Fractal. I completed this task during the first week of the working period: I implemented localization support, wrote a first French translation and wrote some blog posts about how to use gettext in a Rust program.

The code used for the internationalization of Fractal has also been used in GNOME Podcasts.

Here are my merge requests related to this task:

The redesign of the room directory

Then I moved on to working on the redesign of the room directory. For this task, I had to improve the layout of the room directory by moving the search bar to the header bar, revamping the layout of the search results and implementing the ability to search for rooms on arbitrary homeservers.

There is only one thing left to do: enclosing the GtkListBox in a column whose width varies according to the width of the window. It couldn’t be implemented yet, as this feature will use a custom GTK+ widget that is still being developed in libhandy (HdyColumn).

Here is what the new room directory looks like:

Capture du 2018-08-08 12-13-24.png

Here are my merge requests related to this task:

The media viewer

I’ve created a media viewer for Fractal (although for now it only works with pictures). Its purpose is to give a better view of the images within a room, to be able to zoom in and out of them, to navigate between the different images of the room in chronological order, to enter full screen mode and to save a copy of the media to the filesystem. I made a first implementation and then did a lot of other improvements. I’ve spent about a month working on it.

The zoom in the media viewer still needs to be improved, as the pictures are a little bit blurred and it’s not possible to zoom beyond 100%. There are also optimizations to do, as the application becomes very slow when trying to zoom on large pictures.

Here is what the media viewer looks like:

Capture du 2018-08-08 12-27-31.png

Here are my merge requests related to this task:

Improve the behavior of the “New Messages” divider

For this task, I couldn’t completely implement the behavior of the divider, as Julian was working (and has been until the end of GSoC) on a big refactoring of the message history and it would have been useless to do it before the refactoring was complete. There was also no complete design for it. However, I was able to fix its behavior (it was marking the user’s own messages as new and sometimes didn’t appear at all) and add the ability to refresh the last viewed message at each start-up.

What is left to do is to define how and when the divider should disappear, and what should be done when entering a room with new messages (should we jump to the first new message? ask the user if they want to do that? act as if nothing happened?).

Here is what this divider looks like:

Capture du 2018-07-13 10-47-30

Here are my merge requests related to this task:

Multiline message input

The message input area was using a GtkEntry before, but this widget isn’t suitable if we want to have multiple lines in a message. So I replaced it with a GtkSourceView in order to be able to insert new lines with Shift+Enter and to have syntax highlighting when Markdown is enabled.

Here is what the new message input with Markdown highlight looks like:

Capture du 2018-07-23 19-09-39

Here are my merge requests related to this task:

Message context menu

This is one of the last two tasks I worked on: I had to implement a context menu for messages, opened with a secondary click. The menu offers several options (quoting, copying the text, deleting a message or viewing its JSON source). I worked on many fixes after the first version of it.

There is still some work to do, like making the context menu aware of when we are clicking on a link (so that the user can copy that link) or hiding the “Delete Message” option when the user doesn’t have the right to delete the message.

Here is what the popover looks like:

Capture du 2018-08-06 17-07-30

Here is what the “Message Source” window looks like:

Capture du 2018-08-06 17-08-46

Here are my merge requests related to this task:

Improve the styling of quotes

My last task for GSoC was to improve the styling of quotes in the message history by making the text of quotes dimmed, with a 6px left margin, a 2px blue border and vertical space between quotes.

Here is what it looks like:

Capture du 2018-08-01 19-09-36

And finally, here are my merge requests related to this task:

Conclusion

This experience has been really rewarding for me. I got to discover the GNOME community and see how much thoughtful work people put into building GNOME and making it progress. They are very welcoming and inclusive, and that’s something I really liked. The Fractal community is great too. I want to thank my mentor, Daniel García Moreno, for his guidance. I never thought I would be able to do this much, and I’m proud of currently being one of the three main contributors to the Fractal repository! Julian and I delivered many improvements to Fractal during this time.

GSoC has also allowed me to greatly improve my general programming skills, my knowledge of git and GTK+, and my experience with real-world programming. And I am way better at writing Rust programs now!

In mid-September, I will start classes again for the first year of my master’s degree (software engineering) at Sorbonne Université, so I will have less free time, but I will definitely do my best to continue contributing to Fractal and to write some blog posts. I will apply to become a GNOME Foundation member in a few months. I would also really like to continue working on my library rlife and then create a new GNOME application for cellular automata with it.

How the 60-evdev.hwdb works

libinput made a design decision early on to use physical reference points wherever possible. So your virtual buttons are X mm high/across, the pointer movement is calculated in mm, etc. Unfortunately this exposed us to a large range of devices that don't bother to provide that information or just give us the wrong information to begin with. Patching the kernel for every device is not feasible so in 2015 the 60-evdev.hwdb was born and it has seen steady updates since. Plenty a libinput bug was fixed by just correcting the device's axis ranges or resolution. To take the magic out of the 60-evdev.hwdb, here's a blog post for your perusal, appreciation or, failing that, shaking a fist at. Note that the below is caller-agnostic, it doesn't matter what userspace stack you use to process your input events.

There are four parts that come together to fix devices: a kernel ioctl and a trifecta of udev rules, hwdb entries and a udev builtin.

The kernel's EVIOCSABS ioctl

It all starts with the kernel's struct input_absinfo.


struct input_absinfo {
    __s32 value;
    __s32 minimum;
    __s32 maximum;
    __s32 fuzz;
    __s32 flat;
    __s32 resolution;
};
The three values that matter right now are minimum, maximum and resolution. The "value" is just the most recent value on this axis; ignore fuzz/flat for now. The min/max values simply specify the range of values the device will give you, and the resolution specifies how many values per mm you get. Simple example: an x axis given at min 0, max 1000 with a resolution of 10 means your device is 100mm wide. There is no requirement for min to be 0, btw, and there's no clipping in the kernel so you may get values outside min/max. Anyway, your average touchpad looks like this in evemu-record:

# Event type 3 (EV_ABS)
# Event code 0 (ABS_X)
# Value 2572
# Min 1024
# Max 5112
# Fuzz 0
# Flat 0
# Resolution 41
# Event code 1 (ABS_Y)
# Value 4697
# Min 2024
# Max 4832
# Fuzz 0
# Flat 0
# Resolution 37
This is the information returned by the EVIOCGABS ioctl (EVdev IOCtl Get ABS). It is usually run once on device init by any process handling evdev device nodes.

Because plenty of devices don't announce the correct ranges or resolution, the kernel provides the EVIOCSABS ioctl (EVdev IOCtl Set ABS). This allows overwriting the in-kernel struct with new values for min/max/fuzz/flat/resolution, processes that query the device later will get the updated ranges.
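
For illustration, this is roughly what such an override looks like from C (a minimal sketch with error handling omitted; the device path and the resolution value are made up):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int main(void)
{
    struct input_absinfo abs;
    int fd = open("/dev/input/event7", O_RDWR);

    ioctl(fd, EVIOCGABS(ABS_X), &abs);  /* read the current min/max/resolution */
    abs.resolution = 30;                /* pretend the device is 30 units/mm */
    ioctl(fd, EVIOCSABS(ABS_X), &abs);  /* write the updated values back */

    close(fd);
    return 0;
}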

udev rules, hwdb and builtins

The kernel has no notification mechanism for updated axis ranges so the ioctl must be applied before any process opens the device. This effectively means it must be applied by a udev rule. udev rules are a bit limited in what they can do, so if we need to call an ioctl, we need to run a program. And while udev rules can do matching, the hwdb is easier to edit and maintain. So the pieces we have are: a hwdb that knows when to change what (and the values), a udev program to apply the values and a udev rule to tie those two together.

In our case the rule is 60-evdev.rules. It checks the 60-evdev.hwdb for matching entries [1], then invokes the udev-builtin-keyboard if any matching entries are found. That builtin parses the udev properties assigned by the hwdb and converts them into EVIOCSABS ioctl calls. These three pieces need to agree on each other's formats - the udev rule and hwdb agree on the matches and the hwdb and the builtin agree on the property names and value format.

By itself, the hwdb has no specific format beyond this:


some-match-that-identifies-a-device
 PROPERTY_NAME=value
 OTHER_NAME=othervalue
But since we want to match for specific use-cases, our udev rule assembles several specific match lines. Have a look at 60-evdev.rules again, the last rule in there assembles a string in the form of "evdev:name:the device name:content of /sys/class/dmi/id/modalias". So your hwdb entry could look like this:

evdev:name:My Touchpad Name:dmi:*svnDellInc*
 EVDEV_ABS_00=0:1:3
If the name matches and you're on a Dell system, the device gets the EVDEV_ABS_00 property assigned. The "evdev:" prefix in the match line is merely there to distinguish it from other match rules and avoid false positives. It can be anything; libinput unsurprisingly uses "libinput:" for its properties.

The last part now is understanding what EVDEV_ABS_00 means. It's a fixed string with the axis number as hex number - 0x00 is ABS_X. And the values afterwards are simply min, max, resolution, fuzz, flat, in that order. So the above example would set min/max to 0:1 and resolution to 3 (not very useful, I admit).

Trailing bits can be skipped altogether and bits that don't need overriding can be skipped as well provided the colons are in place. So the common use-case of overriding a touchpad's x/y resolution looks like this:


evdev:name:My Touchpad Name:dmi:*svnDellInc*
 EVDEV_ABS_00=::30
 EVDEV_ABS_01=::20
 EVDEV_ABS_35=::30
 EVDEV_ABS_36=::20
0x00 and 0x01 are ABS_X and ABS_Y, so we're setting those to 30 units/mm and 20 units/mm, respectively. And if the device is multitouch capable we also need to set ABS_MT_POSITION_X and ABS_MT_POSITION_Y to the same resolution values. The min/max ranges for all axes are left as-is.

The most confusing part is usually this: the hwdb is compiled into a binary database that needs updating whenever the hwdb entries change. A call to systemd-hwdb update does that job.
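
In practice that means something like the following after editing a hwdb file (commands as commonly used, not quoted from this post; re-plugging or re-triggering the device makes the new values take effect):

sudo systemd-hwdb update    # recompile the binary hwdb database
sudo udevadm trigger        # re-run the udev rules so the ioctls are applied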

So with all the pieces in place, let's see what happens when the kernel tells udev about the device:

  • The udev rule assembles a match and calls out to the hwdb,
  • The hwdb applies udev properties where applicable and returns success,
  • The udev rule calls the udev keyboard-builtin
  • The keyboard builtin parses the EVDEV_ABS_xx properties and issues an EVIOCSABS ioctl for each axis,
  • The kernel updates the in-kernel description of the device accordingly
  • The udev rule finishes and udev sends out the "device added" notification
  • The userspace process sees the "device added" and opens the device which now has corrected values
  • Celebratory champagne corks are popping everywhere, hands are shaken, shoulders are patted in congratulations of another device saved from the tyranny of wrong axis ranges/resolutions

Once you understand how the various bits fit together it should be quite easy to understand what happens. Then the remainder is just adding hwdb entries where necessary but the touchpad-edge-detector tool is useful for figuring those out.

[1] Not technically correct, the udev rule merely calls the hwdb builtin which searches through all hwdb entries. It doesn't matter which file the entries are in.

August 08, 2018

GNOME Software and automatic updates

For GNOME 3.30 we’ve enabled something that people have been asking for since at least the birth of the gnome-software project: automatically installing updates.

This of course comes with some caveats. Since it’s still not safe to auto-update packages (trust me, I triaged the hundreds of bugs), we will restrict automatic updates to Flatpaks. Although we do automatically download things like firmware updates, ostree content, and package updates, by default they’re deployed manually like before. I guess it’s important to say that the auto-update of Flatpaks is optional and can easily be turned off in the GUI, and that you’ll be notified when applications have been auto-updated and need restarting.

Another common complaint with gnome-software was that it didn’t show the same list of updates as command line tools like dnf. The internal refactoring required for auto-deploying updates also allows us to show updates that are available, but not yet downloaded. We’ll still try to auto-download them ahead of time if possible, but we won’t hide them until then. This does mean that “new” updates could take some time to download in the updates panel before either the firmware update is performed or the offline update is scheduled.

This also means we can add some additional UI controlling whether updates should be downloaded and deployed automatically. This doesn’t override the existing logic regarding metered connections or available battery power, but does give the user some more control without resorting to gsettings invocations on the command line.

Also notable for GNOME 3.30 is that we’ve switched to the new libflatpak transaction API, which both simplifies the flatpak plugin considerably, and it means we install the same runtimes and extensions as the flatpak CLI. This was another common source of frustration as anyone trying to install from a flatpakref with RuntimeRepo set will testify.

With these changes we’ve also bumped the plugin interface version, so if you have out-of-tree plugins they’ll need recompiling before they work again. After a little more polish, the new GNOME Software 3.29.90 will soon be available in Fedora Rawhide, and will thus be available in Fedora 29. If 3.30 is as popular as I think it might be, we might even backport gnome-software 3.30.1 into Fedora 28 like we did for 3.28 and Fedora 27 all those moons ago.

Comments welcome.

August 07, 2018

Five-or-More Modernisation: Now You Can Properly Play It

As Google Summer of Code is officially drawing to an end, all of my attention has been focused on making the Five or More Vala version feature-complete. As you probably already know from my previous blog post, the game was somewhat playable at that time, but it was missing some of the key features included in the old version.

So what’s new this time? First and foremost, you can surely notice the game board now sports a grid, which wasn’t there before. On the same note, there are also animations for clicking a piece on the board, for an improved gaming experience. For further accessibility, some header bar hints are available at different stages of the game: at the start of any new game, at the end of each game, as well as whenever there is no clear path between the initial position and the cell indicated by the user for the current move.


Overall final game look


By using libgnome-games-support, I was able to implement a high scores table, which gets updated each time the player achieves a score ranking in the top 10 highest scores for the chosen board size. The high scores for each of the three categories can also be viewed, as shown in the screencast below. Also, you can see I have done quite my fair share of testing the functionality and making sure everything worked as planned 😄.


High scores


Further changes include theme switching, although for the moment this only works for the vector themes available in Five or More, namely the ball and shape themes, as the implementation for raster images is not fully functional yet.


Changing theme


Also, the window size is now saved between runs, so each time the game starts it will take into account the last window size settings, including whether the window was full-screened or not.

As for the most exciting change revealed in this blog post, it concerns playing Five or More using keyboard controls. Basically, the user can play the game by navigating with the keyboard arrows, the home, end, page up, page down and space, enter or return keys, as described in the wikipage section for using the keyboard.


Playing Five-or-More using keyboard controls


If you have been following my GSoC journey closely, you probably remember me mentioning something about extra features in my first blog post, such as adding gamepad support, sounds, or making some design changes. I have to say I feel optimistic about getting some of these features done in the weeks to come, post GSoC. I feel the ground has already been laid for gamepad support by adding keyboard controls. Also, Five or More has already undergone some slight design changes, such as widget spacing and alignment.

Using Vundle from the Vim Flatpak

I (mostly) use Vim, and it’s available on Flathub. However: my plugins are managed with Vundle, which shells out to git to download and update them. But git is not in the org.freedesktop.Platform runtime that the Vim Flatpak uses, so that’s not going to work!

If you’ve read any of my recent posts about Flatpak, you’ll know my favourite hammer. I allowed Vim to use flatpak-spawn by launching it as:

flatpak run --talk-name=org.freedesktop.Flatpak org.vim.Vim

I saved the following file to /tmp/git:

#!/bin/sh
exec flatpak-spawn --host git "$@"

then ran the following Vim commands to make it executable, add it to the path, then fire up Vundle:

:r !chmod +x /tmp/git
:let $PATH = '/tmp:/app/bin:/usr/bin'
:VundleInstall

This tricks Vundle into running git outside the sandbox. It worked!

I’m posting this partly as a note to self for next time I want to do this, and partly to say “can we do better?”. In this specific case, the Vim Flatpak could use org.freedesktop.Sdk as its runtime, like many other editors do. But this only solves the problem for tools like git which are included in the relevant SDK. What if I’m writing Python and want to use pyflakes, which is not in the SDK?

DevConf India 2018

DevConf IN was organized at Christ University, Bangalore, on 04-05 August. It turned out to be a totally fun-packed, exciting weekend for me. I really had a great time meeting people from various other open source communities in India. I also delivered a talk on Flatpak, mainly focusing on the overall architecture and its benefits for users and developers.

Honestly speaking, I didn’t expect much from DevConf India, but the event was flawless in every sense. Even though the event was sponsored and organized by Red Hat, 48% of the speakers were non-Red-Hatters coming from other organizations, academia and the hobbyist community. It really brought out the vibe of the “community”.

The most popular track, in my opinion, was the “Cloud and Containers” track. There were other tracks like “Testing”, “Community” and “Design” which are often neglected at other conferences but are a big part of the SDLC. The keynotes by Ric Wheeler and Karanbir Singh were full of inspiration. Approximately 1300 participants attended the conference.

My talk on Flatpak went smoothly and I managed to keep it within the designated time slot \o/. I also got some questions and follow-ups, which I think was a good sign. I also plugged EndlessOS during the first couple of minutes, mentioning what problems it is trying to address using open source. Adrian also delivered a talk on Flatpak-ing apps.

I particularly engaged with folks (Sayan and Sinny) from teamsilverblue and fsmk. I also received this coffee mug swag from teamsilverblue :)

I am amazed and glad that the DevConf organizers pulled off this event at such a scale. They put in a lot of hard work and it paid off. Thank you organizers, volunteers, speakers and participants for making this such an amazing experience.

Next, I am leaving for GNOME.Asia in Taipei in a couple of days; another fun-packed weekend. I am really excited to meet people from the GNOME and openSUSE communities. I will also be visiting Endless’s Taipei office to meet my colleagues from the kernel team :)

Thank you for reading and happy hacking!

August 06, 2018

Improving todo.txt & Todoist plugin

The GSoC coding period just ended. I would first like to apologize for not posting updates about my work. I am working on improving the Todo.txt and Todoist integration in GNOME To Do. A lot of improvements were added to Todo.txt and Todoist during the coding period, and in this blog post I write about my journey and describe the implementation details.

Todo.txt & Todoist Updates

For those not familiar with todo.txt, it is a text-based format for storing tasks, and Todoist is quite a popular task manager application that can be used on smartphones as well as personal computers.

My aim for Todo.txt was to improve the current code, bring the plugin to feature parity with To Do and document the syntax of todo.txt. I was able to complete all of these tasks.

  1. The first task was to implement support for notes in the todo.txt plugin. Notes are basically a lengthier description of the task. Since the todo.txt format allows adding custom key:value pairs to the task description, I introduced a new key "note" whose value follows the key in quotes "". The only major challenge was handling quotes and special characters like newlines in the description. This was done by storing the notes in escaped form (i.e. adding an extra '/' before the special character and removing it during the parsing phase).
  2. Adding support for creation and completion dates – per the todo.txt format, the completion date occurs just after the priority and the creation date occurs after the completion date, i.e. x (priority) completion-date creation-date (see the illustrative example after this list). Parsing the completion and creation dates was easy using the last seen token. I had to subclass GtdTask into GtdTodoTxtTask since we need support for setting the creation date. As the tasks are stored as text, the creation date needs to be cached and set at first load. This was done by subclassing GtdTask and adding a setter for the completion date.
  3. Adding support for list background – I noticed that the list background color was not being saved into todo.txt, and hence the color was lost between exiting and starting To Do. After discussing with my mentor, we decided to add two hidden custom lines to todo.txt; here hidden just means a marker "h:1" at the start of the line, used to denote To Do-specific features. One custom line stores the list background color and the other stores the lists. Below is an example. For this task, I had to change the parser a bit. It was slightly confusing at first because I didn't have a clear understanding of function pointers (needed for the parser vtable implementation), but feaneron helped me with it.
      h:1 Lists @test @test2
      h:1 Colors test:#729fcf test2:#5c3566
  4. And now the most important task: implementing the subtask feature. Subtasks were supported in the initial stage of the plugin but were removed because of instability. It wasn't very clear to me how to go about it, but eventually we decided to use indentation to denote the parent-child relation between tasks described on consecutive lines. An indentation of 4 spaces means the task is a child of the previous task. The algorithm works by creating all the tasks and keeping them in a hashtable keyed by list. Once all the tasks are created, the subtask relations are determined using the indentation property.
  5. Finally, I documented the syntax of todo.txt so that new users find it easier to use the plugin. The documentation is available here.
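
To illustrate the syntax described above, a file produced by the plugin could look something like this (a made-up example, not taken from the documentation):

(A) 2018-08-01 Write the final GSoC report note:"don't forget the screenshots" @GSoC
    2018-08-02 Publish the report on the blog @GSoC
x (B) 2018-08-06 2018-07-30 Fix the parser for hidden lines @GSoC
h:1 Lists @GSoC
h:1 Colors GSoC:#729fcf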

With these changes todo.txt is much more stable and has all the features that To Do supports. Todoist had 3 major pieces of work to be done: fixing the network issue, auto-syncing tasks, and removing GOA and making the Todoist plugin handle accounts on its own.

  • The network connectivity issue was handled by adding a network monitor and listening for network changes. In case To Do cannot connect to Todoist, the provider is removed and hence no data loss happens due to user changes. The screenshot below shows that WiFi connectivity loss/gain is now handled by the Todoist plugin.
  • Peek 2018-08-06 23-39
  • The autosync patch is not merged yet but is very close to being merged into master. To reduce the number of sync requests to Todoist, we manage the intervals at which to sync based on window focus. When the user focuses on Todoist for the first time, a sync request is generated immediately and the timeout is set to 30 seconds. But if the focus is not on To Do, the timeout is set to 10 minutes.
  • The final piece of work is making the Todoist plugin handle its own accounts by using the keyring to store tokens. This work is still in progress and unfortunately I wasn't able to get it merged. I have added patches for the changes to the preferences panel and for the keyring implementation, but I still have to integrate both of these changes and also make modifications to the plugins and providers. I will continue working on this and finish the implementation.

Finally, I would really like to thank my mentor Feaneron for guiding me and helping me whenever I was stuck, reviewing my code which was filled with code style errors and being patient enough to keep reminding me about my mistakes. I would also like to thank GNOME for giving me this great opportunity.

Talking at GUADEC 2018 in Almería, Spain

I’ve more or less just returned from this year’s GUADEC in Almeria, Spain where I got to talk about assessing and improving the security of our apps. My main point was to make people use ASan, which I think Michael liked ;) Secondarily, I wanted to raise awareness for the security sensitivity of some seemingly minor bugs and how the importance of getting fixes out to the user should outweigh blame shifting games.

I presented a three-staged approach to assess and improve the security of your app: Compilation time, Runtime, and Fuzzing. First, you use some hardening flags to compile your app. Then you can use amazing tools such as ASan or Valgrind. Finally, you can combine this with afl to find bugs in your code. Bonus points if you do that as part of your CI.

I encountered a few problems when going that route with Flatpak. For example, libasan.so is not in the Platform image, so you have to use an extension to have it loaded. It’s better than it used to be, though. I tried to compile loads of apps with ASan in the past and I needed to compile a custom GCC. And then you have to mind the circular dependencies, e.g. libmpfr is needed by GCC. If I then compile libmpfr with ASan, GCC stops working, because gcc itself is not linked against ASan. It seems silly to have those annoyances in the stack. And it is. I hope that by making people play around with these technologies a bit more, we can get to a point where we do not have to catch those time-consuming bugs.

Panorama in Frigiliana

The organisation around the presentation was a bit confusing, as the projector didn’t work for the first ten minutes. And it was a bit unclear who was responsible for making it work. The audio in that room also used to be wonky. I hope it went alright after all.

GNOME Keysign 0.9.8 released

It’s been a while since my last post. This time, we have a lot of exciting news to share. For one, we have a new release of GNOME Keysign which fixes a few bugs here and there and introduces Bluetooth support. That is, you can transfer your key to your buddy via Bluetooth and don’t need a network connection. In fact, it is becoming more and more common for WiFi networks to block clients from talking to each other. A design goal is (or rather: was, see below) to not require an Internet connection, simply because it opens up a can of worms of potential failures and attacks. Now you can transfer the key even if your WiFi doesn’t let you communicate with the other machine. Of course, both of you have to have Bluetooth hardware and have it enabled.

The other exciting news is the app being on Flathub. Now it’s easier than ever to install the app. Simply go to Flathub and install it from there. This is a big step towards getting the app into users’ hands. And the sandbox makes the app a bit more trustworthy, I hope.


flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gnome.Keysign

The future brings cool changes. We already have patches lined up that bring an Internet transport to the app. Yeah, that’s contrary to what I’ve just said a few paragraphs above. And it does cause some issues in the UI, because we do not necessarily want the user to use the Internet if the local transport just works. But that “if” is unfortunately getting bigger and bigger. So I’m happy to have a mix of transports now. I’m wondering what the best way is to expose that information to the user, though. Do we add a button for the potentially privacy-invading act of connecting to the Internet? If we do, then why do we not offer buttons for the other transports like Bluetooth or the local network?

Anyway, stay tuned for future updates.

Add a message context menu for Fractal

Fractal is a Matrix client for GNOME and is written in Rust. Matrix is an open network for secure, decentralized communication.

As I promised in the previous article, I’m going to talk about my implementation of the message context menu. My work started from this issue, which asked to add a “right-click” menu to interact with the messages. Pressing the secondary button makes a popover appear with a menu from which you can:

  • Reply to a message. Or, more exactly, insert a quote of the message at the beginning of the message input (there is a new reply feature in Matrix, but this is not about that).
  • Copy the body of a message.
  • View the JSON source of a message. This may be removed in the future as it is mainly for debugging purposes.
  • Request the redaction of a message. Please note that messages cannot be deleted on Matrix homeservers, as all the events of a room contain important information about its structure (more information here). However, they can be “redacted”: all the information that is not essential to the room structure (so not the event ID, the sender ID, the date, etc.) can be removed from the event (like its body).

Here was the original design for this menu:


The message menu popover

First implementation

I will first talk about how I implemented the popover for the context menu. I used Glade to design the popover menu and I added a “View Source” button to it (see this commit). Then I had to figure out how to make the menu pop up.

The room history is made with a GtkListBox and each message is a GtkListBoxRow. Inside this GtkListBoxRow, there are various enclosed GtkBoxes that compose the layout of the message, with the sender’s avatar, their display name, the date the message was sent and the text of the message (or the image, when it is an image). Here is what it looks like:

[Screenshot: the layout of a message in the room history]

What I wanted to do first was to make the menu pop up right above the GtkListBoxRow. My first attempt was to individually connect the popup function to the “button-press-event” signal (and only when it was the secondary click) of each widget composing the message, but there were two major problems with this method:

  • You had to click exactly on the GtkEventBox enclosing the user’s display name or on one of the GtkLabels composing the message body to have the popover actually appear. The avatar, the date widget and the GtkBoxes enclosing the message’s elements weren’t receiving the “button-press-event” signal. I couldn’t figure out why.
  • The popover was appearing right above the widget which was receiving the “button-press-event” signal, not above the GtkListBoxRow of the message.

As a workaround, instead of what was done in the first place, I enclosed the GtkBoxes of the message layout in a GtkEventBox and connected the “button-press-event” signal to it. With that, I got the expected result. See this commit for more details.
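
Roughly, the shape of that workaround in gtk-rs looks like the following sketch. This is not Fractal’s actual code: the function and widget names are made up, the method names follow the 2018-era gtk-rs bindings, and Popover::popup needs GTK 3.22+.

use gtk::prelude::*;

// Sketch: wrap the message layout in a GtkEventBox and pop a context
// menu up on secondary click. `message_box` stands for the GtkBox that
// holds the avatar, display name, date and body (hypothetical name).
fn attach_context_menu(message_box: &gtk::Box) -> gtk::EventBox {
    let event_box = gtk::EventBox::new();
    event_box.add(message_box);

    let popover = gtk::Popover::new(Some(&event_box));
    // ... pack the buttons from the Glade menu definition into `popover` ...

    event_box.connect_button_press_event(move |_widget, event| {
        // Button 3 is the conventional secondary (right) mouse button.
        if event.get_button() == 3 {
            popover.popup();
        }
        gtk::Inhibit(false)
    });

    event_box
}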

Improvements

I was asked to make the popover appear right where the pointer is (and positioned downwards by default) instead of at the top of the GtkEventBox. I had some difficulties with it, but I managed to do it in this MR and in this one.

The “View Source” dialog

First implementation

In order to implement this functionality, I first added a source field to the Message structure. And I used the function serde_json::to_string_pretty in order to have a well-formatted JSON source for the messages. More details in this commit.

Then I made the interface of the GtkMessageDialog for displaying the message’s source (see it here), and I simply connected the “View Source” button to the dialog, making it appear and updating its content (in this commit).

Improvements

There was a problem with the “View Source” dialog: when opening the source of a very long message, the GtkMessageDialog becomes very large. It would also be good to have JSON syntax highlighting for the message source. So I moved the place where the code is shown to a GtkSourceView wrapped in a GtkScrolledWindow and activated JSON syntax highlighting. See this MR for more details.

I also modified the message source view in this MR to be a modal window instead of a dialog. Finally, I made some visual fixes to it in order to have better JSON syntax highlighting: the “classic” style scheme was making the text pink, and that was the only color in the highlighting. So I changed the style scheme to “kate”, which has proper JSON syntax highlighting (have a look at this MR for more details).

The “Copy Text” action

I simply connected the “Copy Text” button to a function that copies the content of the message’s body into the clipboard (the details are here).

The “Reply” action

I connected the “Reply” button to a function that inserts the message’s body at the beginning of the message input; it also adds “> ” at the beginning of each line and ensures that there are exactly two line breaks at the end of this body (more details here).
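
The quoting itself is plain string handling; here is a minimal sketch of the idea (a hypothetical helper, not Fractal’s actual function):

// Sketch: prefix every line of the message body with "> " and make sure
// the result ends with exactly two line breaks, ready to be inserted at
// the beginning of the message input.
fn quote_for_reply(body: &str) -> String {
    let mut quoted: String = body
        .lines()
        .map(|line| format!("> {}\n", line))
        .collect();
    // Trim any trailing newlines, then append exactly two.
    while quoted.ends_with('\n') {
        quoted.pop();
    }
    quoted.push_str("\n\n");
    quoted
}

For example, quote_for_reply("hi\nthere") yields "> hi\n> there\n\n".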

The “Delete Message” action

To implement the redaction of messages, I had to add a command to the backend so that it can request the redaction of messages from homeservers. The redaction can be done with a PUT HTTP request to the server with this path at the end of the URL: /_matrix/client/r0/rooms/{roomId}/redact/{eventId}/{txnId} (where roomId and eventId identify the event whose redaction is being requested and txnId is the transaction ID, a randomly generated string, more details here). You can see the commit which implements this command here.
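
Building the request path is then just string formatting; a rough sketch (not the backend’s actual code: the real transaction ID is randomly generated, while here a timestamp stands in to keep the example dependency-free, and the IDs would still need URL-escaping):

use std::time::{SystemTime, UNIX_EPOCH};

// Sketch: assemble the redaction endpoint path for a given room and event.
fn redaction_path(room_id: &str, event_id: &str) -> String {
    // Stand-in transaction ID; the real code uses a random string.
    let txn_id = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();
    format!(
        "/_matrix/client/r0/rooms/{}/redact/{}/{}",
        room_id, event_id, txn_id
    )
}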

Then I’ve connected the “Delete Message” button to a function that calls the backend command I’ve previously introduced (more details here).

Final result

This is what the popover finally looks like:

[Screenshot: the message context menu popover]

And for the message source view:

[Screenshot: the message source view]

Vala+GDA: An experiment

I’m working on GNOME Data Access, now on its GTK+ library, especially on its Browser, a GTK+ application that is more a demo than a real database access tool.

GDA’s Browser showcases notable features of GDA’s non-GUI library. For example, support for creating a single connection binding two or more connections to different databases and from different providers (SQLite, PostgreSQL, MySQL), so you can write a single query and get a result table combining data from different sources. More details in another post.

The Experiment

Browser is getting back to life, but in the process I found crashes here and there. Fixing them requires walking through the code, and I found lots of opportunities to port it to newly available techniques for defining GObject/GInterface APIs. Doing that work may be too much for a single person.

So, what about using Vala to produce C code faster in order to rewrite existing objects or write new ones?

This experiment will be for the 7.0 development cycle. For the next 6.0 release, I would like to fix bugs as far as possible.

This experiment may help to answer another question: is it possible to port Vala code to C in order to be accepted by upstream projects, like moving GXml’s DOM4 API to GLib?

Please welcome Lenovo to the LVFS

I’d like to formally welcome Lenovo to the LVFS. For the last few months Peter Jones and I have been working with partners of Lenovo and the ThinkPad, ThinkStation and ThinkCentre groups inside Lenovo to get automatic firmware updates working across a huge number of different models of hardware.

Obviously, this is a big deal. Tens of thousands of people are likely to be offered a firmware update in the next few weeks, and hundreds of thousands over the next few months. Understandably we’re not just flipping a switch and opening the floodgates, so if you’ve not seen anything appear in fwupdmgr update or in GNOME Software don’t panic. Over the next couple of weeks we’ll be moving a lot of different models from the various testing and embargoed remotes to the stable remote, and so the list of supported hardware will grow. That said, we’ll only be supporting UEFI hardware produced fairly recently, so there’s no point looking for updates on your beloved T61. I also can’t comment on what other Lenovo branded hardware is going to be supported in the future as I myself don’t know.

Bringing Lenovo to the LVFS has been a lot of work. It needed changes to the low level fwupdate library, fwupd, and even the LVFS admin portal itself for various vendor-defined reasons. We’ve been working in semi-secret for a long time, and I’m sure it’s been frustrating to all involved not being able to speak openly about the grand plan. I do think Lenovo should be applauded for the work done so far due to the enormity of the task, rather than chastised about coming to the party a little late. If anyone from HP is reading this, you’re now officially late.

We’re still debugging a few remaining issues, and also working on making the update metadata better quality, so please don’t judge Lenovo (or me!) too harshly if there are initial niggles with the update process. Updating the firmware is slightly odd in that it sometimes needs to reboot a few times with some scary-sounding beeps, and on some hardware the first UEFI update you do might look less than beautiful. If you want to do the firmware update on Lenovo hardware, you’ll have a lot more success with newer versions of fwupd and fwupdate, although we should do a fairly good job of not offering the update if it’s not going to work. All our testing has been done with a fully updated Fedora 28 workstation. It of course works with SecureBoot turned on, but if you’ve enabled the BootOrder lock manually you’ll need to turn that off first.

I’d like to personally thank all the Lenovo engineers and managers I’ve worked with over the last few months. All my time has been sponsored by Red Hat, and they rightfully deserve love too.

August 05, 2018

The CPU (Consuming Power Unlimited)

The CPU forms the core of any modern computing machine. To be able to perform operations at clocks as high as 4.5 GHz, CPUs need enormous power. In the electronics world, the (dynamic) power consumed by a component is estimated as:

Power = Capacitance * Voltage² * Frequency

As we can observe, an increase in clock speed directly translates to increased power draw. An Ivy Bridge i7 die with a 177 mm² area, dissipating 100 W, implies a power density of 565 kW/m², while the power density of the sun’s radiation at the surface of the earth is approximately 1.4 kW/m² (which is why overclocking burns the chip). Modern CPUs can consume more than 200 W of power at peak performance, and often represent the biggest drain on your laptop’s battery, followed by the GPU and then the display.

To save power, manufacturers developed the concept of P-states and C-states, which basically shut down some parts of the CPU according to the available load. Any modern CPU (assuming recent manufacturing process <45 nm) should consume less than 10 W at idle state. An interesting fact to note here would be the power consumption of fans running to keep the system cool. A single fan can eat up to 5 W, and a group (common use case) together may consume as much as 20 Watts, which can overshadow the motherboard+RAM+CPU combined.

The users have indirect influence over the CPU’s power conditions and battery use through their selection of power settings. The TDP numbers associated with a chip imply the maximum power according to which the cooling systems need to be designed, not the actual power in day-to-day scenarios. Any reports on power consumption which give statements like “the processor’s power consumption is 35 watts” are both false and imprecise, because the actual number is far more dynamic and dependent on multiple factors.

One might assume that a CPU-heavy application should increase only CPU power usage, but the motherboard also has to supply the data at higher rates, which translates to increased disk I/O. This implies that the motherboard, buses, RAM and data disks all consume more power to deliver the higher data throughput. Since these subsystems are intertwined and hardware manufacturers rarely provide actual power consumption numbers, our best bet to estimate the individual power draw is to develop regression models to predict the numbers. This is precisely what is done by PowerTop.
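
PowerTop’s models are more involved, of course, but the core idea of fitting a model from measured samples can be sketched in a few lines (a toy, single-predictor least-squares fit, not PowerTop’s actual code):

// Toy sketch of the idea behind such regression models: fit
// power ≈ base + slope * utilisation from measured (utilisation, watts)
// samples using ordinary least squares. Assumes the utilisation values
// are not all identical.
fn fit_linear(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let (sum_x, sum_y): (f64, f64) = samples
        .iter()
        .fold((0.0, 0.0), |(sx, sy), &(x, y)| (sx + x, sy + y));
    let mean_x = sum_x / n;
    let mean_y = sum_y / n;
    let mut cov = 0.0;
    let mut var = 0.0;
    for &(x, y) in samples {
        cov += (x - mean_x) * (y - mean_y);
        var += (x - mean_x) * (x - mean_x);
    }
    let slope = cov / var;              // watts per unit of utilisation
    let base = mean_y - slope * mean_x; // estimated baseline (idle) power
    (base, slope)
}

Given such samples, the fitted baseline and slope let you attribute an estimated share of the total draw to an individual load.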

 

August 04, 2018

Beast 0.12.0-beta.1

Long time no see. It’s been a while since the last Beast release, mainly because we have a lot of code migrations going on, some of which caused severe regressions. Here are the tarball and Debian package: https://beast.testbit.org/pub/testing/beast-0.12.0-beta.1.tar.xz https://beast.testbit.org/pub…

UI polishing and auto completion

During my Google Summer of Code project I am implementing message search for Dino, an XMPP client focusing on ease of use and security.

Jumping to results

For each search hit in the results, three messages are (partially) displayed: The actual hit and the messages before and after that. The hit is clickable and clicking it will jump to the hit in the conversation history. The clicked message is highlighted with a CSS animation by fading a background color in and out.

[Screenshot: jumping to a search result in the conversation history]

Empty placeholders

I added placeholders to clarify the states where no results are shown because nothing was searched yet and where no results are shown because there were no matching results. Following the GNOME HIG, the placeholders contain an icon, a state description and suggestions on how to proceed.

[Screenshot: the empty-state placeholders for the search]

Minor change of plans

In my UI mockups I planned to collapse the search sidebar into only displaying the search entry after a hit was clicked. This was supposed to let the user know that a search is still active and that he/she can resume the search, without requiring much screen space.

However, this introduced more states to the search feature and thus leaves more room for confusion. Also, reopening the search from the collapsed state would need a click/shortcut, while just completely reopening the search would require the same amount of interaction. Thus, the collapsed state does not save the user any interaction steps and might be harder to understand.

Instead, the search text is simply not removed from the search entry when clicking on a result. When the user desires to resume the search to explore other results, he/she can open the search again and will find the old search results. The text is marked, thus simply typing over it is possible. The search text is still reset when changing conversations.

Bug hunting

I had to hunt down and fix some displaying issues in the history search and other bugs I introduced while refactoring parts of the conversation view structure.

During the final days

I have been working on user name auto-completion and on nicely displaying filters. There is some final work and testing to be done on them and then I can open a PR!

Flatpak portal experiments

One of the signs that a piece of software is reaching a mature state is its ability to serve  use cases that nobody had anticipated when it was started. I’ve recently had this experience with Flatpak.

We have been discussing some possible new directions for the GTK+ file chooser. And it occurred to me that it might be convenient to use the file chooser portal as a way to experiment with different file choosers without having to change either GTK+ itself or the applications.

To verify this idea, I wrote a quick portal implementation that uses the venerable GTK+ 2 file chooser.

Here is Corebird (a GTK+ 3 application) using the GTK+ 2 file chooser to select an image.

Logging from Rust in librsvg

Over in this issue we are discussing how to add debug logging for librsvg.

A popular way to add logging to Rust code is to use the log crate. This lets you sprinkle simple messages in your code:

error!("something bad happened: {}", foo);
debug!("a debug message");

However, the log crate is just a facade, and by default the messages do not get emitted anywhere. The calling code has to set up a logger. Crates like env_logger let one set up a logger, during program initialization, that gets configured through an environment variable.

And this is a problem for librsvg: we are not the program's initialization! Librsvg is a library; it doesn't have a main() function. And since most of the calling code is not Rust, we can't assume that they can call code that can initialize the logging framework.

Why not use glib's logging stuff?

Currently this is a bit clunky to use from Rust, since glib's structured logging functions are not bound yet in glib-rs. Maybe it would be good to bind them and get this over with.

What user experience do we want?

In the past, what has worked well for me to do logging from libraries is to allow the user to set an environment variable to control the logging, or to drop a log configuration file in their $HOME. The former works well when the user is in control of running the program that will print the logs; the latter is useful when the user is not directly in control, like for gnome-shell, which gets launched through a lot of magic during session startup.

For librsvg, it's probably enough to just use an environment variable. Set RSVG_LOG=parse_errors, run your program, and get useful output. Push button, receive bacon.
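
A minimal sketch of that environment-variable approach (an illustration of the idea only, not librsvg's eventual implementation; a real version would read the variable once at handle creation rather than on every call):

use std::env;

// Sketch: decide at runtime, from an environment variable, which log
// categories are enabled. RSVG_LOG=parse_errors would enable only
// parse error logging.
#[derive(Debug, PartialEq)]
enum LogCategory {
    ParseErrors,
}

fn enabled_categories() -> Vec<LogCategory> {
    match env::var("RSVG_LOG") {
        Ok(value) => value
            .split(',')
            .filter_map(|s| match s.trim() {
                "parse_errors" => Some(LogCategory::ParseErrors),
                _ => None,
            })
            .collect(),
        Err(_) => Vec::new(),
    }
}

fn log_parse_error(msg: &str) {
    if enabled_categories().contains(&LogCategory::ParseErrors) {
        eprintln!("librsvg parse error: {}", msg);
    }
}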

Other options in Rust?

There is a slog crate which looks promising. Instead of using context-less macros which depend on a single global logger, it provides logging macros to which you pass a logger object.

For librsvg, this means that the basic RsvgHandle could create its own logger, based on an environment variable or whatever, and pass it around to all its child functions for when they need to log something.

Slog supports structured logging, and seems to have some fancy output modes. We'll see.

August 03, 2018

On Moving

Winds of Change. One of my favourite songs ever and one that comes to my mind now that me and my family are going through quite some important changes, once again. But let’s start from the beginning…

A few years ago, back in January 2013, my family and I moved to the UK as the result of my decision to leave Igalia after almost 7 years in the company to embark ourselves in the “adventure” of living abroad. This was an idea we had been thinking about for a while already at that time, and our situation back then suggested that it could be the right moment to try it out… so we did.

It was kind of a long process though: I first arrived alone in January to make sure I would have time to figure things out and find a permanent place for us to live in, and then my family joined me later in May, once everything was ready. Not great, if you ask me, to be living separated from your loved ones for 4 full months, not to mention the juggling my wife had to do during that time to combine her job with looking after the kids mostly on her own… but we managed to see each other every 2-3 weekends thanks to the London – Coruña direct flights in the meantime, so at least it was bearable from that point of view.

But despite those not so great (yet expected) beginnings, I have to say that these past 5+ years have been an incredible experience overall, and we don’t have a single regret about making the decision to move, maybe just a few minor and punctual things if I’m completely honest, but that’s about it. For instance, it’s been just beyond incredible and satisfying to see my kids develop their English skills “from zero to hero”, settle at their school, make new friends and, in one word, evolve during these past years. And that alone would have been a good reason to justify the move already, but it turns out we also have plenty of other reasons, as we all have evolved and enjoyed the ride quite a lot as well, made many new friends, got to know many new places, worked on different things… a truly enriching experience indeed!

In a way, I confess that this could easily be one of those things we’d probably have never done if we knew in advance of all the things we’d have to do and go through along the way, so I’m very grateful for that naive ignorance, since that’s probably how we found the courage, energy and time to do it. And looking backwards, it seems clear to me that it was the right time to do it.

But now it’s 2018 and, even though we had such a great time here both from personal and work-related perspectives, we have decided that it’s time for us to come back to Galicia (Spain), and try to continue our vital journey right from there, in our homeland.

And before you ask… no, this is not because of Brexit. I recognize that the result of the referendum has been a “contributing factor” (we surely didn’t think as much about returning to Spain before that 23 of June, that’s true), but there were more factors contributing to that decision, which somehow have aligned all together to tell us, very clearly, that Now It’s The Time…

For instance, we always knew that we would eventually move back for my wife to take over the family business, and also that we’d rather make the move in a way that it would be not too bad for our kids when it happened. And having a 6yo and a 9yo already it feels to us like now it’s the perfect time, since they’re already native English speakers (achievement unlocked!) and we believe that staying any longer would only make it harder for them, especially for my 9yo, because it’s never easy to leave your school, friends and place you call home behind when you’re a kid (and I know that very well, as I went through that painful experience precisely when I was 9).

Besides that, I’ve also recently decided to leave Endless after 4 years in the company and so it looks like, once again, moving back home would fit nicely with that work-related change, for several reasons. Now, I don’t want to enter into much detail on why exactly I decided to leave Endless, so I think I’ll summarize it as me needing a change and a rest after these past years working on Endless OS, which has been an equally awesome and intense experience as you can imagine. If anything, I’d just want to be clear on that contributing to such a meaningful project surrounded by such a team of great human beings, was an experience I couldn’t be happier and prouder about, so you can be certain it was not an easy decision to make.

Actually, quite the opposite: a pretty hard one I’d say… but a nice “side effect” of that decision, though, is that leaving at this precise moment would allow me to focus on the relocation in a more organized way as well as to spend some quality time with my family before leaving the UK. Besides, it will hopefully be also useful for us to have enough time, once in Spain, to re-organize our lives there, settle properly and even have some extra weeks of true holidays before the kids start school and we start working again in September.

Now, taking a few weeks off and moving back home is very nice and all that, but we still need to have jobs, and this is where our relocation gets extra interesting as it seems that we’re moving home in multiple ways at once…

For one, my wife will start taking over the family business with the help of her dad in her home town of Lalín (Pontevedra), where we plan to be living for the foreseeable future. This is the place where she grew up and where her family and many friends live, but also a place she hasn’t lived in for the last 15 years, so the fact that we’ll be relocating there is already quite a thing in the “moving back home” department for her…

Second, for my kids this will mean going back to having their relatives nearby once again as well as friends they only could see and play with during holidays until now, which I think it’s a very good thing for them. Of course, this doesn’t feel as much moving home for them as it does for us, since they obviously consider the UK their home for now, but our hope is that it will be ok in the medium-long term, even though it will likely be a bit challenging for them at the beginning.

Last, I’ll be moving back to work at Igalia after almost 6 years since I left which, as you might imagine, feels to me very much like “moving back home” too: I’ll be going back to working in a place I’ve always loved so much for multiple reasons, surrounded by people I know and who I consider friends already (I even would call some of them “best friends”) and with its foundations set on important principles and values that still matter very much to me, both from technical (e.g. Open Source, Free Software) and not so technical (e.g. flat structure, independence) points of view.

Those who know me better might very well think that I’ve never really moved on as I hinted in the title of the blog post I wrote years ago, and in some way that’s perhaps not entirely wrong, since it’s no secret I always kept in touch throughout these past years at many levels and that I always felt enormously proud of my time as an Igalian. Emmanuele even told me that I sometimes enter what he seems to call an “Igalia mode” when I speak of my past time in there, as if I was still there… Of course, I haven’t seen any formal evidence of such thing happening yet, but it certainly does sound like a possibility as it’s true I easily get carried away when Igalia comes to my mind, maybe as a mix of nostalgia, pride, good memories… those sort of things. I suppose he’s got a point after all…

So, I guess it’s only natural that I finally decided to apply again since, even though both the company and me have evolved quite a bit during these years, the core foundations and principles it’s based upon remain the same, and I still very much align with them. But applying was only one part, so I couldn’t finish this blog post without stating how grateful I am for having been granted this second opportunity to join Igalia once again because, being honest, more often than less I was worried on whether I would be “good enough” for the Igalia of 2018. And the truth is that I won’t know for real until I actually start working and stay in the company for a while, but knowing that both my former colleagues and newer Igalians who joined since I left trust me enough to join is all I need for now, and I couldn’t be more excited nor happier about it.

Anyway, this post is already too long and I think I’ve covered everything I wanted to mention On Moving (pun intended with my post from 2012, thanks Will Thompson for the idea!), so I think I’ll stop right here and re-focus on the latest bits related to the relocation before we effectively leave the UK for good, now that we finally left our rented house and put all our stuff in a removals van. After that, I expect a few days of crazy unpacking and bureaucracy to properly settle in Galicia and then hopefully a few weeks to rest and get our batteries recharged for our new adventure, starting soon in September (yet not too soon!).

As usual, we have no clue of how future will be, but we have a good feeling about this thing of moving back home in multiple ways, so I believe we’ll be fine as long as we stick together as a family as we always did so far.

But in any case, please wish us good luck. That’s always welcome! 🙂

mdl

Last month I wrote a blog post about the LMDB Cache database and my wish to use it in Fractal. To summarize, LMDB is a memory-mapped key-value database that persists the data to the filesystem. I want to use this in the Fractal desktop application to replace the current state storage system (we're using simple json files) and, as a side effect, we can use this storage system to share data between threads, because currently we're using a big struct AppOp shared with Arc<Mutex<AppOp>> and this causes some problems because we need to share and lock and update the state there.

The main goal is to define an app data model with smaller structs and store this using LMDB; then we can access the same data by querying the LMDB and we can update the app state by storing to the LMDB.

With this change we don't need to share these structs, we only need to query the LMDB to get the data and then work with that, and this should simplify our code. The other main benefit will be that we'll have this state in the filesystem by default, so when we open the app after closing it, we'll stay in the same state.

Take a look at the gtk TODO example app to see how to use mdl with signals in a real gtk app.

What is mdl

mdl is a data model library to share app state between threads and processes and to persist the data in the filesystem. It implements a simple way to store struct instances in an LMDB database, and also supports other backends like a BTreeMap.

I started to play with the LMDB Rust binding, writing some simple tests. After those simple tests, I decided to write a simple abstraction to hide the LMDB internals and to provide simple data storage, and to do that I created the mdl crate.

The idea is to be able to define your app model as simple Rust structs. LMDB is a key-value database, so every struct instance will have a unique key to store it in the cache.

The keys are stored in the cache in order, so we can use some techniques to store related objects and to retrieve all objects of a kind; we only need to build keys correctly, following a scheme. For example, for Fractal we can store rooms, members and messages like this:

  • rooms with key "room:roomid", to store all the room information, title, topic, icon, unread msgs, etc.
  • members with key "member:roomid:userid", to store all member information.
  • messages with key "msg:roomid:msgid" to store room messages.

Following this key assignment we can iterate over all rooms by querying all objects whose keys start with "room", and we can get all members and all messages of a room.

This has some inconveniences, because we can't directly query a message by ID if we don't know the roomid. If we need that kind of query, we need to think about another key assignment or maybe we should duplicate data. Key-value stores are simple databases, so we don't have the power of relational databases.

Internals

LMDB is fast and efficient because it's memory-mapped, so using this cache won't add a lot of overhead; but to make it simple to use I had to add some overhead, so mdl is easy by default and can be tuned to be really fast.

This crate has three main modules with traits to implement:

  • model: This contains the Model trait that should be implemented by every struct that we want to make cacheable.
  • store: This contains the Store trait that's implemented by all the cache systems.
  • signal: This contains the Signaler trait and two structs that allow us to emit/subscribe to "key" signals.

And two more modules that implement the current two cache systems:

  • cache: LMDB cache that implements the Store trait.
  • bcache: BTreeMap cache that implements the Store trait. This is a good example of another cache system that can be used; it doesn't persist to the filesystem.

So we have two main concepts here, the Store and the Model. The model is the plain data and the store is the container of data. We'll be able to add models to the store or to query the store to get stored models. We store our models as key-value pairs where the key is a String and the value is a Vec<u8>, so every model should be serializable.

This serialization is the biggest overhead added. We need to do this because we need to be able to store the data in the LMDB database. Every request will create a copy of the object in the database, so we're not using the same data. This could be tuned to use pointers to the real data, but to do that we'd need to use unsafe code, and I think the performance we'd gain doesn't justify the complexity it would add.

By default, the Model trait has two methods fromb and tob to serialize and deserialize using bincode, so any struct that implements the Model trait and doesn't reimplement these two methods should implement Serialize and Deserialize from serde.

The signal system is an addition to be able to register callbacks for key modifications in the store, so we can do something when a new object is added, modified or deleted from the store. The signaler is optional and we should use it in an explicit way.

How to use it

First of all, you should define your data model, the struct that you want to be able to store in the database:

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}

In this example we'll define a struct called A with two attributes, p1, a String, and p2, a u32. We derive Serialize and Deserialize because we're using the default fromb and tob from the Model trait.

Then we need to implement the Model trait:

impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

We only reimplement the key method to build a key for every instance of A. In this case our key will be the String followed by the number, so for example if we've something like let a = A { p1: "myk", p2: 42 }; the key will be "myk:42".

Then, to use this we need to have a Store; in this example, we'll use the LMDB store, which is the struct Cache:

    // initializing the cache. This str will be the fs persistence path
    let db = "/tmp/mydb.lmdb";
    let cache = Cache::new(db).unwrap();

We pass the filesystem path where we want to persist the cache as the first argument; in this example we'll persist to "/tmp/mydb.lmdb". When we run the program for the first time a directory will be created there. The next time, that cache will be used with the information from the previous execution.

Then, with this cache object we can instantiate an A object and store it in the cache:

    // create a new *object* and storing in the cache
    let a = A{ p1: "hello".to_string(), p2: 42 };
    let r = a.store(&cache);
    assert!(r.is_ok());

The store method will serialize the object and store a copy of that in the cache.

After the store, we can query for this object from another process, using the same LMDB path, or from the same process using the cache:

    // querying the cache by key and getting a new *instance*
    let a1: A = A::get(&cache, "hello:42").unwrap();
    assert_eq!(a1.p1, a.p1);
    assert_eq!(a1.p2, a.p2);

We'll get a copy of the original one.

This is the full example:

extern crate mdl;
#[macro_use]
extern crate serde_derive;

use mdl::Cache;
use mdl::Model;
use mdl::Continue;

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}
impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

fn main() {
    // initializing the cache. This str will be the fs persistence path
    let db = "/tmp/mydb.lmdb";
    let cache = Cache::new(db).unwrap();

    // create a new *object* and storing in the cache
    let a = A{ p1: "hello".to_string(), p2: 42 };
    let r = a.store(&cache);
    assert!(r.is_ok());

    // querying the cache by key and getting a new *instance*
    let a1: A = A::get(&cache, "hello:42").unwrap();
    assert_eq!(a1.p1, a.p1);
    assert_eq!(a1.p2, a.p2);
}

Iterations

When we store objects with the same key prefix, we can iterate over all of these objects even if we don't know the full key of every object.

Currently there are two ways to iterate over all objects with the same prefix in a Store:

  • all

This is the simpler way: calling the all method we'll receive a Vec<T>, so we have all the objects in a vector.

    let hellows: Vec<A> = A::all(&cache, "hello").unwrap();
    for h in hellows {
        println!("hellow: {}", h.p2);
    }

This has a little problem: if we have a lot of objects, this will use a lot of memory for the vector and we'll be iterating over all objects twice. To solve these problems, the iter method was created.

  • iter

The iter method provides a way to call a closure for every object with this prefix in the key. This closure should return a Continue(bool) that indicates whether we should continue iterating or stop the iteration here.

    A::iter(&cache, "hello", |h| {
        println!("hellow: {}", h.p2);
        Continue(true)
    }).unwrap();

Using the Continue we can avoid iterating over all the objects, for example if we're searching for one concrete object.

We're copying every object, but the iter method is better than all because, if we don't copy or move the object out of the closure, this copy only lives in the closure scope, so we'll use less memory and we only iterate once. If we use all, we'll iterate over all objects with that prefix to build the vector, so if we then iterate over that vector another time this will cost more than the iter version.

Signal system

As I said before, the signal system provides us with a way to register callbacks for key modifications. The signal system is independent of the Model and Store and can be used on its own:

extern crate mdl;

use mdl::Signaler;
use mdl::SignalerAsync;
use mdl::SigType;
use std::sync::{Arc, Mutex};
use std::{thread, time}; 

fn main() {
    let sig = SignalerAsync::new();
    sig.signal_loop();
    let counter = Arc::new(Mutex::new(0));

    // one thread for receive signals
    let sig1 = sig.clone();
    let c1 = counter.clone();
    let t1: thread::JoinHandle<_> =
    thread::spawn(move || {
        let _ = sig1.subscribe("signal", Box::new(move |_sig| {
            *c1.lock().unwrap() += 1;
        }));
    });

    // waiting for threads to finish
    t1.join().unwrap();

    // one thread for emit signals
    let sig2 = sig.clone();
    let t2: thread::JoinHandle<_> =
    thread::spawn(move || {
        sig2.emit(SigType::Update, "signal").unwrap();
        sig2.emit(SigType::Update, "signal:2").unwrap();
        sig2.emit(SigType::Update, "signal:2:3").unwrap();
    });

    // waiting for threads to finish
    t2.join().unwrap();

    let ten_millis = time::Duration::from_millis(10);
    thread::sleep(ten_millis);

    assert_eq!(*counter.lock().unwrap(), 3);
}

In this example we're creating a SignalerAsync that can emit signals, and we can subscribe a callback to any signal. The sig.signal_loop(); call starts the signal loop thread, which waits for signals and calls any subscribed callback when a signal comes.

        let _ = sig1.subscribe("signal", Box::new(move |_sig| {
            *c1.lock().unwrap() += 1;
        }));

We subscribe a callback to the signaler. The signaler can be cloned and the list of callbacks will be the same: if you emit a signal on one clone and subscribe on another clone, that signal will trigger the callback.

Then we're emitting some signals:

        sig2.emit(SigType::Update, "signal").unwrap();
        sig2.emit(SigType::Update, "signal:2").unwrap();
        sig2.emit(SigType::Update, "signal:2:3").unwrap();

All three of these signals will trigger the previous callback, because the subscription works as a "signal starts with" match. This allows us to subscribe to all new room message insertions, if we follow the previously described keys, by subscribing to "msg:roomid"; and if we want a callback to be called only when one message is updated, we can subscribe to "msg:roomid:msgid" and this callback won't be triggered for other messages.

The callback should be a Box<Fn(Signal)> where Signal is the following struct:

#[derive(Clone, Debug)]
pub enum SigType {
    Update,
    Delete,
}

#[derive(Clone, Debug)]
pub struct Signal {
    pub type_: SigType,
    pub name: String,
}

Currently only Update and Delete signal types are supported.

Signaler in gtk main loop

All the UI operations in a gtk app should be executed in the gtk main loop, so we can't use the SignalerAsync in a gtk app: this signaler creates one thread for the callbacks, so all callbacks should implement the Send trait, and if we want to modify, for example, a gtk::Label in a callback, that callback won't implement Send because gtk::Label can't be sent between threads safely.

To solve this problem, I've added the SignalerSync. It doesn't launch any threads and all operations run in the same thread, even the callbacks. This is a problem if one of your callbacks blocks the thread, because this will lock your interface in a gtk app, so any callback in the sync signaler should be non-blocking.

This signaler should be used in a different way: we should call the signal_loop_sync method from time to time, which will check for new signals and trigger any subscribed callbacks. This signaler doesn't have a signal_loop because we should do the loop in our own thread.

This is an example of how to run the signaler loop inside a gtk app:

    let sig = SignalerSync::new();

    let sig1 = sig.clone();
    gtk::timeout_add(50, move || {
        gtk::Continue(sig1.signal_loop_sync())
    });

    // We can subscribe callbacks using the sig here

In this example code we're registering a timeout callback: every 50 ms this closure will be called from the gtk main thread, and signal_loop_sync will check for signals and call the needed callbacks.

This method returns a bool that's false when the signaler stops. You can stop the signaler by calling the stop method.

Point of extension

I've tried to make this crate generic, so it can be extended in the future to provide other kinds of caches that can be used by changing little code in the apps that use mdl.

This is the main reason to use traits to implement the store, so the first point of extension is to add more cache systems. We currently have two, the LMDB one and the BTreeMap one, but it would be easy to add more key-value storages, like memcached, UnQLite, MongoDB, Redis, CouchDB, etc.

The signaler is really simple, so maybe we can start to think about new signalers that use Futures and other kinds of callback registration.

As I said before, mdl makes a copy of the data on every write and on every read, so it could be interesting to explore the performance implications of these copies and try to find methods to reduce this overhead.

Pinpoint Flatpak

A while back I made a Pinpoint COPR repo in order to get access to this marvelous tool in Fedora. Well, now I work for Endless and the only way you can run apps on our system is in a Flatpak container. So I whipped up a quick Pinpoint Flatpak in order to give a talk at GUADEC this year.

Flatpak is actually very helpful here, since the libraries required are rapidly becoming antique, and carrying them around on your base system is gross as well as somewhat insecure. There isn’t a GUI to create or open files, and it’s somewhat awkward to use if you’re not already an expert, so I didn’t submit the app to Flathub; however, you can easily download and install the bundle locally. I hope the two people for whom this is useful find it as useful as I found it to make.

Pinpoint COPR Repo

A few years ago I worked with a number of my former colleagues to create Pinpoint, a quick hack that made it easier for us to give presentations that didn’t suck. Now that I’m at Collabora I have a couple of presentations to make and using pinpoint was a natural choice. I’ve been updating our internal templates to use our shiny new brand and wanted to use some newer features that weren’t available in Fedora’s version of pinpoint.

There hasn’t been an official release for a little while and a few useful patches have built up on the master branch. I’ve packaged a git snapshot and created a COPR repo for Fedora so you can use these snapshots yourself. They’re good.

Porting Coreboot to the 51NB X210

The X210 is a strange machine. A set of Chinese enthusiasts developed a series of motherboards that slot into old Thinkpad chassis, providing significantly more up-to-date hardware. The X210 has a Kabylake CPU, supports up to 32GB of RAM, has an NVMe-capable M.2 slot and has eDP support - and it fits into an X200 or X201 chassis, which means it also comes with a classic Thinkpad keyboard. We ordered some from a Facebook page (a process that involved wiring a large chunk of money to a Chinese bank which wasn't at all stressful), and a couple of weeks later they arrived. Once I'd put mine together I had a quad-core i7-8550U with 16GB of RAM, a 512GB NVMe drive and a 1920x1200 display. I'd transplanted over the drive from my XPS13, so I was running stock Fedora for most of this development process.

The other fun thing about it is that none of the firmware flashing protection is enabled, including Intel Boot Guard. This means running a custom firmware image is possible, and what would a ridiculous custom Thinkpad be without ridiculous custom firmware? A shadow of its potential, that's what. So, I read the Coreboot[1] motherboard porting guide and set to.

My life was made a great deal easier by the existence of a port for the Purism Librem 13v2. This is a Skylake system, and Skylake and Kabylake are very similar platforms. So, the first job was to just copy that into a new directory and start from there. The first step was to update the Inteltool utility so it understood the chipset - this commit shows what was necessary there. It's mostly just adding new PCI IDs, but it also needed some adjustment to account for the GPIO allocation being different on mobile parts when compared to desktop ones. One thing that bit me - Inteltool relies on being able to mmap() arbitrary bits of physical address space, and the kernel doesn't allow that if CONFIG_STRICT_DEVMEM is enabled. I had to disable that first.

The GPIO pins got dropped into gpio.h. I ended up just pushing the raw values into there rather than parsing them back into more semantically meaningful definitions, partly because I don't understand what these things do that well and largely because I'm lazy. Once that was done, on to the next step.

High Definition Audio devices (or HDA) have a standard interface, but the codecs attached to the HDA device vary - both in terms of their own configuration, and in terms of dealing with how the board designer may have laid things out. Thankfully the existing configuration could be copied from /sys/class/sound/card0/hwC0D0/init_pin_configs[2] and then hda_verb.h could be updated.

One more piece of hardware-specific configuration is the Video BIOS Table, or VBT. This contains information used by the graphics drivers (firmware or OS-level) to configure the display correctly, and again is somewhat system-specific. This can be grabbed from /sys/kernel/debug/dri/0/i915_vbt.

A lot of the remaining platform-specific configuration has been split out into board-specific config files, and this also needed updating. Most stuff was the same, but I confirmed the GPE and genx_dec register values by using Inteltool to dump them from the vendor system and copy them over. lspci -t gave me the bus topology and told me which PCIe root ports were in use, and lsusb -t gave me port numbers for USB. That let me update the root port and USB tables.

The final code update required was to tell the OS how to communicate with the embedded controller. Various ACPI functions are actually handled by this autonomous device, but it's still necessary for the OS to know how to obtain information from it. This involves writing some ACPI code, but that's largely a matter of cutting and pasting from the vendor firmware - the EC layout depends on the EC firmware rather than the system firmware, and we weren't planning on changing the EC firmware in any way. Using ifdtool told me that the vendor firmware image wasn't using the EC region of the flash, so my assumption was that the EC had its own firmware stored somewhere else. I was ready to flash.

The first attempt involved isis' machine, using their Beaglebone Black as a flashing device - the lack of protection in the firmware meant we ought to be able to get away with using flashrom directly on the host SPI controller, but using an external flasher meant we stood a better chance of being able to recover if something went wrong. We flashed, plugged in the power and… nothing. Literally. The power LED didn't turn on. The machine was very, very dead.

Things like managing battery charging and status indicators are up to the EC, and the complete absence of anything going on here meant that the EC wasn't running. The most likely reason for that was that the system flash did contain the EC's firmware even though the descriptor said it didn't, and now the system was very unhappy. Worse, the flash wouldn't speak to us any more - the power supply from the Beaglebone to the flash chip was sufficient to power up the EC, and the EC was then holding onto the SPI bus desperately trying to read its firmware. Bother. This was made rather more embarrassing because isis had explicitly raised concern about flashing an image that didn't contain any EC firmware, and now I'd killed their laptop.

After some digging I was able to find EC firmware for a related 51NB system, and looking at that gave me a bunch of strings that seemed reasonably identifiable. Looking at the original vendor ROM showed very similar code located at offset 0x00200000 into the image, so I added a small tool to inject the EC firmware (basing it on an existing tool that does something similar for the EC in some HP laptops). I now had an image that I was reasonably confident would get further, but we couldn't flash it. Next step seemed like it was going to involve desoldering the flash from the board, which is a colossal pain. Time to sleep on the problem.

The next morning we were able to borrow a Dediprog SPI flasher. These are much faster than doing SPI over GPIO lines, and also support running the flash at different voltages. At 3.5V the behaviour was the same as we'd seen the previous night - nothing. According to the datasheet, the flash required at least 2.7V to run, but flashrom listed 1.8V as the next lower voltage so we tried. And, amazingly, it worked - not reliably, but sufficiently. Our hypothesis is that the chip is marginally able to run at that voltage, but that the EC isn't - we were no longer powering the EC up, so we could communicate with the flash. After a couple of attempts we were able to write enough that we had EC firmware on there, at which point we could shift back to flashing at 3.5V because the EC was leaving the flash alone.

So, we flashed again. And, amazingly, we ended up staring at a UEFI shell prompt[3]. USB wasn't working, and nor was the onboard keyboard, but we had graphics and were executing actual firmware code. I was able to get USB working fairly quickly - it turns out that Linux numbers USB ports from 1 and the FSP numbers them from 0, and fixing that up gave us working USB. We were able to boot Linux! Except there were a whole bunch of errors complaining about EC timeouts, and also we only had half the RAM we should.

After some discussion on the Coreboot IRC channel, we figured out the RAM issue - the Librem13 only has one DIMM slot. The FSP expects to be given a set of i2c addresses to probe, one for each DIMM socket. It is then able to read back the DIMM configuration and configure the memory controller appropriately. Running i2cdetect against the system SMBus gave us a range of devices, including one at 0x50 and one at 0x52. The detected DIMM was at 0x50, which made 0x52 seem like a reasonable bet - and grepping the tree showed that several other systems used 0x52 as the address for their second socket. Adding that to the list of addresses and passing it to the FSP gave us all our RAM.

So, now we just had to deal with the EC. One thing we noticed was that if we flashed the vendor firmware, ran it, flashed Coreboot and then rebooted without cutting the power, the EC worked. This strongly suggested that there was some setup code happening in the vendor firmware that configured the EC appropriately, and if we duplicated that it would probably work. Unfortunately, figuring out what that code was was difficult. I ended up dumping the PCI device configuration for the vendor firmware and for Coreboot in case that would give us any clues, but the only thing that seemed relevant at all was that the LPC controller was configured to pass io ports 0x4e and 0x4f to the LPC bus with the vendor firmware, but not with Coreboot. Unfortunately the EC was supposed to be listening on 0x62 and 0x66, so this wasn't the problem.

I ended up solving this by using UEFITool to extract all the code from the vendor firmware, and then disassembled every object and grepped them for port io. x86 systems have two separate io buses - memory and port IO. Port IO is well suited to simple devices that don't need a lot of bandwidth, and the EC is definitely one of these - there's no way to talk to it other than using port IO, so any configuration was almost certainly happening that way. I found a whole bunch of stuff that touched the EC, but was clearly depending on it already having been enabled. I found a wide range of cases where port IO was being used for early PCI configuration. And, finally, I found some code that reconfigured the LPC bridge to route 0x4e and 0x4f to the LPC bus (explaining the configuration change I'd seen earlier), and then wrote a bunch of values to those addresses. I mimicked those, and suddenly the EC started responding.

It turns out that the writes that made this work weren't terribly magic. PCs used to have a SuperIO chip that provided most of the legacy port functionality, including the floppy drive controller and parallel and serial ports. Individual components (called logical devices, or LDNs) could be enabled and disabled using a sequence of writes that was fairly consistent between vendors. Someone on the Coreboot IRC channel recognised that the writes that enabled the EC were simply using that protocol to enable a series of LDNs, which apparently correspond to things like "Working EC" and "Working keyboard". And with that, we were done.
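
For the curious, the conventional logical-device enable sequence on the 0x4e/0x4f index/data pair looks roughly like the sketch below. It uses Rust's x86_64 crate for the port accesses, the vendor-specific magic that enters and leaves configuration mode is omitted, and register numbers 0x07 and 0x30 are just the usual SuperIO conventions rather than anything confirmed for the X210.

use x86_64::instructions::port::Port;

// Sketch of the conventional SuperIO "enable a logical device" sequence
// on the 0x4e (index) / 0x4f (data) pair. Only meaningful from firmware
// or ring 0, and without the vendor-specific config-mode entry/exit.
unsafe fn enable_ldn(ldn: u8) {
    let mut index: Port<u8> = Port::new(0x4e);
    let mut data: Port<u8> = Port::new(0x4f);

    index.write(0x07); // register 0x07 selects the logical device number
    data.write(ldn);
    index.write(0x30); // register 0x30 is the "activate" register
    data.write(0x01);  // writing 1 enables the selected logical device
}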

Coreboot doesn't currently have ACPI support for the latest Intel graphics chipsets, so right now my image doesn't have working backlight control. Backlight control also turned out to be interesting. Most modern Intel systems handle the backlight via registers in the GPU, but the X210 uses the embedded controller (possibly because it supports both LVDS and eDP panels). This means that adding a simple display stub is sufficient - all we have to do on a backlight set request is store the value in the EC, and it does the rest.

Other than that, everything seems to work (although there's probably a bunch of power management optimisation to do). I started this process knowing almost nothing about Coreboot, but thanks to the help of people on IRC I was able to get things working in about two days of work[4] and now have firmware that's about as custom as my laptop.

[1] Why not Libreboot? Because modern Intel SoCs haven't had their memory initialisation code reverse engineered, so the only way to boot them is to use the proprietary Intel Firmware Support Package.
[2] Card 0, device 0
[3] After a few false starts - it turns out that the initial memory training can take a surprisingly long time, and we kept giving up before that had happened
[4] Spread over 5 or so days of real time


August 02, 2018

On Flatpak updates

Maybe you remember times when updating your system was risky business – your web browser might crash or start to behave funny because the update pulled data files or fonts out from underneath the running process, leading to fireworks or, more likely, crashes.

Flatpak updates on the other hand are 100% safe. You can call

 flatpak update

and the running instances are not affected in any way. Flatpak keeps existing deployments around until the last user is gone. If you quit the application and restart it, you will get the updated version, though.

This is very nice, and works just fine. But maybe we can do even better?

Improving the system

It would be great if the system was aware of the running instances, and offered to restart them to take advantage of the new version that is now available. There is a good chance that GNOME Software will gain this feature before too long.

But for now, it does not have it.

Do it yourself

Many apps, in particular those that are not native to the Linux distro world, expect to update themselves, and we have had requests to enable this functionality in flatpak. We do think that updating software is a system responsibility that should be controlled by global policies and be under the user’s control, so we haven’t quite followed the request.

But Flatpak 1.0 does have an API that is useful in this context, the “Flatpak portal“. It has a Spawn method that allows applications to launch a process in a new sandbox.

Spawn (IN  ay    cwd_path,
       IN  aay   argv,
       IN  a{uh} fds,
       IN  a{ss} envs,
       IN  u     flags,
       IN  a{sv} options,
       OUT u     pid)

There are several use cases for this, from sandboxing thumbnailers (which create thumbnails for possibly untrusted content files) to sandboxing web browser tabs individually. The use case we are interested in here is restarting the latest version of the app itself.
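
As a rough sketch of what such a call can look like from C with GIO (the binary path and the --replace argument are placeholders; --replace is the app-side convention I describe below, not part of the portal API):

#include <gio/gio.h>

/* Sketch: ask the Flatpak portal to spawn a new sandboxed instance of
 * ourselves. Empty dictionaries are passed for fds, envs and options. */
static void
spawn_new_instance (void)
{
  g_autoptr(GDBusConnection) bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
  const char *argv[] = { "/app/bin/portal-test", "--replace", NULL };
  g_autoptr(GVariant) reply = NULL;
  g_autoptr(GError) error = NULL;

  reply = g_dbus_connection_call_sync (bus,
                                       "org.freedesktop.portal.Flatpak",
                                       "/org/freedesktop/portal/Flatpak",
                                       "org.freedesktop.portal.Flatpak",
                                       "Spawn",
                                       g_variant_new ("(^ay^aaya{uh}a{ss}ua{sv})",
                                                      "/",   /* cwd inside the sandbox */
                                                      argv,  /* command line to run */
                                                      NULL,  /* no fds */
                                                      NULL,  /* no extra environment */
                                                      0,     /* no flags */
                                                      NULL), /* no options */
                                       G_VARIANT_TYPE ("(u)"),
                                       G_DBUS_CALL_FLAGS_NONE,
                                       -1, NULL, &error);
  if (reply == NULL)
    g_warning ("Spawn failed: %s", error->message);
}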

One complication that I’ve run into when trying this out is the “unique application” pattern that is built into GApplication and similar application classes: since there is already an owner for the application ID on the session bus, my newly spawned version will just back off and exit, which is clearly not what I intended in this case.

Make it stop

The workaround I came up with is not very pretty, but functional. It requires several parts.

First, I need a “quit” action exported on the session bus. The newly spawned version will activate this action of the running instance to convince it to go away. Thankfully, my example app already had this action, for the Quit item in the app menu.

I don’t want this to happen unconditionally, but only if I am spawning a new version. To achieve this, I made my app only activate “quit” if the --replace option is present, and add that option to the commandline that I pass to the “Spawn” call.

The code for this part is less pretty than it could be, since GApplication gets in the way a bit. I have to manually check for the --replace option and do the “quit” D-Bus call by hand.

Doing the “quit” call synchronously is not quite enough to avoid a race condition between the running instance dropping the bus name and my new instance attempting to take it. Therefore, I explicitly wait for the bus name to become unowned before entering g_application_run().
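
A minimal sketch of that part, using a placeholder application ID (org.example.PortalTest stands in for whatever ID the real app owns):

#include <gio/gio.h>

static void
name_vanished_cb (GDBusConnection *bus, const char *name, gpointer data)
{
  g_main_loop_quit ((GMainLoop *) data);
}

/* Sketch: ask the already-running instance to quit via its exported "quit"
 * action, then block until the bus name is released so that our own
 * g_application_run() can take it over. */
static void
replace_running_instance (GDBusConnection *bus)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  guint watch_id;

  /* GApplication exports its actions on org.gtk.Actions at the app's object path */
  g_dbus_connection_call_sync (bus,
                               "org.example.PortalTest",
                               "/org/example/PortalTest",
                               "org.gtk.Actions",
                               "Activate",
                               g_variant_new ("(sava{sv})", "quit", NULL, NULL),
                               NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL);

  /* wait until the name is actually unowned */
  watch_id = g_bus_watch_name_on_connection (bus, "org.example.PortalTest",
                                             G_BUS_NAME_WATCHER_FLAGS_NONE,
                                             NULL, name_vanished_cb, loop, NULL);
  g_main_loop_run (loop);
  g_bus_unwatch_name (watch_id);
  g_main_loop_unref (loop);
}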

But it all works fine. To test it, I exported a “restart” action and added it to the app menu.

Tell me about it

But who can remember to open the app menu and click “Restart”? That is just too cumbersome. Thankfully, flatpak has a solution for this: when you update an app that is running, it creates a marker file named

/app/.updated

inside the sandbox for each running instance.

That makes it very easy for the app to find out when it has been updated, by just monitoring this file. Once the file appears, it can pop up a dialog that offers to restart into the newer version of the app. A good quality implementation of this will of course save and restore the state when doing this.
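
A sketch of that monitoring step could look like this (the dialog and the actual restart are left out):

#include <gio/gio.h>

static void
updated_marker_changed (GFileMonitor *monitor, GFile *file, GFile *other_file,
                        GFileMonitorEvent event, gpointer user_data)
{
  if (event == G_FILE_MONITOR_EVENT_CREATED)
    {
      /* a newer version has been deployed while we are running; this is
       * where the app would offer to save its state and restart */
      g_message ("flatpak update detected, offering restart");
    }
}

static GFileMonitor *
watch_for_updates (void)
{
  g_autoptr(GFile) marker = g_file_new_for_path ("/app/.updated");
  GFileMonitor *monitor = g_file_monitor_file (marker, G_FILE_MONITOR_NONE, NULL, NULL);

  g_signal_connect (monitor, "changed", G_CALLBACK (updated_marker_changed), NULL);
  return monitor;  /* keep this alive for as long as the app runs */
}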

Voilà, updates made easy!

You can find the working example in the portal-test repository.

August 01, 2018

How I built pipewire from source code in Linux with systemd

Lately, I have been interested in contributing to Pipewire. One interesting thing about it is that it allows you to use the same video device (for example your webcam) in different applications at the same time.

If you have a webcam, try for example in your terminal:

gst-launch-1.0 v4l2src ! xvimagesink

It will just open a window with the capture of your webcam… don’t close it yet! If you run gst-launch-1.0 v4l2src ! xvimagesink again in another terminal, you will get the following error:

Device busy

and you realize that your webcam can only be used by one application at a time, at least until you read the rest of this post.

As I said, I was interested in contributing to pipewire, so you will see how to build it from source code. But if you just want to use the one that is already provided by your distro repositories, just go to the final part.

cd ~
git clone https://github.com/PipeWire/pipewire
cd pipewire

Then you can build pipewire. But first, a caveat: at the time I tried this (2018-08-01), pipewire wanted to override my local pipewire installation (the one I had already installed from the official Fedora packages) even when I set a prefix.

[Screenshot: systemd unit directory error because of missing permissions]

This error, as you can see, is because my user does not have permissions to write into /usr/lib/systemd/user. That’s fine. Actually, I am not sure how bad or good it would be to override my system-wide pipewire installation. Luckily, a guy from the #pipewire channel with the nickname jadahl had a patch.

So if you have the same problem, use that patch. To apply it, just do:

git checkout -b systemduserunitdir
curl -O http://cfoch.github.io/assets/posts/2018-08-01-how-i-built-pipewire-from-source/patch/systemduserunitdir.diff
git apply systemduserunitdir.diff

Now we will build pipewire from source… but first, I create a directory where the pipewire files will be installed:

mkdir -p ~/env/pipewire

I also create the following directory, because the .socket and .service unit files should be installed there instead of /usr/lib/systemd/user:

mkdir ~/.local/share/systemd/user

Now, to build pipewire:

mkdir builddir
cd builddir
meson --prefix=$HOME/env/pipewire ..
meson configure -Dsystemd_user_unit_dir=$HOME/.local/share/systemd/user
ninja
ninja install

If you have all the required dependencies, pipewire will now be installed. In your ~/.bashrc file, you may want to add the following lines:

export PIPEWIRE_PREFIX=$HOME/env/pipewire
export PATH=$PIPEWIRE_PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$PIPEWIRE_PREFIX/lib64:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$PIPEWIRE_PREFIX/lib64/pkgconfig
export GST_PLUGIN_PATH_1_0=$PIPEWIRE_PREFIX/lib64/gstreamer-1.0/:$GST_PLUGIN_PATH_1_0

In my case, I didn’t add these lines to my .bashrc, because I have a script called env.sh in ~/env/pipewire/env.sh, so whenever I want to build an application that uses pipewire from master I just run source ~/env/pipewire/env.sh to enter my environment. You can download my script from here, but be warned that it may contain things that you don’t need!

Now you have pipewire installed from master. You have to start the daemon, but you don’t want to start the daemon of the pipewire that is installed from your Linux distribution’s repositories.

The solution is to tell systemd to start the daemon for the current user session:

systemctl --user start pipewire

However, when I did that, the unit failed to start:

[Screenshot: systemd failed]

I checked my log with journalctl --user -u pipewire and the problem was an “error while loading shared libraries: libpipewire-0.2.so”:

[Screenshot: systemd failed log]

For some reason, systemd wasn’t recognizing my environment. Looking at its manual page, I was really happy when I found an option to set an environment variable. Before executing the following command, make sure you don’t already have LD_LIBRARY_PATH set in systemd; you can check that with systemctl --user show-environment | grep LD_LIBRARY_PATH. If the output is empty, run this command:

systemctl --user set-environment LD_LIBRARY_PATH=$HOME/env/pipewire/lib64

Now you can start the daemon:

systemctl --user start pipewire

[Screenshot: systemd running]

Also, you will note that the output says that pipewire is loaded from the systemd user unit directory, which in my case is /home/cfoch/.local/share/systemd/user/pipewire.service. That is exactly what I want.

To test your webcam being shared by multiple applications you can try the following command in two or more terminals:

gst-launch-1.0 pipewiresrc ! videoconvert ! xvimagesink

[Screenshot: multiple pipewiresrc]

You can also try one of pipewire’s examples:

cd ~/pipewire/builddir/src/examples/
./video-play

[Screenshot: video-play sample]

Well… that’s it!
