September 21, 2020

2020-09-21 Monday

  • Mail chew; planning call; admin, marketing call.
  • Interested in Bjoern's latest blog on the filter of OSS projects. Unclear to me what a complement looks like for a pure-play FOSS company with no proprietary software, tangible hardware, or on-line infrastructure to complement. Certainly very aware of the pressure to get the price of a putative "LibreOffice Online" complement down to zero. That is in the direct financial interest of lots of "Mom & Pop" local FLOSS installers / 'supporters', and also in the interest of hosters. Unfortunately, these tend not to have invested in the commons.

GtkColumnView

One thing that I left unfinished in my recent series on list views and models in GTK 4 is a detailed look at GtkColumnView. This will easily be the most complicated part of the series. We are entering the heartland of GtkTreeView—anything aiming to replace most of its features will be a complicated beast.

Overview

As we did for GtkListView, we’ll start with a high-level overview and with a picture.

If you look back at the listview picture, you’ll remember that we use a list item factory to create a widget for each item in our model that needs to be displayed.

In a column view, we need multiple widgets for each item—one for each column. The way we do this is by giving each column its own list item factory. Whenever we need to display a new item, we combine the widgets from each column's factory into a row for the new item.
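As a rough mental model (plain Python, not the actual GTK API), the per-column factories can be thought of as functions that each produce one cell, with the row being the combination of their outputs:

```python
# A rough mental model in plain Python (this is NOT the GTK API): each
# column owns a factory, and a row is the combination of one cell from
# each column's factory.

def make_row(item, columns):
    """Ask each column's factory for a cell widget for this item."""
    return [column["factory"](item) for column in columns]

# Hypothetical columns; the "widgets" are just strings here.
columns = [
    {"title": "Name", "factory": lambda item: f"label:{item['name']}"},
    {"title": "Size", "factory": lambda item: f"label:{item['size']}"},
]

row = make_row({"name": "foo.txt", "size": 123}, columns)
assert row == ["label:foo.txt", "label:123"]
```

The real widget also has to recycle these cells as items scroll in and out of view, which the toy version ignores.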

Internally, the column view is actually using a list view to hold the rows. This is nice in that all the things I explained in the previous post about item reuse and about how to use list item factories apply just the same.

Of course, some things are different. For example, the column view has to organize the size allocation so that the widgets in all rows line up to form proper columns.

Note: Just like GtkListView, the column view only creates widgets for the segment of the model that is currently in view, so it shares the vertical scalability. The same is not true in the horizontal direction—every row is fully populated with a widget for each column, even if they are out of view to the left or right. So if you add lots of columns, things will get slow.
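A back-of-the-envelope sketch of that scaling behaviour, with purely illustrative numbers:

```python
# Back-of-the-envelope sketch of the scalability note above; the numbers
# are illustrative, not measurements.

def live_widgets(visible_rows, n_columns):
    # Only rows in (or near) the viewport get widgets at all, but every
    # such row carries one cell widget per column, visible or not.
    return visible_rows * n_columns

assert live_widgets(visible_rows=30, n_columns=5) == 150     # fine
assert live_widgets(visible_rows=30, n_columns=200) == 6000  # gets slow
```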

Titles, and other complications

The column objects contain other data as well, such as titles. The column view is using those to display a header for each column. If the column view is marked as reorderable, you can rearrange the columns by drag-and-drop of the header widgets. And if the columns are marked as resizable, you can drag the border between two columns to resize them.

If you paid attention, you may now wonder how this resizing goes together with the fact that the cells in the rows can be arbitrary widgets, which expect to have at least their minimum size available for drawing their content. The answer is that we are using another new feature of the GTK 4 rendering machinery: widgets can control how drawing outside their boundaries (by child widgets) is treated, with

 gtk_widget_set_overflow (cell, GTK_OVERFLOW_HIDDEN)

Sorting, selections, and the quest for treeview parity

Since we want to match GtkTreeView, feature-wise, we are not done yet. Another thing that users like to do in tree views is to click on headers, to sort the content by that column. GtkColumnView headers allow this, too.

You may remember from the last post that sorting is done by wrapping your data in a GtkSortListModel, and giving it a suitable sorter object. Since we want to have a different sort order, depending on what column header you clicked, we give each column its own sorter, which you can set with

gtk_column_view_column_set_sorter (column, sorter)

But how do we get the right sorter from the column you just clicked, and attach it to the sort model? Keep in mind that the sort model is not going to be the outermost model that we pass to the column view, since that is always a selection model, so the column view can’t just switch the sorter on the sort list model on its own.

The solution we’ve come up with is to make the column view provide a sorter that internally uses the column sorters, with

gtk_column_view_get_sorter (view)

You can give this sorter to your sort model once, when you set up your model, and then things will automagically update when the user clicks on column headers to activate different column sorters.

This sounds complicated, but it works surprisingly well. A nice benefit of this approach is that we can actually sort by more than one column at a time—since we have all the column sorters available, and we know which one you clicked last.
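The multi-column behaviour falls out naturally from stable sorting. A toy sketch (plain Python, not the GTK implementation), where the most recently clicked column becomes the primary key and earlier clicks survive as tie-breakers:

```python
# Toy sketch of multi-column sorting (plain Python, not the GTK
# implementation): repeatedly applying a stable sort, oldest click first,
# makes the most recently clicked column the primary key while earlier
# clicks survive as tie-breakers.

def sort_rows(rows, click_history):
    """click_history lists column names, most recent click last."""
    for key in click_history:                      # oldest click first...
        rows = sorted(rows, key=lambda r: r[key])  # ...stable each time
    return rows

rows = [
    {"name": "b", "size": 2},
    {"name": "a", "size": 2},
    {"name": "c", "size": 1},
]
# The user clicked "name", then "size": primary sort by size, ties by name.
result = sort_rows(rows, ["name", "size"])
assert [r["name"] for r in result] == ["c", "a", "b"]
```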

Selection handling is easy, by comparison. It works just the same as it does in GtkListView.

Summary

GtkColumnView is a complex widget, but I hope this series of posts will make it a little easier to start using it.

September 20, 2020

2020-09-20 Sunday

  • Worship & Daniel sermon together in the morning. Relaxed variously; Numberzilla - so trivial yet engaging, hey ho.

Oculus Rift CV1 progress

In my last post here, I gave some background on how Oculus Rift devices work, and promised to talk more about Rift S internals. I’ll do that another day – today I want to provide an update on implementing positional tracking for the Rift CV1.

I was working on CV1 support quite a lot earlier in the year, and then I took a detour to get basic Rift S support in place. Now that the Rift S works as a 3DOF device, I’ve gone back to plugging away at getting full positional support on the older CV1 headset.

So, what have I been doing on that front? Back in March, I posted this video of a new tracking algorithm I was working on to match LED constellations to object models to get the pose of the headset and controllers:

The core of this matching is a brute-force search that (somewhat cleverly) takes permutations of observed LEDs in the video footage and tests them against permutations of LEDs from the device models. It then uses an implementation of the Lambda Twist P3P algorithm (courtesy of pH5) to compute the possible poses for each combination of LEDs. Next, it projects the points of the candidate pose to count how many other LED blobs will match LEDs in the 3D model and how closely. Finally, it computes a fitness metric and tracks the ‘best’ pose match for each tracked object.
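To make the structure of that search concrete, here is a heavily simplified toy version in Python. It replaces the Lambda Twist P3P solver with a 2D translation-only "pose" hypothesis, but keeps the same shape: pair an observed blob with a model LED, hypothesize a pose, project the model, and score by counting matching blobs.

```python
# Heavily simplified toy version of the correspondence search (NOT the
# OpenHMD code): the Lambda Twist P3P solver is replaced by a 2D
# translation-only "pose" hypothesis, but the overall shape is the same.

def score(offset, model, blobs, tol=0.1):
    """Count model LEDs that land near an observed blob under this pose."""
    dx, dy = offset
    hits = 0
    for mx, my in model:
        px, py = mx + dx, my + dy  # "project" the model LED
        if any(abs(px - bx) <= tol and abs(py - by) <= tol
               for bx, by in blobs):
            hits += 1
    return hits

def best_pose(model, blobs):
    best = None
    for mx, my in model:                 # brute force: pair every model LED
        for bx, by in blobs:             # with every observed blob
            offset = (bx - mx, by - my)  # candidate "pose"
            s = score(offset, model, blobs)
            if best is None or s > best[1]:
                best = (offset, s)
    return best

model = [(0, 0), (1, 0), (0, 1)]
blobs = [(5, 5), (6, 5), (5, 6), (9, 9)]  # model shifted by (5, 5) + noise
assert best_pose(model, blobs) == ((5, 5), 3)
```

The real search is far more expensive because a 3D pose has six degrees of freedom, which is why the full-model search stalls the pipeline when it has to run.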

For the video above, I had the algorithm implemented as a GStreamer plugin that ran offline to process a recording of the device movements. I’ve now merged it back into OpenHMD, so it runs against the live camera data. When it runs in OpenHMD, it also has access to the IMU motion stream, which lets it predict motion between frames – which can help with retaining the tracking lock when the devices move around.

This weekend, I used that code to close the loop between the camera and the IMU. It’s a little simple for now, but is starting to work. What’s in place at the moment is:

  • At startup time, the devices track their movement only as 3DOF devices with default orientation and position.
  • When a camera view gets the first “good” tracking lock on the HMD, it takes that as the zero position for the headset, and uses it to compute the pose of the camera relative to the playing space.
  • Camera observations of the position and orientation are now fed back into the IMU fusion to update the position and correct for yaw drift on the IMU (vertical orientation is still taken directly from the IMU detection of gravity).
  • Between camera frames, the IMU interpolates the orientation and position.
  • When a new camera frame arrives, the current interpolated pose is transformed back into the camera’s reference frame and used to test whether we still have a visual lock on the device’s LEDs, and to label any newly appearing LEDs if they match the tracked pose.
  • The device’s pose is refined using all visible LEDs and fed back to the IMU fusion.
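The steps above amount to a predict/correct loop. A toy one-dimensional sketch (not the OpenHMD code) of how camera observations keep IMU yaw drift bounded:

```python
# Toy one-dimensional sketch of the predict/correct loop (NOT the OpenHMD
# code): the IMU integrates yaw between camera frames and accumulates
# drift; each camera observation pulls the estimate back toward truth.

def imu_step(yaw, gyro_rate, dt, drift=0.01):
    return yaw + (gyro_rate + drift) * dt     # integration accumulates drift

def camera_correction(yaw, observed_yaw, gain=0.5):
    return yaw + gain * (observed_yaw - yaw)  # blend toward the observation

yaw, true_yaw = 0.0, 0.0   # device is actually stationary
for frame in range(100):
    for _ in range(10):    # ten IMU samples per camera frame
        yaw = imu_step(yaw, gyro_rate=0.0, dt=0.01)
    yaw = camera_correction(yaw, observed_yaw=true_yaw)

assert abs(yaw - true_yaw) < 0.01  # drift stays bounded, not unbounded
```

With no correction step, the same drift would grow without bound; a proper Kalman filter additionally weighs each correction by how much it trusts the observation.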

With this simple loop in place, OpenHMD can now track multiple devices, and can do it using multiple cameras – somewhat. The first time the tracking block associated with a camera thinks it has a good lock on the HMD, it uses that to compute the pose of that camera. As long as the lock is genuinely good at that point, and the pose the IMU fusion is tracking is good – then the relative pose between all the cameras is consistent and the tracking is OK. However, it’s easy for that to go wrong and end up with an inconsistency between different camera views that leads to jittery or jumpy tracking….

In the best case, it looks like this:

Which I am pretty happy with 🙂

In that test, I was using a single tracking camera, and had the controller sitting on the desk where the camera couldn’t see it, which is why it was just floating there. Despite the fact that SteamVR draws it with a Vive controller model, the various controller buttons and triggers work, but there’s still something weird going on with the controller tracking.

What next? I have a list of known problems and TODO items to tackle:

  • The full model search for re-acquiring lock when we start, or when we lose tracking, takes a long time. More work is needed to avoid that expensive path as much as possible.
  • Multiple cameras interfere with each other.
    • Capturing frames from all cameras and analysing them happens on a single thread, and any delay in processing causes USB packets to be missed.
    • I plan to split this into 1 thread per camera doing capture and analysis of the ‘quick’ case with good tracking lock, and a 2nd thread that does the more expensive analysis when it’s needed.
  • At the moment the full model search also happens on the video capture thread, stalling all video input for hundreds of milliseconds – by which time any fast motion means the devices are no longer where we expect them to be.
    • This means that by the next frame, it has often lost tracking again, requiring a new full search… making it late for the next frame, etc.
    • The latency of position observations after a full model search is not accounted for at all in the current fusion algorithm, leading to incorrect reporting.
  • More validation is needed on the camera pose transformations. For the controllers, the results are definitely wrong – I suspect because the controller LED models are supplied (in the firmware) in a different orientation to the HMD and I used the HMD as the primary test.
  • Need to take the position and orientation of the IMU within each device into account. This information is in the firmware but is ignored right now.
  • Filtering! This is a big ticket item. The quality of the tracking depends on many pieces – how well the pose of devices is extracted from the computer vision and how quickly, and then very much on how well the information from the device IMU is combined with those observations. I have read so many papers on this topic, and started work on a complex Kalman filter for it.
  • Improve the model to LED matching. I’ve done quite a bit of work on refining the model matching algorithm, and it works very well for the HMD. It struggles more with the controllers, where there are fewer LEDs and the 2 controllers are harder to disambiguate. I have some things to try out for improving that – using the IMU orientation information to disambiguate controllers, and using better models for what size/brightness we expect an LED to be for a given pose.
  • Initial calibration / setup. Rather than assuming the position of the headset when it is first sighted, I’d like to have a room calibration step and a calibration file that remembers the position of the cameras.
  • Detecting when cameras have been moved. When cameras observe the same device simultaneously (or nearly so), it should be possible to detect if cameras are giving inconsistent information and do some correction.
  • Hot-plug detection of cameras, and re-starting them when they go offline or encounter spurious USB protocol errors. The latter happens often enough to be annoying during testing.
  • Other things I can’t think of right now.

A nice side effect of all this work is that it can all feed in later to Rift S support. The Rift S uses inside-out tracking to determine the headset’s position in the world – but the filtering to combine those observations with the IMU data will be largely the same, and once you know where the headset is, finding and tracking the controller LED constellations still looks a lot like the CV1’s system.

If you want to try it out, or take a look at the code – it’s up on Github. I’m working in the rift-correspondence-search branch of my OpenHMD repository at https://github.com/thaytan/OpenHMD/tree/rift-correspondence-search

September 17, 2020

Games 3.38

Games 3.38.0

I wanted to start this blog post with “It’s that time of year again”, but looks like Michael beat me to it. So, let’s take a look at some of the changes in GNOME Games 3.38:

retro-gtk 1.0

The library Games uses to implement its Libretro frontend, retro-gtk, has been overhauled this cycle. I’ve already covered the major changes in a previous blog post, but to recap:

  • Cores now run in a separate process. This provides better isolation: a crashing core will display an error screen instead of crashing the whole app. It can also improve performance in cases where the window takes a long time to redraw, for example with fractional scaling.
  • Libretro cores that require OpenGL should now work correctly.
  • The core timing should now be more accurate.
  • Fast-forwarding should actually work, although we don’t make use of it in Games at the moment.

Finally, retro-gtk now has proper docs, published here to go along with the stable API.

Nintendo 64 support

The Legend of Zelda: Ocarina of Time running in Games 3.38, with controller pak switcher open

This was on the radar for a while, but wasn’t possible because all available Nintendo 64 cores use hardware acceleration. Now that retro-gtk supports OpenGL, we can ship the ParaLLEl N64 core, and it just works. There’s even a menu to switch between Controller Pak and Rumble Pak.

Unfortunately, retro-gtk still doesn’t support Vulkan rendering, so we can’t enable the fast and accurate ParaLLEl RDP renderer yet.

Collections

Collections in Games 3.38

Neville Antony has been working on implementing collections as part of his GSoC project. So far we have favorites, recently played games and user-created collections.

You can read more about the project in Neville’s blog.

Faster loading

In my previous blog post I already mentioned some improvements to the app startup time, but noticeable progress has been made since then. Newly added aggressive caching allows the app to show the collection almost instantly after the first run, and then it can scan and update the collection in the background.

Search provider

Search Provider in Games 3.38

Having a complete cache of the collection also made it easy to implement a fast search provider, so the collection can now be searched directly from GNOME Shell search.

Nintendo DS screen gap

The Nintendo DS has two screens. Most games use one of the screens to display the game itself, and the other one for status or a map. However, some games use both screens and rely on the fact that they are arranged vertically with a gap between them.

If the two screens are shown immediately adjacent to each other, these games look confusing, so in 3.38 a screen gap will be automatically added when using vertical mode.

Sonic Colors in Games 3.36 Sonic Colors in Games 3.38

The size of the screen gap is optimized for a few known games, such as Contra 4, Sonic Colors, or Yoshi’s Island DS; other games use a generic value of 80 pixels.
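The lookup described above is simple enough to sketch. The 80-pixel default comes from the text; the per-game values here are made-up placeholders, not the sizes Games actually ships.

```python
# Sketch of the per-game gap lookup; the 80-pixel default is from the
# text above, but the per-game values here are made-up placeholders.

KNOWN_GAPS = {
    "Contra 4": 90,           # placeholder value
    "Sonic Colors": 88,       # placeholder value
    "Yoshi's Island DS": 86,  # placeholder value
}
DEFAULT_GAP = 80  # generic value used for all other games

def screen_gap(title):
    return KNOWN_GAPS.get(title, DEFAULT_GAP)

assert screen_gap("Contra 4") == 90
assert screen_gap("Some Other Game") == 80
```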

If you know another game that needs to use a specific gap size, please mention it in the comments or open an issue on GitLab.

Gamepad hot plugging in Flatpak

Previously, a major limitation of the Flatpak version was that gamepads had to be plugged in before starting the app. Thanks to a change in libmanette 0.2.4, this has been fixed and gamepads can now be connected or disconnected at any time.

Technically, this isn’t a 3.38 addition, and in fact the 3.36.1 build on Flathub already supports it. Nevertheless, it was done since my last blog post and so here we are. :)

Miscellaneous changes

Games covers in Games 3.38

Game covers now look prettier thanks to Neville: they are now rounded and have a blurred version of the cover as background.

Preferences window in Games 3.38

The preferences window has been overhauled to be more similar to the libhandy preferences; unfortunately we can’t use HdyPreferencesWindow yet because of some missing functionality.

Controller testing in Games 3.38

When testing or re-mapping a controller, analog stick indicators now show the stick’s precise position instead of just highlighting one of the edges.

Search empty state in Games 3.38

Veerasamy Sevagen implemented an empty state screen for the collection search.

Swipe to go back in Games 3.38

Swipe gestures to go back are now supported throughout the app. The only place that lacks the gesture now is exiting from a running game, because the game may require pointer input, and having a swipe there may cause conflicts.

Getting Games

Download on Flathub

As always, the latest version of the app is available on Flathub.

September 16, 2020

The GNOME Extensions Rebooted Initiative

With the new release of GNOME 3.38 out the door, we want to start focusing next cycle on improving the GNOME Extensions experience.

I’m using my blog for now, but we will have an extensions blog where we can start chatting about what’s going on in this important space.

What Extensions Rebooted Initiative is about

Most people are aware that a large number of extensions break after each release, which causes a lot of friction in the community.

Extensions Rebooted is a collaborative effort to address the issues around the GNOME Shell extension ecosystem. We want to start addressing this by making a number of policy changes and technological improvements while building a sustainable community.

Here are some highlights of how we plan to create a better experience for GNOME extensions:

  • Proper documentation of how extensions work, what is reasonably expected of an extension developer, and how to participate in the GNOME extensions community.
  • Building a CI pipeline (a virtual machine) for extension writers to test their extensions prior to GNOME releases.
  • Centralizing extensions for break testing in the GNOME GitLab space.
  • Creating a forum for extension developers and extension writers to work together during the GNOME release cycle.

To appreciate and expand on the details of this project, you should check out the Extensions Rebooted BoF at the last GUADEC and my GUADEC talk.

The Extensions Rebooted initiative’s ultimate goal is to get the extensions community to work with each other, have closer ties with GNOME Shell developers, and provide documentation and tools.

Extension writers are encouraged to get involved and build this better experience. Consumers of extensions are requested to help spread the word and encourage extensions developers to participate so we can all benefit.

To get involved:

GNOME Discourse:

  • Use the “extensions” tag when submitting questions about extensions.

Chat:

Gitlab:

The success of GNOME extensions cannot happen without participation and contributions from the community, and so I hope that all of us who write extensions, who are interested in providing technical documentation, or who have experience in CI pipelines/devops can come together and make extensions a sustainable part of the GNOME ecosystem.

The next post will talk about using a pre-built VM image that extension developers can use to test their extensions and have them ready to be used prior to GNOME 3.38 appearing on distributions.

Epiphany 3.38 and WebKitGTK 2.30

It’s that time of year again: a new GNOME release, and with it, a new Epiphany. The pace of Epiphany development has increased significantly over the last few years thanks to an increase in the number of active contributors. Most notably, Jan-Michael Brummer has solved dozens of bugs and landed many new enhancements, Alexander Mikhaylenko has polished numerous rough edges throughout the browser, and Andrei Lisita has landed several significant improvements to various Epiphany dialogs. That doesn’t count the work that Igalia is doing to maintain WebKitGTK, the WPE graphics stack, and libsoup, all of which is essential to delivering quality Epiphany releases, nor the work of the GNOME localization teams to translate it to your native language. Even if Epiphany itself is only the topmost layer of this technology stack, having more developers working on Epiphany itself allows us to deliver increased polish throughout the user interface layer, and I’m pretty happy with the result. Let’s take a look at what’s new.

Intelligent Tracking Prevention

Intelligent Tracking Prevention (ITP) is the headline feature of this release. Safari has had ITP for several years now, so if you’re familiar with how ITP works to prevent cross-site tracking on macOS or iOS, then you already know what to expect here.  If you’re more familiar with Firefox’s Enhanced Tracking Protection, or Chrome’s nothing (crickets: chirp, chirp!), then WebKit’s ITP is a little different from what you’re used to. ITP relies on heuristics that apply the same to all domains, so there are no blocklists of naughty domains that should be targeted for content blocking like you see in Firefox. Instead, a set of innovative restrictions is applied globally to all web content, and a separate set of stricter restrictions is applied to domains classified as “prevalent” based on your browsing history. Domains are classified as prevalent if ITP decides the domain is capable of tracking your browsing across the web, or non-prevalent otherwise. (The public-friendly terminology for this is “Classification as Having Cross-Site Tracking Capabilities,” but that is a mouthful, so I’ll stick with “prevalent.” It makes sense: domains that are common across many websites can track you across many websites, and domains that are not common cannot.)

ITP is enabled by default in Epiphany 3.38, as it has been for several years now in Safari, because otherwise only a small minority of users would turn it on. ITP protections are designed to be effective without breaking too many websites, so it’s fairly safe to enable by default. (You may encounter a few broken websites that have not been updated to use the Storage Access API to store third-party cookies. If so, you can choose to turn off ITP in the preferences dialog.)

For a detailed discussion covering ITP’s tracking mitigations, see Tracking Prevention in WebKit. I’m not an expert myself, but the short version is this: full third-party cookie blocking across all websites (to store a third-party cookie, websites must use the Storage Access API to prompt the user for permission); cookie-blocking latch mode (“once a request is blocked from using cookies, all redirects of that request are also blocked from using cookies”); downgraded third-party referrers (“all third-party referrers are downgraded to their origins by default”) to avoid exposing the path component of the URL in the referrer; blocked third-party HSTS (“HSTS […] can only be set by the first-party website […]”) to stop abuse by tracker scripts; detection of cross-site tracking via link decoration and 24-hour expiration time for all cookies created by JavaScript on the landing page when detected; a 7-day expiration time for all other cookies created by JavaScript (yes, this applies to first-party cookies); and a 7-day extendable lifetime for all other script-writable storage, extended whenever the user interacts with the website (necessary because tracking companies began using first-party scripts to evade the above restrictions). Additionally, for prevalent domains only, domains engaging in bounce tracking may have cookies forced to SameSite=strict, and Verified Partitioned Cache is enabled (cached resources are re-downloaded after seven days and deleted if they fail certain privacy tests). Whew!
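As one concrete example of those restrictions, the script-written cookie expiry caps could be sketched like this. The 7-day and 24-hour figures come from the text; the code itself is a hypothetical illustration, not WebKit's implementation.

```python
# Hypothetical illustration (NOT WebKit's implementation) of the
# script-written cookie expiry caps described above; the 7-day and
# 24-hour figures come from the text.

DAY = 24 * 60 * 60  # seconds

def clamp_js_cookie_expiry(requested_secs, link_decoration_detected=False):
    """Cap the lifetime of a cookie created by JavaScript."""
    cap = 1 * DAY if link_decoration_detected else 7 * DAY
    return min(requested_secs, cap)

assert clamp_js_cookie_expiry(365 * DAY) == 7 * DAY
assert clamp_js_cookie_expiry(365 * DAY, link_decoration_detected=True) == DAY
assert clamp_js_cookie_expiry(3600) == 3600  # short requests are untouched
```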

WebKit has many additional privacy protections not tied to the ITP setting and therefore not discussed here — did you know that cached resources are partitioned based on the first-party domain? — and there’s more that’s not very well documented which I don’t understand and haven’t mentioned (tracker collusion!), but that should give you the general idea of how sophisticated this is relative to, say, Chrome (chirp!). Thanks to John Wilander from Apple for his work developing and maintaining ITP, and to Carlos Garcia for getting it working on Linux. If you’re interested in the full history of how ITP has evolved over the years to respond to the changing threat landscape (e.g. tracking prevention tracking), see John’s WebKit blog posts. You might also be interested in WebKit’s Tracking Prevention Policy, which I believe is the strictest anti-tracking stance of any major web engine. TL;DR: “we treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities. If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice.” No exceptions.

Updated Website Data Preferences

As part of the work on ITP, you’ll notice that Epiphany’s cookie storage preferences have changed a bit. Since ITP enforces full third-party cookie blocking, it no longer makes sense to have a separate cookie storage preference for that, so I replaced the old tri-state cookie storage setting (always accept cookies, block third-party cookies, block all cookies) with two switches: one to toggle ITP, and one to toggle all website data storage.

Previously, it was only possible to block cookies, but this new setting will additionally block localStorage and IndexedDB, web features that allow websites to store arbitrary data in your browser, similar to cookies. It doesn’t really make much sense to block cookies but allow other types of data storage, so the new preferences should better enforce the user’s intent behind disabling cookies. (This preference does not yet block media keys, service workers, or legacy offline web application cache, but it probably should.) I don’t really recommend disabling website data storage, since it will cause compatibility issues on many websites, but this option is there for those who want it. Disabling ITP is also not something I want to recommend, but it might be necessary to access certain broken websites that have not yet been updated to use the Storage Access API.

Accordingly, Andrei has removed the old cookies dialog and moved cookie management into the Clear Personal Data dialog, which is a better place because anyone clearing cookies for a particular website is likely to also want to clear other personal data. (If you want to delete a website’s cookies, then you probably don’t want to leave its SQL databases intact, right?) He had to remove the ability to clear data from a particular point in time, because WebKit doesn’t support this operation for cookies, but that function is probably rarely used and I think the benefit of the change should outweigh the cost. (We could bring it back in the future if somebody wants to try implementing that feature in WebKit, but I suspect not many users will notice.) Treating cookies as separate and different from other forms of website data storage no longer makes sense in 2020, and it’s good to have finally moved on from that antiquated practice.

New HTML Theme

Carlos Garcia has added a new Adwaita-based HTML theme to WebKitGTK 2.30, and removed support for rendering HTML elements using the GTK theme (except for scrollbars). Trying to use the GTK theme to render web content was fragile and caused many web compatibility problems that nobody ever managed to solve. The GTK developers were never very fond of us doing this in the first place, and the foreign drawing API required to do so has been removed from GTK 4, so this was also good preparation for getting WebKitGTK ready for GTK 4. Carlos’s new theme is similar to Adwaita, but gradients have been toned down or removed in order to give a flatter, neutral look that should blend in nicely with all pages while still feeling modern.

This should be a fairly minor style change for Adwaita users, but a very large change for anyone using custom themes. I don’t expect everyone will be happy, but please trust that this will at least result in better web compatibility and fewer tricky theme-related bug reports.

Screenshot demonstrating new HTML theme vs. GTK theme
Left: Adwaita GTK theme controls rendered by WebKitGTK 2.28. Right: hardcoded Adwaita-based HTML theme with toned down gradients.

Although scrollbars will still use the GTK theme as of WebKitGTK 2.30, that will no longer be possible to do in GTK 4, so themed scrollbars are almost certain to be removed in the future. That will be a noticeable disappointment in every app that uses WebKitGTK, but I don’t see any likely solution to this.

Media Permissions

Jan-Michael added new API in WebKitGTK 2.30 to allow muting individual browser tabs, and hooked it up in Epiphany. This is good when you want to silence just one annoying tab without silencing everything.

Meanwhile, Charlie Turner added WebKitGTK API for managing autoplay policies. Videos with sound are now blocked from autoplaying by default, while videos with no sound are still allowed. Charlie hooked this up to Epiphany’s existing permission manager popover, so you can change the behavior for websites you care about without affecting other websites.

Screenshot displaying new media autoplay permission settings
Configure your preferred media autoplay policy for a website near you today!

Improved Dialogs

In addition to his work on the Clear Data dialog, Andrei has also implemented many improvements and squashed bugs throughout each view of the preferences dialog, the passwords dialog, and the history dialog, and refactored the code to be much more maintainable. Head over to his blog to learn more about his accomplishments. (Thanks to Google for sponsoring Andrei’s work via Google Summer of Code, and to Alexander for help mentoring.)

Additionally, Adrien Plazas has ported the preferences dialog to use HdyPreferencesWindow, bringing a pretty major design change to the view switcher:

Screenshot showing changes to the preferences dialog
Left: Epiphany 3.36 preferences dialog. Right: Epiphany 3.38. Note the download settings are present in the left screenshot but missing from the right screenshot because the right window is running in Flatpak, and the download settings are unavailable in Flatpak.

User Scripts

User scripts (like Greasemonkey) allow you to run custom JavaScript on websites. WebKit has long offered user script functionality alongside user CSS, but previous versions of Epiphany only exposed user CSS. Jan-Michael has added the ability to configure a user script as well. To enable, visit the Appearance tab in the preferences dialog (a somewhat odd place, but it really needs to be located next to user CSS due to the tight relationship there). Besides allowing you to do, well, basically anything, this also significantly enhances the usability of user CSS, since now you can apply certain styles only to particular websites. The UI is a little primitive — your script (like your CSS) has to be one file that will be run on every website, so don’t try to design a complex codebase using your user script — but you can use conditional statements to limit execution to specific websites as you please, so it should work fairly well for anyone who has need of it. I fully expect 99.9% of users will never touch user scripts or user styles, but it’s nice for power users to have these features available if needed.

HTTP Authentication Password Storage

Jan-Michael and Carlos Garcia have worked to ensure HTTP authentication passwords are now stored in Epiphany’s password manager rather than by WebKit, so they can now be viewed and deleted from Epiphany, which required some new WebKitGTK API to do properly. Unfortunately, WebKitGTK saves network passwords using the default network secret schema, meaning its passwords (saved by older versions of Epiphany) are all leaked: we have no way to know which application owns those passwords, so we don’t have any way to know which passwords were stored by WebKit and which can be safely managed by Epiphany going forward. Accordingly, all previously-stored HTTP authentication passwords are no longer accessible; you’ll have to use seahorse to look them up manually if you need to recover them. HTTP authentication is not very commonly-used nowadays except for internal corporate domains, so hopefully this one-time migration snafu will not be a major inconvenience to most users.

New Tab Animation

Jan-Michael has added a new animation when you open a new tab. If the newly-created tab is not visible in the tab bar, then the right arrow will flash to indicate success, letting you know that you actually managed to open the page. Opening tabs out of view happens too often currently, but at least it’s a nice improvement over not knowing whether you actually managed to open the tab or not. This will be improved further next year, because Alexander is working on a completely new tab widget to replace GtkNotebook.

New View Source Theme

Jim Mason changed view source mode to use a highlight.js theme designed to mimic Firefox’s syntax highlighting, and added dark mode support.

Image showing dark mode support in view source mode
Embrace the dark.

And More…

  • WebKitGTK 2.30 now supports video formats in image elements, thanks to Philippe Normand. You’ll notice that short GIF-style videos will now work on several major websites where they previously didn’t.
  • I added a new WebKitGTK 2.30 API to expose the paste as plaintext editor command, which was previously internal but fully-functional. I’ve hooked it up in Epiphany’s context menu as “Paste Text Only.” This is nice when you want to discard markup when pasting into a rich text editor (such as the WordPress editor I’m using to write this post).
  • Jan-Michael has implemented support for reordering pinned tabs. You can now drag to reorder pinned tabs any way you please, subject to the constraint that all pinned tabs stay left of all unpinned tabs.
  • Jan-Michael added a new import/export menu, and the bookmarks import/export features have moved there. He also added a new feature to import passwords from Chrome. Meanwhile, ignapk added support for importing bookmarks from HTML (compatible with Firefox).
  • Jan-Michael added a new preference to web apps to allow running them in the background. When enabled, closing the window will only hide the window: everything will continue running. This is useful for mail apps, music players, and similar applications.
  • Continuing Jan-Michael’s list of accomplishments, he removed Epiphany’s previous hidden setting to set a mobile user agent header after discovering that it did not work properly, and replaced it by adding support in WebKitGTK 2.30 for automatically setting a mobile user agent header depending on the chassis type detected by logind. This results in a major user experience improvement when using Epiphany as a mobile browser. Beware: this functionality currently does not work in flatpak because it requires the creation of a new desktop portal.
  • Stephan Verbücheln has landed multiple fixes to improve display of favicons on hidpi displays.
  • Zach Harbort fixed a rounding error that caused the zoom level to display oddly when changing zoom levels.
  • Vanadiae landed some improvements to the search engine configuration dialog (with more to come) and helped investigate a crash that occurs when using the “Set as Wallpaper” function under Flatpak. The crash is pretty tricky, so we wound up disabling that function under Flatpak for now. He also updated screenshots throughout the user help.
  • Sabri Ünal continued his effort to document and standardize keyboard shortcuts throughout GNOME, adding a few missing shortcuts to the keyboard shortcuts dialog.

Epiphany 3.38 will be the final Epiphany 3 release, concluding a decade of releases that start with 3. We will match GNOME in following a new version scheme going forward, dropping the leading 3 and the confusing even/odd versioning. Onward to Epiphany 40!

September 15, 2020

Want GCC's cleanup attribute in Visual Studio? Here's what to do.

A common pain point for people writing cross platform C code is that they can't use GCC's cleanup attribute and by extension GLib's g_auto family of macros. I had a chat with some Microsoft people and it turns out it may be possible to get this functionality added to Visual Studio. They add features based on user feedback. Therefore, if you want to see this functionality added to VS, here's what you should do:

  1. Create a Microsoft account if you don't already have one.
  2. Upvote this issue.
  3. Spread the word to other interested people.


September 14, 2020

GUADEC 2020

tl;dr: The virtual GUADEC 2020 conference had negligible carbon emissions, on the order of 100× lower than the in-person 2019 conference. Average travel to the 2019 conference was 10% of each person’s annual carbon budget. 2020 had increased inclusiveness; but had the downside of a limited social scene. What can we do to get the best of both for 2021?

It’s been several weeks since GUADEC 2020 was held, and this release cycle of GNOME is coming to a close. It’s been an interesting year. The conference was a different experience from normal, and despite missing seeing everyone in person I thought it went very well. Many thanks to the organising team and especially the sysadmin team. I’m glad an online conference was possible, and happy that it allowed many people to attend who can’t normally do so. I hope we can incorporate the best parts of this year into future conferences.

Measuring things

During the conference, with the help of Bart, I collected some data about the resource consumption of the servers during GUADEC. After a bit of post-processing, it looks like the conference emitted on the order of 0.5–1tCO2e (tonnes of carbon dioxide equivalent, the measure of global warming potential). These emissions were from the conference servers (21% of the total), network traffic (55%), and an estimate of the power used by people’s home computers while watching talks (24%).

By way of contrast, there were estimated emissions of 110tCO2e for travel to and from GUADEC 2019 in Thessaloniki. Travel emissions are likely to be the bulk of the emissions from that conference (insufficient data is available to estimate the other costs, such as building use, food, events, etc.). Of those travel emissions, 98% were from flights, and 79% of attendees flew. The lowest emissions for a return flight were a bit under 0.3tCO2e, the highest were around 3tCO2e, and the mode was the bracket [0.3, 0.6)tCO2e.

This shows quite a contrast between in-person and virtual conferences — a factor of 100 difference in carbon emissions. The conference in Thessaloniki (which I’m focusing on because I’ve got data for it from the post-conference survey, not because it was particularly unusual) had 198 registered attendees, and modal transport emissions per attendee of 0.42tCO2e.

Does it matter?

The recommended personal carbon budget for 2019/2020 is 4.1tCO2e, and it decreases each year until we reach emissions which are compatible with 2°C of global warming in 2050. That means that everyone should only emit 4.1tCO2e or less, per year. Modal emissions of 0.42tCO2e per person attending the 2019 conference is 10% of their carbon budget.

Other emissions pathways give lower budgets sooner, and perhaps would be better goals (2°C of global warming is a lot).

Everyone is in charge of their own carbon budgeting, and how they choose to spend it. It’s possible to spend 10% of your annual budget on one conference and still come in under-budget for the year, but it’s not easy.

For this reason, and for the reasons of inclusiveness which we saw at GUADEC 2020, I hope we keep virtual participation as a first-class part of GUADEC in future. It would be good to explore ways of keeping the social aspects of an in-person conference without completely returning to the previous model of flying everyone to one place.

What about 2021?

I say ‘2021’, but please take this to mean ‘next time it’s safe to host an international in-person conference’.

Looking at the breakdown of transport emissions for GUADEC 2019 by mode, flights are the big target for emissions reductions (note the logarithmic scale):

GUADEC 2019 transport emissions by mode (note logarithmic scale)

Splitting the flights up by length shows that the obvious approach of encouraging international train travel instead of short-haul flights (emissions bins up to 1.2tCO2e/flight in the graph below) would not have got us more than a 38% reduction in transport emissions for Thessaloniki, but that’s a pretty good start.

GUADEC 2019 total flight emissions breakdown by flight length (where flight length is bucketed by return emissions; a lower emissions bucket means a shorter return flight)

Would a model where we had per-continent or per-country in-person meetups, all attending a larger virtual conference, have significantly lower emissions? Would it bring back enough of the social atmosphere?

Something to think about for GUADEC 2021! If you have any comments or suggestions, or have spotted any mistakes in this analysis, please get in touch. The data is available here.

Thanks to Will Thompson for proofreading.

The post GUADEC 2020 first appeared on Philip Withnall.

September 13, 2020

Oxidizing portals with zbus

One major pain point of writing a desktop application that interacts with the user's desktop is that a "simple" task can easily become complex. If you were writing a colour palette generator and wanted to pick a colour, how would you do that?

GNOME Shell, for example, provides a DBus interface, org.gnome.Shell.Screenshot, that you can communicate with by calling the PickColor method. The method returns a HashMap containing a single key: {"color": [f64; 3]}. Thankfully, with zbus, calling a DBus method is pretty straightforward.

use std::collections::HashMap;

use zbus::dbus_proxy;
use zvariant::OwnedValue;

#[dbus_proxy(interface = "org.gnome.Shell.Screenshot", default_path = "/org/gnome/Shell/Screenshot")]
trait Screenshot {
    // zbus converts the method names from PascalCase to snake_case.
    fn pick_color(&self) -> zbus::Result<HashMap<String, OwnedValue>>;
}

By using the dbus_proxy macro, we can generate a Proxy containing the only method we care about. We can then use our auto-generated ScreenshotProxy to call the pick colour method:

fn main() -> zbus::fdo::Result<()> {
    let connection = zbus::Connection::new_session()?;
    let proxy = ScreenshotProxy::new(&connection)?;
    
    let reply = proxy.pick_color()?;
    println!("{:#?}", reply); 
    // You can grab the color as Vec<f64> using
    let color = reply
            .get("color")
            .unwrap()
            .downcast_ref::<zvariant::Structure>()
            .unwrap()
            .fields()
            .iter()
            .map(|c| *c.downcast_ref::<f64>().unwrap())
            .collect::<Vec<f64>>();
    Ok(())
}

That looks simple if you're targeting GNOME Shell as the only desktop environment. Supporting other desktop environments, like KDE or any X11-based one, would require more code on the application developer's side.

Introducing portals

Portals, as in XDG portals, are a set of DBus interfaces, defined in a specification, that a desktop environment can implement. An application developer can then communicate with the XDG portal DBus interface instead of the desktop-environment-specific one. The portal implementation will take care of calling the available backend. By their nature, the portals aren't tied to Flatpak, except for the few portals that were made especially for Flatpak'ed applications, like the update monitor one.

You can find the list of the available portals by looking at the specifications. Let's try to reimplement the same thing using the XDG portal instead of the GNOME Shell one. As portals require user interaction, most of the method calls in the various portals return an ObjectPath like this one: /org/freedesktop/portal/desktop/request/SENDER/TOKEN, which represents a Request. We should then listen for a Response signal to get the result.

use zvariant::{OwnedObjectPath, OwnedValue};
use zbus::{dbus_proxy, fdo::Result};
use std::collections::HashMap;

#[dbus_proxy(
    interface = "org.freedesktop.portal.Screenshot",
    default_service = "org.freedesktop.portal.Desktop",
    default_path = "/org/freedesktop/portal/desktop"
)]
/// The interface lets sandboxed applications request a screenshot.
trait Screenshot {
    fn pick_color(
        &self,
        parent_window: &str,
        options: HashMap<String, OwnedValue>,
    ) -> Result<OwnedObjectPath>;
}

Calling the pick colour method will now give us an object path instead of the result that would normally contain the colour.

let connection = zbus::Connection::new_session()?;
let proxy = ScreenshotProxy::new(&connection)?;
// We don't have a window identifier, see https://flatpak.github.io/xdg-desktop-portal/portal-docs.html#parent_window
let request_handle = proxy.pick_color("", HashMap::new())?;
println!("{:#?}", request_handle.as_str());

Now that we have the necessary object path, we can listen for a Response signal. As of today, zbus doesn't provide a higher-level API to await a response signal, so we will make do with a simple loop, breaking out of it once we have received a Response signal.

loop {
    let msg = connection.receive_message()?;
    let msg_header = msg.header()?;
    if msg_header.message_type()? == zbus::MessageType::Signal
        && msg_header.member()? == Some("Response")
    {
        // We can retrieve the body here, but we need to de-serialize it.
        let response = msg.body::<T>()?;
    }
}

The type T here should implement zvariant::Type & serde::de::DeserializeOwned. From the signal documentation, we can figure out the struct we would need to de-serialize a typical response:

use serde_repr::{Deserialize_repr, Serialize_repr};
use serde::{Serialize, Deserialize};
use zvariant_derive::Type;

#[derive(Serialize_repr, Deserialize_repr, PartialEq, Debug, Type)]
#[repr(u32)]
enum ResponseType {
    /// Success, the request is carried out
    Success = 0,
    /// The user cancelled the interaction
    Cancelled = 1,
    /// The user interaction was ended in some other way
    Other = 2,
}

#[derive(Serialize, Deserialize, Debug, Type)]
pub struct Response(ResponseType, HashMap<String, OwnedValue>);

We can now de-serialize the body into a Response and call a FnOnce on it. The HashMap, in the case of a pick colour call, will contain a single key color with an [f64; 3] value. zvariant_derive has a neat macro that allows us to de-serialize an a{sv} into a struct.

use zvariant_derive::{SerializeDict, DeserializeDict, TypeDict};

#[derive(SerializeDict, DeserializeDict, Debug, TypeDict)]
struct ColorResponse {
	pub color: [f64; 3],
}

#[derive(Serialize, Deserialize, Debug, Type)]
pub struct Response(pub ResponseType, pub ColorResponse);

fn main() -> zbus::fdo::Result<()> {
	let connection = zbus::Connection::new_session()?;
	let proxy = ScreenshotProxy::new(&connection)?;

	proxy.pick_color("", HashMap::new())?;

	let callback = |r: Response| {
		if r.0 == ResponseType::Success {
			println!("{:#?}", r.1.color);
		}
	};
	loop {
		let msg = connection.receive_message()?;
		let msg_header = msg.header()?;
		if msg_header.message_type()? == zbus::MessageType::Signal
			&& msg_header.member()? == Some("Response")
		{
	  		let response = msg.body::<Response>()?;
	  		callback(response);
	  		break;
		}
	}
	Ok(())
}

Despite zbus being pretty straightforward to use, using portals requires the developer to look at the specifications and figure out which options a portal request can take or the possible responses that you might receive.  Those were the primary reasons I wrote ASHPD.

The ASHPD crate

ASHPD, an acronym of Aperture Science Handheld Portal Device (the name of the portal gun in the Portal game), is a crate that aims to provide a simple API around the portals to consume from Rust.

Let's see how we can pick the colour now using ashpd:

use ashpd::desktop::screenshot::{Color, PickColorOptions, ScreenshotProxy};
use ashpd::{RequestProxy, Response, WindowIdentifier};
use zbus::{self, fdo::Result};

fn main() -> Result<()> {
    let connection = zbus::Connection::new_session()?;
    let proxy = ScreenshotProxy::new(&connection)?;
    
    let request_handle = proxy.pick_color(
            WindowIdentifier::default(),
            PickColorOptions::default()
    )?;

    let request = RequestProxy::new(&connection, &request_handle)?;
    request.on_response(|response: Response<Color>| {
        if let Ok(color) = response {
            println!("({}, {}, {})", color.red(), color.green(), color.blue());
        }
    })?;

    Ok(())
}

The crate proves its usefulness for more complex portals like the file chooser. Here's an example from the docs on how easy it is to ask the user to select a file using the native file chooser, without linking against GTK or Qt:

use ashpd::desktop::file_chooser::{
    Choice, FileChooserProxy, FileFilter, SelectedFiles, OpenFileOptions,
};
use ashpd::{RequestProxy, Response, WindowIdentifier};
use zbus::{fdo::Result, Connection};

fn main() -> Result<()> {
    let connection = Connection::new_session()?;

    let proxy = FileChooserProxy::new(&connection)?;
    let request_handle = proxy.open_file(
        WindowIdentifier::default(),
        "Select an SVG image to minify it",
        OpenFileOptions::default()
            .accept_label("_Open File")
            .modal(true)
            .multiple(true)
            .filter(FileFilter::new("SVG Image").mimetype("image/svg+xml")),
    )?;

    let request = RequestProxy::new(&connection, &request_handle)?;
    request.on_response(|r: Response<SelectedFiles>| {
        println!("{:#?}", r.unwrap());
    })?;

    Ok(())
}

Currently the crate supports all the available portals, though some signals might be missing until proper signal support lands in upstream zbus. I'm very thankful for the help and support I've got from Zeeshan & Marc-André Lureau; my work wouldn't have been possible without zbus & zvariant.

If you would like to contribute, read a bit about how some of the portals work, or just study a more complex DBus API wrapper to learn how to use zbus:

Source code: https://github.com/bilelmoussaoui/ashpd
Crates.io: https://crates.io/crates/ashpd
Documentation: https://docs.rs/ashpd/

Until then, happy hacking!

Proposal for a computer game research topic: the walk/game ratio

I used to play a fair bit of computer games, but in recent years the amount of time spent on games has decreased. Then the lockdown happened, I bought a PS4 and got back into gaming, which was fun. As is often the case, once you get back into something after a break, you find yourself paying attention to things you never noticed before.

In this particular case it was about those parts of games where you are not actually playing the game. Instead you are walking/driving/riding from one place to another, because the actual thing you want to do is somewhere else. A typical example of this is Red Dead Redemption II (and by extension all GTAs). At first, wandering the countryside is fun and immersive, but at some point it becomes tedious and you just wish you were at your destination (fast travel helps, but not enough). Note that this does not apply to extra content. Having a lush open sandbox world that people can explore at their leisure is totally fine. This is about the "grinding by walking" that you have to do in order to complete the game.

This brings up several interesting questions. How much time, on average, do computer games require players to spend travelling from one place to another as opposed to doing the thing the game is actually about (e.g. shooting nazis, shooting zombie nazis, hunting for secret nazi treasure and fighting underwater nazi zombies)? Does this ratio vary over time? Are there differences between genres, design studios and publishers? It turns out that determining these numbers is fairly simple but laborious. I have too many ongoing projects to do this myself, so here's a research outline for anyone to put in their research grant application:

  1. Select a representative set of games.
  2. Go to speedrun.com and download the fastest any-% glitchless run available.
  3. Split the video into separate segments such as "actual gameplay", "watching unskippable cutscenes", "walkgrinding" and "waiting for game to finish loading".
  4. Tabulate times, create graphs.

Hypothetical results

As this research has not been done (that I know of and am able to google up) we don't know what the results would be. That does not stop us from speculating endlessly, so here are some estimates that this research might uncover:
  • Games with a lot of walkgrinding: RDR II, Assassin's Creed series, Metroid Prime.
  • Games with a medium amount of walkgrinding: Control, Uncharted.
  • Games with almost no walkgrinding: Trackmania, Super Meat Boy.
  • Games that require the player to watch the same unskippable cutscenes over and over: Super Mario Sunshine.
  • Newer games require more walkgrinding, simply because game worlds have gotten bigger.

September 10, 2020

power-profiles-daemon: new project announcement

Despite what this might look like, I don't actually enjoy starting new projects: it's a lot easier to clean up some build warnings, or add a CI, than it is to start from an empty directory.

But sometimes needs must, and I've just released version 0.1 of such a project. Below you'll find an excerpt from the README, which should answer most of the questions. Please read the README directly in the repository if you're getting to this blog post more than a couple of days after it was first published.

Feel free to file new issues in the tracker if you have ideas on possible power-saving or performance enhancements. Currently the only supported “Performance” mode interacts with Intel CPUs with P-State support. More hardware support is planned.

TL;DR: this setting will land in the GNOME 3.40 development branch soon, Fedora packages are done, and API docs are available:

From the README:

Introduction

power-profiles-daemon offers to modify system behaviour based upon user-selected power profiles. There are 3 different power profiles, a "balanced" default mode, a "power-saver" mode, as well as a "performance" mode. The first 2 of those are available on every system. The "performance" mode is only available on select systems and is implemented by different "drivers" based on the system or systems it targets.

In addition to those 2 or 3 modes (depending on the system), "actions" can be hooked up to change the behaviour of a particular device. For example, this can be used to disable fast charging for some USB devices when in power-saver mode.

GNOME's Settings and Shell both include interfaces to select the current mode, but they are also expected to adjust the behaviour of the desktop depending on the mode, such as turning the screen off more aggressively after inactivity when in power-saver mode.

Note that power-profiles-daemon does not save the currently active profile across system restarts and will always start with the "balanced" profile selected.

Why power-profiles-daemon

The power-profiles-daemon project was created to help provide a solution for two separate use cases, for desktops, laptops, and other devices running a “traditional Linux desktop”.

The first one is a "Low Power" mode, that users could toggle themselves, or have the system toggle for them, with the intent to save battery. Mobile devices running iOS and Android have had a similar feature available to end-users and application developers alike.

The second use case was to allow a "Performance" mode on systems where the hardware maker would provide and design such a mode. The idea is that the Linux kernel would provide a way to access this mode which usually only exists as a configuration option in some machines' "UEFI Setup" screen.

This second use case is the reason why we didn't implement the "Low Power" mode in UPower, as was originally discussed.

As the daemon would change kernel settings, we would need to run it as root, and make its API available over D-Bus, as has been customary for more than 10 years. We would also design that API to be as easily usable to build graphical interfaces as possible.

Why not...

This section will contain explanations of why this new daemon was written rather than re-using, or modifying an existing one. Each project obviously has its own goals and needs, and those comparisons are not meant as a slight on the project.

As the code bases of both the projects listed below and power-profiles-daemon are ever-evolving, these comments were understood to be correct when made.

thermald

thermald only works on Intel CPUs, and is very focused on allowing maximum performance based on a "maximum temperature" for the system. As such, it could be seen as complementary to power-profiles-daemon.

tuned and TLP

Both projects have similar goals, allowing tweaks to be applied for a variety of workloads that go far beyond the workloads and use cases that power-profiles-daemon targets.

A fair number of the tweaks that could apply to devices running GNOME or another free desktop are either potentially destructive (e.g. some of the SATA power-saving modes resulting in corrupted data), or already work well enough to be put into place by default (e.g. audio codec power-saving), even if we need to disable the power saving on some hardware that reacts badly to it.

Both are good projects to use for the purpose of experimenting with particular settings to see if they'd be something that can be implemented by default, or to put some fine-grained, static, policies in place on server-type workloads which are not as fluid and changing as desktop workloads can be.

auto-cpufreq

It doesn't take user-intent into account, doesn't have a D-Bus interface and seems to want to work automatically by monitoring the CPU usage, which kind of goes against a user's wishes as a user might still want to conserve as much energy as possible under high-CPU usage.

Avoid “Tag: v-3.38.0-fixed-brown-paper-bag”

Over the past couple of (gasp!) decades, I've had my fair share of release blunders: forgetting to clean the tree before making a tarball by hand, forgetting to update the NEWS file, forgetting to push after creating the tarball locally, forgetting to update the appdata file (causing problems on Flathub)...

That's where check-news.sh comes in, to replace the check-news function of the autotools. Ideally you would:

- make sure your CI runs a dist job

- always use a merge request to do releases

- integrate check-news.sh to your meson build (though I would relax the appdata checks for devel releases)
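As a sketch of the last point (the exact argument list is an assumption; check the script's own documentation), hooking check-news.sh into a Meson project could look something like this:

```meson
# meson.build: run check-news.sh when generating the dist tarball,
# so a stale NEWS file fails `meson dist` instead of shipping.
meson.add_dist_script(
  find_program('check-news.sh').path(),
  '@0@'.format(meson.project_version())
)
```

With that in place, the release can only be tagged once the NEWS file actually mentions the version being released.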

September 08, 2020

The Road to Mutter & GNOME Shell 3.38

The past two months were hectic, and many changes and new features landed just in time for the 3.37.90 release. As the release freezes approached (user interface freeze, API freeze, etc), part of the queue of merge requests needed a final decision on whether or not they were ready for the upcoming GNOME 3.38 release.

These two months were also punctuated by not one, but two conference talks related to Mutter and GNOME Shell.

As this is the final development summary before the 3.38 release, we decided to change the format a bit and cover the biggest changes that happened throughout this cycle.

Split Frame Clock

Back when Clutter was an application toolkit, it only had to deal with a single frame clock. However, now being a compositor toolkit, having a single frame clock became a limiting aspect of Clutter, since each monitor is ticking at potentially different rates and times. In practice, that meant that on multi-monitor scenarios, the monitor with the slowest refresh rate would limit other monitors.

A large surgery on Mutter’s fork of Clutter implemented a new frame clock and made each ClutterStageView create its own, which allowed monitors not to interfere with each other.

Picture of two monitors with glxgears running at different refresh rates
Multiple clients running at different refresh rates

You can read more about it in our previous article “Splitting Up The Frame Clock.”

Compositor Bypass

Applications such as games, media players, etc., can run in fullscreen mode. Ideally, when an application is fullscreen, the compositor should be able to avoid needlessly compositing the desktop, since the only visible thing is the client’s fullscreen window. This is usually called “compositor bypass”, or “fullscreen unredirect”.

During this cycle, Mutter gained support for bypassing the compositor on Wayland when possible. When bypassing the compositor, the window’s content is put directly on screen, without any unnecessary compositing. The results vary from hardware to hardware, but in principle this should reduce CPU and GPU usage and, consequently, improve performance.

Screencast Improvements

Another bunch of screencast improvements made its way to Mutter. Notably, screencasting now works correctly even when an application is bypassing the compositor, in which case Mutter copies the contents of the application directly to the screencast stream. Have a look:

In addition to that, window screencasting received a round of bug fixes and improvements.

Finally, GNOME Shell’s built-in screen recorder was split into a separate system service, which should make recording your screen with it smoother.

Customizable App Grid

A big change in GNOME Shell will be part of GNOME 3.38: the ability to customize the app grid. This feature was a long-standing wish, and required a substantial number of under-the-hood improvements and other changes to support it.

It currently supports creating folders by dragging application icons over each other, moving applications from and to folders, and repositioning applications inside the grid.

Folder dialogs also received small visual improvements to match these new changes.

Despite being a feature in itself, the customizable app grid is a necessary step to unlock future design changes that we’ve been researching and investigating for some time now. Stay tuned as we release more information about it!

Calendar Menu Updates

The calendar menu received a handful of visual improvements as well. Calendar events are now displayed below the actual calendar, and sections are more visually prominent.

Screenshot of the update calendar menu
Updated calendar menu

There are more improvements for this menu queued up for the next release. Google Summer of Code student Mariana is continuing to work on grouping notifications, and we expect it to be ready for the release after next.

Parental Controls

Various components of the desktop, including GNOME Shell, Settings, and others, now optionally integrate with the parental controls service that is part of GNOME 3.38.

This allows parents, guardians, supervisors, schools, among others, to limit which applications can be accessed by a particular user.

Other Changes

The shutdown menu was split, and “Restart” is now its own entry, together with “Shutdown”:

Screenshot of the Power Off / Log Out menu
Power Off / Log Out menu

The layout algorithm of the app grid was rewritten, and should improve the icon arrangement in various scenarios. The number of rows and columns is now defined based on the aspect ratio of the monitor and the available space, and the icons themselves grow and shrink accordingly.

The number of icons per page is fixed at 24. That’s because changing the number of icons per page would risk losing the customizations made to the app grid.

In addition to that, both Mutter and GNOME Shell received an influx of under-the-hood optimizations and improvements throughout the cycle, ranging from Cogl, the underlying OpenGL layer that Mutter uses internally, to Clutter and its layout machinery, to Mutter itself and GNOME Shell. Together, these changes bring polish and robustness, and add up to small but noticeable improvements to the experience of using GNOME on a daily basis.

Conferences

GUADEC

GUADEC is the traditional, yearly GNOME conference. During the 2020 edition of this conference, Mutter & GNOME Shell contributors and maintainers had the chance to present the equally traditional “State of the Shell”, with a recap of the past two cycles:

We appreciate the efforts of the GUADEC volunteers that made this year’s edition possible online.

Linux Plumbers Conference

In the 2020 edition of the Linux Plumbers Conference (LPC), we had the opportunity to be part of the debut of the Applications Ecosystem Micro-Conference, together with the Flatpak and Plasma Mobile projects:

We highly recommend watching all three LPC talks in this video; they contain lots of valuable and interesting information.

Congratulations to the LPC team for providing such a great online conference.

Epilogue

This release cycle was stuffed with new features, improvements, cleanups, code refactorings, and bug fixes, and we had a great time working on it. It is exciting to see such a large number of changes be part of a release, and we are proud of the advancements that we maintainers, contributors, and independent volunteers managed to achieve.

The results of our focus on GNOME Shell and Mutter have been really pleasing, but not just to us! Here are some folks who have great thoughts about it:

“The GNOME Shell team continues its steady path of technical and user experience improvements – not an easy thing to do in a codebase with diverse uses and a long history, such as GNOME Shell / Mutter. The team also sets an example for very successful cooperation between multiple companies and the community. Well done!”

— Matthias Clasen, GTK maintainer, Red Hat

“It’s incredible how polished and mature GNOME has become. I’ve used it to convert many new users away from Mac and Windows and I’m still looking forward to the future! Thanks GNOME team!”

— James (purpleidea)

As we celebrate the imminent GNOME 3.38 release, we also look forward to the release after that, and contemplate the many challenges that lie ahead.

Summertime sadness

Another summer is about to end and with it comes the autumn* with its typical leaf loss. There’s beauty to the leaves falling and turning yellow/orange, but there’s also an association with melancholia. The possibilities and opportunities of the summer are perceived to be gone, and the chill of the winter is on the horizon.

The weather changes set in at the same time our Google Summer of Code season comes to an end this year. For a couple of years, I have planned to write this blog post to our GSoC alumni, and considering the exceptional quality of our projects this year, I feel that another GSoC can’t go without me finally taking a shot at writing this.

Outreachy and GSoC have been critical to various free and open source communities such as ours. By empowering contributors to spend a few months working full-time on our projects, we not only benefit from the features that interns implement but also get a chance to recruit talent that will continue pushing our projects forward as generations pass.

“Volunteer time isn’t fungible” is a catchphrase, but there’s lots of truth to it. Many people cannot afford to contribute to FOSS in their free time. Inequality is on the rise everywhere and job security is a privilege. We cannot expect interns to continue delivering with the same bandwidth if they need to provide for themselves and their families, and/or work towards financial stability. Looking at ways to fund contributors is not a new discussion. Our friends at Tidelift and GitHub have been trying to tackle the problem from various fronts, whether by making it easier for people to donate to volunteers or by helping volunteers land full-time jobs, but the truth is that we are still far from sustainability.

So, if you are a mentor, please take some time to reflect on the reasons why your intern won’t continue participating in the project after the internship period ends and see what you can do to help them continue.

Some companies allow their employees to work in FOSS technologies and our alumni have a proven record of their contributions that can definitely help them land entry-level jobs. Therefore referring interns to job opportunities within your company might be a great way to help. Some companies prioritize candidates referred by fellow employees, so your referral can be of great help.

If you are an intern, discuss your next steps with us and with your mentor. Reflect on your personal goals and on whether you want to build a career in FOSS. My personal advice is to be persistent. Lots of doors will close, but possibly the right one will open. You have the great advantage of having GSoC/Outreachy on your resume and a proven record of your contributions out in the open. Expand your portfolio by contributing bits that are important to you, and eventually recognition may come.

All in all, a career in FOSS isn’t guaranteed, and as branches grow in different ways, remember that the trunk still holds them together. Your roots are in GNOME and we are very proud to see our alumni thrive in the world, even far away from us.

*at least if you live outside the tropics, but that’s a topic I want to address on another blog post: the obstacles to a career in FOSS if you are coming from the global south.

On list models

In the previous post, I promised to take a deeper look at list models and what GTK 4 offers in this area. Let’s start by taking a look at the GListModel interface:

struct _GListModelInterface
{
  GTypeInterface g_iface;

  GType    (* get_item_type) (GListModel *list);
  guint    (* get_n_items)   (GListModel *list);
  gpointer (* get_item)      (GListModel *list,
                              guint       position);
};

An important part of implementing the interface is that you need to emit
the ::items-changed signal when required, using the helper function that
GLib has for this purpose:

void g_list_model_items_changed (GListModel *list,
                                 guint       position,
                                 guint       removed,
                                 guint       added)

A few things to note about this interface:

  • It is very minimal; which makes it easy to implement
  • The API is in terms of positions and only deals with changes in list membership—keeping track of changes to the items themselves is up to you

A list model zoo

GTK ships a sizable collection of list model implementations. Under closer inspection, they fall into several distinct groups.

List model construction kit

The first group is what could be called the list model construction kit: models that let you build new models by modifying or combining models that you already have.

The first model in this group, GtkSliceListModel, takes a slice of an existing model, given by an offset and a size, and makes a new model containing just those items. This is useful if you want to present a big list in a paged view—the forward and back buttons will simply increase or decrease the offset by the size. A slice model can also be used to incrementally populate a list, by making the slice bigger over time. GTK is using this technique in some places.

The next model in this group, GtkFlattenListModel, takes several list models and combines them into one. Since this is all about list models, the models to combine are handed to the flatten model in the form of a list model of list models. This is useful whenever you need to combine data from multiple sources, as for example GTK does for the paper sizes in the print dialog.

Paper size list in print dialog
A flattened list

Note that the original models continue to exist behind the flatten model, and their updates will be propagated by the flatten list model, as expected.

Sometimes, you have your data in a list model, but it is not quite in the right form. In this case, you can use a GtkMapListModel to replace every item in the original model with a different one.

Concrete models

GTK and its dependencies include a number of concrete models for the types of data that we deal with ourselves.

The first example here is Pango, whose objects implement the list model interface for their data: PangoFontMap is a list model of PangoFontFamily objects, and PangoFontFamily is a list model of PangoFontFace objects. The font chooser uses these models.

font chooser dialog
A Pango list model

The next example are the GtkDirectoryList and GtkBookmarkList objects that will be used in the file chooser to represent directory contents and bookmarks. An interesting detail about these is that they both need to do IO to populate their content, and they do it asynchronously to avoid blocking the UI for extended times.

The last model in this group is a little less concrete: GtkStringList is a simple list model wrapper around the all-too-common string arrays. An example where this kind of list model will be frequently used is with GtkDropDown. This is so common that GtkDropDown has a convenience constructor that takes a string array and creates the GtkStringList for you:

GtkWidget *
    gtk_drop_down_new_from_strings (const char * const * strings)

Selection

The next group of models extends GListModel with a new interface: GtkSelectionModel. For each item in the underlying model, a GtkSelectionModel maintains the information whether it is selected or not.

We won’t discuss the interface in detail, since it is unlikely that you need to implement it yourself, but the most important points are:

gboolean gtk_selection_model_is_selected (GtkSelectionModel *model,
                                          guint              pos);
GtkBitset *
       gtk_selection_model_get_selection (GtkSelectionModel *model);
So you can get the selection information for an individual item, or as a whole, in the form of a bitset. Of course, there is also a ::selection-changed signal that works in a very similar way to the ::items-changed signal of GListModel.

GTK has three GtkSelectionModel implementations: GtkSingleSelection, GtkMultiSelection and GtkNoSelection, which differ in the number of items that can be simultaneously selected (1, many, or 0).

The GtkGridView colors demo shows a multi-selection in action, with rubberbanding:


You are very likely to encounter selection models when working with GTK’s new list widgets, since they all expect their models to be selection models.

The big ones

The last group of models I want to mention are the ones doing the typical operations you expect in lists: filtering and sorting. The models are GtkFilterListModel and GtkSortListModel. They both use auxiliary objects to implement their operations: GtkFilter and GtkSorter. Both of these have subclasses to handle common cases: sorting and filtering strings or numbers, or using callbacks.

We have spent considerable effort on these two models in the run-up to GTK 3.99, and made them do their work incrementally, to avoid blocking the UI for extended times when working with big models.

The GtkListView words demo shows interactive filtering of a list of 500,000 words:

The leftovers

There are some more list model implementations in GTK that do not fit neatly in any of the above groups, such as GtkTreeListModel, GtkSelectionFilterModel or GtkShortcutController. I’ll skip these today.

Models everywhere

I’ll finish with a brief list of GTK APIs that return list models:

  • gdk_display_get_monitors
  • gtk_widget_observe_children
  • gtk_widget_observe_controllers
  • gtk_constraint_layout_observe_constraints
  • gtk_constraint_layout_observe_guides
  • gtk_file_chooser_get_files
  • gtk_drop_down_get_model
  • gtk_list_view_get_model
  • gtk_grid_view_get_model
  • gtk_column_view_get_model
  • gtk_column_view_get_columns
  • gtk_window_get_toplevels
  • gtk_assistant_get_pages
  • gtk_stack_get_pages
  • gtk_notebook_get_pages

In summary, list models are everywhere in GTK 4. They are flexible and fun; you should use them!

September 07, 2020

v3dv status update 2020-09-07

So here a new update of the evolution of the Vulkan driver for the rpi4 (broadcom GPU).

Features

Since my last update we finished the support for two features. Robust buffer access and multisampling.

Robust buffer access is a feature that allows you to specify that accesses to buffers are bounds-checked against the range of the buffer descriptor. Usually this is used as a debug tool during development, but disabled on release (this is explained in more detail in this ARM guide). So sorry, no screenshot here.

On my last update I mentioned that we had started the support for multisampling, enough to get some demos working. Since then we were able to finish the rest of the multisampling support, and even implemented the optional sample rate shading feature. So now the following Sascha Willems demo is working:

Sascha Willems deferred multisampling demo run on rpi4

Bugfixing

Taking into account that most of the features toward supporting Vulkan core 1.0 are implemented now, a lot of the effort since the last update went into bugfixing, focusing on the specifics of the driver. Our main reference for this is Vulkan CTS, the official Khronos testsuite for Vulkan and OpenGL.

As usual, here are some screenshots from the nice Sascha Willems demos, showing demos that were failing when I wrote the last update and are working now thanks to the bugfixing work.

Sascha Willems hdr demo run on rpi4

Sascha Willems gltf skinning demo run on rpi4

Next

At this point there are no major features pending to complete support for Vulkan core 1.0, so our focus will be on getting all the Vulkan CTS tests to pass.

Previous updates

Just in case you missed any of the updates of the vulkan driver so far:

Vulkan raspberry pi first triangle
Vulkan update now with added source code
v3dv status update 2020-07-01
V3DV Vulkan driver update: VkQuake1-3 now working
v3dv status update 2020-07-31

September 05, 2020

Streaming your desktop

Changes are risky: taking on a new role at a new company with people you have never worked with before and growing a whole org from scratch is hard work that comes with a lot of uncertainties. When I decided that I wanted to try something new and join Amazon to work on the new Kindle site in Madrid, I knew that it was a leap of faith. I’ve met amazing people and I’ve learned a lot about why Amazon is so successful as a consumer-focused company, but this is the first time I’ve joined a company to work on closed source software full time, and that change has taken a bigger toll than I anticipated, so for a while I’ve been looking for a change. Dealing with this on top of raising a 2 year old while moving cities, plus the COVID19 lockdown, hasn’t made things any easier for me and my family either.


Luckily I didn’t have to look much further: when I mentioned to Nacho Casal of gedit/GtkSourceView fame that I was looking into something different, he mentioned that the NICE DCV team within the AWS HPC org was looking for an engineering manager. Suffice it to say, I did the interviews, they went well, and since mid August I’ve been part of this amazing team. I am a peer of Paolo Borelli and I report to Paolo Maggi, both former GNOME/gedit/GtkSourceView maintainers. And to add the cherry on top, my skip-level manager is Ian Colle of Inktank fame and also an ex-RedHatter. The team has made me feel at home.

DCV is a proprietary remote desktop solution optimized for high resolution and low latency use cases. It is an amazing piece of technology and the most competitive remote desktop protocol for the Linux desktop. It builds upon many GNOME technologies, like GTK for our Linux/Windows/macOS clients and GStreamer, and recently the team has been making inroads into adopting Rust. Stack-wise this is a very exciting job for me as it touches pretty much all the areas I care about, and they do their best to open source stuff when they can.

The scope of my team is going to cover mostly the customer-facing deliverables such as the clients, packaging and other release process duties. However, I will be coordinating upstream contributions as well, which is pretty exciting; I am looking forward to working on Wayland integration and other GTK niceties as priorities allow. The team understands the importance of investing in the sustainability of the FOSS projects we rely on, and I want to make sure that remains the case.

Happy hacking!

September 04, 2020

PipeWire Late Summer Update 2020

Wim Taymans

Wim Taymans talking about current state of PipeWire


Wim Taymans did an internal demonstration yesterday for the desktop team at Red Hat of the current state of PipeWire. For those still unaware PipeWire is our effort to bring together audio, video and pro-audio under Linux, creating a smooth and modern experience. Before PipeWire there was PulseAudio for consumer audio, Jack for Pro-audio and just unending pain and frustration for video. PipeWire is being done with the aim of being ABI compatible with ALSA, PulseAudio and JACK, meaning that PulseAudio and Jack apps should just keep working on top of Pipewire without the need for rewrites (and with the same low latency for JACK apps).

As Wim reported yesterday, things are coming together, with the PulseAudio, Jack and ALSA backends all being usable if not 100% feature complete yet. Wim has been running his system with PipeWire as the only sound server for a while now, and things are now in a state where we feel ready to ask the wider community to test and help provide feedback and test cases.

Carla on PipeWire

Carla running on PipeWire

Carla, as shown above, is a popular Jack application that provides, among other things, this patchbay view of your audio devices and applications. I recommend you all click in and take a close look at the screenshot above. That is the Jack application Carla running, and as you see, PulseAudio applications like GNOME Settings and Google Chrome are also showing up now thanks to the unified architecture of PipeWire, alongside Jack apps like Hydrogen. All of this without any changes to Carla or any of the other applications shown.

At the moment Wim is primarily testing using Cheese, GNOME Control center, Chrome, Firefox, Ardour, Carla, vlc, mplayer, totem, mpv, Catia, pavucontrol, paman, qsynth, zrythm, helm, Spotify and Calf Studio Gear. So these are the applications you should be getting the most mileage from when testing, but most others should work too.

Anyway, let me quickly go over some of the highlight from Wim’s presentation.

Session Manager

PipeWire now has a functioning session manager that allows for things like

  • Metadata, system for tagging objects with properties, visible to all clients (if permitted)
  • Load and save of volumes, automatic routing
  • Default source and sink with metadata, saved and loaded as well
  • Moving streams with metadata

Currently this is a simple sample session manager that Wim created himself, but we also have a more advanced session manager called Wireplumber being developed by Collabora, which they developed for use in automotive Linux usecases, but which we will probably be moving to over time also for the desktop.

Human readable handling of Audio Devices

Wim took the code and configuration data in PulseAudio for ALSA Card Profiles and created a standalone library that can be shared between PipeWire and PulseAudio. This library handles ALSA sound card profiles, devices, mixers and UCM (the use case manager, used to configure newer audio chips like the one in the Lenovo X1 Carbon), and lets PipeWire provide the correct information to things like GNOME Control Center or pavucontrol. Using the same code as PulseAudio has the added benefit that when you switch from PulseAudio to PipeWire your devices don’t change names. So everything should look and feel just like PulseAudio from an application perspective. In fact, just below is a screenshot of pavucontrol, the PulseAudio mixer application, running on top of PipeWire without a problem.

PulseAudio Mixer

Pavucontrol, the Pulse Audio mixer on Pipewire

Creating audio sink devices with Jack

PipeWire now allows you to create new audio sink devices with Jack. The example command below creates a PipeWire sink node out of calfjackhost and sets it up so that we can output, for instance, the audio from Firefox into it. At the moment you can do that by running your Jack apps like this:

PIPEWIRE_PROPS="media.class=Audio/Sink" calfjackhost

But eventually we hope to move this functionality into the GNOME Control Center or similar so that you can do this setup graphically. The screenshot below shows us using CalfJackHost as an audio sink, outputing the audio from Firefox (a PulseAudio application) and CalfJackHost generating an analyzer graph of the audio.

Calfjackhost on pipewire

The CalfJackhost being used as an audio sink for Firefox

Creating devices with GStreamer

We can also use GStreamer to create PipeWire devices now. The command below takes the popular Big Buck Bunny animation created by the great folks over at Blender and lets you set it up as a video source in PipeWire. So if you always wanted to play back a video inside Cheese, for instance to apply the Cheese effects to it, you can do that this way without Cheese needing to change to handle video playback. As one can imagine, this opens up the ability to string together a lot of applications in interesting ways to achieve things that there might not be an application for yet. Of course, application developers can also take more direct advantage of this to easily add features to their applications; for instance, I am really looking forward to something like OBS Studio taking full advantage of PipeWire.

gst-launch-1.0 uridecodebin uri=file:///home/wim/data/BigBuckBunny_320x180.mp4 ! pipewiresink mode=provide stream-properties="props,media.class=Video/Source,node.description=BBB"

Cheese paying a video through pipewire

Cheese playing a video provided by GStreamer through PipeWire.

How to get started testing PipeWire

Ok, so after seeing all of this you might be thinking: how can I test all of this stuff out and find out how my favorite applications work with PipeWire? Well, the first thing you should do is make sure you are running Fedora Workstation 32 or later, as that is where we are developing all of this. Once you've done that, you need to make sure you have all the needed pieces installed:

sudo dnf install pipewire-libpulse pipewire-libjack pipewire-alsa

Once that dnf command finishes you run the following to get PulseAudio replaced by PipeWire.


cd /usr/lib64/

sudo ln -sf pipewire-0.3/pulse/libpulse-mainloop-glib.so.0 /usr/lib64/libpulse-mainloop-glib.so.0.999.0
sudo ln -sf pipewire-0.3/pulse/libpulse-simple.so.0 /usr/lib64/libpulse-simple.so.0.999.0
sudo ln -sf pipewire-0.3/pulse/libpulse.so.0 /usr/lib64/libpulse.so.0.999.0

sudo ln -sf pipewire-0.3/jack/libjack.so.0 /usr/lib64/libjack.so.0.999.0
sudo ln -sf pipewire-0.3/jack/libjacknet.so.0 /usr/lib64/libjacknet.so.0.999.0
sudo ln -sf pipewire-0.3/jack/libjackserver.so.0 /usr/lib64/libjackserver.so.0.999.0

sudo ldconfig

(You can also find those commands here.)

Once you run these commands you should be able to run

pactl info

and see this as the first line returned:
Server String: pipewire-0

I do recommend rebooting, to be 100% sure you are on a PipeWire system with everything outputting through PipeWire. Once that is done you are ready to start testing!

Our goal is to use the remainder of the Fedora Workstation 32 lifecycle and the Fedora Workstation 33 lifecycle to stabilize and finish the last major features of PipeWire and then start relying on it in Fedora Workstation 34. So I hope this article will encourage more people to get involved and join us on gitlab and on the PipeWire IRC channel at #pipewire on Freenode.

As we are trying to stabilize PipeWire we are working on it on a bug by bug basis atm, so if you end up testing out the current state of PipeWire then be sure to report issues back to us through the PipeWire issue tracker, but do try to ensure you have a good test case/reproducer as we are still so early in the development process that we can’t dig into ‘obscure/unreproducible’ bugs.

Also if you want/need to go back to PulseAudio you can run the commands here

Also if you just want to test a single application and not switch your whole system over you should be able to do that by using the following commands:

pw-pulse

or

pw-jack

Next Steps

So what are our exact development plans at this point? Well, here is a list in somewhat priority order:

  1. Stabilize – Our top priority now is to make PipeWire so stable that the power users we hope to attract as our first batch of users are comfortable running PipeWire as their only audio server. This is critical to build up a userbase that can help us identify and prioritize remaining issues and ensure that when we do switch Fedora Workstation over to using PipeWire as the default and only supported audio server it will be a great experience for users.
  2. Jackdbus – We want to implement support for the jackdbus API soon, as we know it's an important feature for the Fedora Jam folks, so we hope to get to this in the not too distant future.
  3. Flatpak portal for JACK/audio applications – The future of application packaging is Flatpaks and being able to sandbox Jack applications properly inside a Flatpak is something we want to enable.
  4. Bluetooth – Bluetooth has been supported in PipeWire from the start, but as Wim's focus has moved elsewhere it has gone a little stale. So we are looking at cycling back to it and cleaning it up to get it production ready. This includes proper support for things like LDAC and AAC passthrough, which is currently not handled in PulseAudio. Wim hopes to push an updated PipeWire in Fedora out next week which should at least get Bluetooth into a basic working state, but the big fix will come later.
  5. Pulse effects – Wim has looked at this, but there are some bugs that block data from moving through the pipeline.
  6. Latency compensation – We want complete latency compensation implemented. This is not actually in Jack currently, so it would be a net new feature.
  7. Network audio – PulseAudio style network audio is not implemented yet.

No user-specific XKB configuration in X

This is the continuation from these posts: part 1, part 2, part 3 and part 4.

In the posts linked above, I describe how it's possible to have custom keyboard layouts in $HOME or /etc/xkb that will get picked up by libxkbcommon. This only works for the Wayland stack, the X stack doesn't use libxkbcommon. In this post I'll explain why it's unlikely this will ever happen in X.

As described in the previous posts, users configure keyboard layouts with rules, models, layouts, variants and options (RMLVO). What XKB uses internally though are keycodes, compat, geometry, symbols and types (KcCGST) [1].

There are, effectively, two KcCGST keymap compilers: libxkbcommon and xkbcomp. libxkbcommon can go from RMLVO to a full keymap; xkbcomp relies on other tools (e.g. setxkbmap) which in turn use a utility library called libxkbfile to parse rules files. The X server has a copy of the libxkbfile code. It doesn't use libxkbfile itself but it relies on the header files provided by it for some structs.

Wayland's keyboard configuration works like this:

  • the compositor decides on the RMLVO keyboard layout, through an out-of-band channel (e.g. gsettings, weston.ini, etc.)
  • the compositor invokes libxkbcommon to generate a KcCGST keymap and passes that full keymap to the client
  • the client compiles that keymap with libxkbcommon and feeds any key events into libxkbcommon's state tracker to get the right keysyms
The advantage we have here is that only the full keymap is passed between entities. Changing how that keymap is generated does not affect the client. This, coincidentally [2], is also how Xwayland gets the keymap passed to it and why Xwayland works with user-specific layouts.

X works differently. Notably, KcCGST can come in two forms, the partial form specifying names only and the full keymap. The partial form looks like this:


$ setxkbmap -print -layout fr -variant azerty -option ctrl:nocaps
xkb_keymap {
xkb_keycodes { include "evdev+aliases(azerty)" };
xkb_types { include "complete" };
xkb_compat { include "complete" };
xkb_symbols { include "pc+fr(azerty)+inet(evdev)+ctrl(nocaps)" };
xkb_geometry { include "pc(pc105)" };
};
This defines the component names but not the actual keymap, punting that to the next part in the stack. This will turn out to be the Achilles heel. Keymap handling in the server has two distinct approaches:
  • During keyboard device init, the input driver passes RMLVO to the server, based on defaults or xorg.conf options
  • The server has its own rules file parser and creates the KcCGST component names (as above)
  • The server forks off xkbcomp and passes the component names to stdin
  • xkbcomp generates a keymap based on the components and writes it out as XKM file format
  • the server reads in the XKM format and updates its internal structs
This has been the approach for decades. To give you an indication of how fast-moving this part of the server is: XKM caching was the latest feature added... in 2009.

Driver initialisation is nice, but barely used these days. You set your keyboard layout in e.g. GNOME or KDE and that will apply it in the running session. Or run setxkbmap, for those with a higher affinity to neckbeards. setxkbmap works like this:

  • setxkbmap parses the rules file to convert RMLVO to KcCGST component names
  • setxkbmap calls XkbGetKeyboardByName and hands those component names to the server
  • The server forks off xkbcomp and passes the component names to stdin
  • xkbcomp generates a keymap based on the components and writes it out as XKM file format
  • the server reads in the XKM format and updates its internal structs
Notably, the RMLVO to KcCGST conversion is done on the client side, not the server side. And the only way to send a keymap to the server is that XkbGetKeyboardByName request - which only takes KcCGST, you can't even pass it a full keymap. This is also a long-standing potential issue with XKB: if your client tools uses different XKB data files than the server, you don't get the keymap you expected.

Other parts of the stack do basically the same as setxkbmap which is just a thin wrapper around libxkbfile anyway.

Now, you can use xkbcomp on the client side to generate a keymap, but you can't hand it as-is to the server. xkbcomp can do this (using libxkbfile) by updating the XKB state one-by-one (XkbSetMap, XkbSetCompatMap, XkbSetNames, etc.). But at this point you're at the stage where you ask the server to knowingly compile a wrong keymap before updating the parts of it.

So, realistically, the only way to get user-specific XKB layouts into the X server would require updating libxkbfile to provide the same behavior as libxkbcommon, update the server to actually use libxkbfile instead of its own copy, and updating xkbcomp to support the changes in part 2, part 3. All while ensuring no regressions in code that's decades old, barely maintained, has no tests, and, let's be honest, not particularly pretty to look at. User-specific XKB layouts are somewhat a niche case to begin with, so I don't expect anyone to ever volunteer and do this work [3], much less find the resources to review and merge that code. The X server is unlikely to see another real release and this is definitely not something you want to sneak in in a minor update.

The other option would be to extend XKB-the-protocol with a request that sends a full keymap to the server. Given the inertia involved and that the server won't see more full releases, this is not going to happen.

So as a summary: if you want custom keymaps on your machine, switch to Wayland (and/or fix any remaining issues preventing you from doing so) instead of hoping this will ever work on X. xmodmap will remain your only solution for X.

[1] Geometry is so pointless that libxkbcommon doesn't even implement this. It is a complex format to allow rendering a picture of your keyboard but it'd be a per-model thing and with evdev everyone is using the same model, so ...
[2] totally not coincidental btw
[3] libxkbcommon has been around for a decade now and no-one has volunteered to do this in the years since, so...

What’s new with Glade

It’s been a long time since my last post. After doing the last major UI rework, which included a new workflow and the use of a headerbar instead of a menu bar, I had little free time to work on the project.

Early this year, while on quarantine and in between jobs, I started working on things I had been wanting to do but did not have the time for.

Fix Glade Survey

In January the GNOME infrastructure was migrated to a new server, which broke a small web service running at https://people.gnome.org/~jpu/ that was used to collect Glade’s survey data.

They also added surveys.gnome.org to conduct any GNOME related surveys making my custom service redundant.

So in order to properly fix the survey I made Glade act like a browser and post the data directly to surveys.gnome.org, no need to open a browser!

This means Glade has to download the survey form, parse it to extract a session token and send it as a cookie for it to work!

JavaScript support

A few years ago while working at Endless Mobile I started adding support for JavaScript widgets since we had several widgets implemented in GJS.

Unfortunately, back then a really obscure bug made GJS crash, so I never added support for JS in Glade. To my surprise, this time around GJS did not crash, at least not on Wayland, which led me to find a workaround for X11 and move on.

So in order for Glade to support your JavaScript widgets you will have to:

  • specify gladegjs as your plugin library.
  • set glade_gjs_init as your init function.
  • make sure your catalog name is the same as your JavaScript import library since
    glade_gjs_init() will use this name to import your widgets into the
    interpreter.

gjsplugin.xml

<glade-catalog name="gjsplugin" library="gladegjs"
               domain="glade" depends="gtk+">
  <init-function>glade_gjs_init</init-function>

  <glade-widget-classes>
    <glade-widget-class title="MyJSGrid" name="MyJSGrid"
                        generic-name="mygrid"/>
  </glade-widget-classes>

  <glade-widget-group name="gjs" title="Gjs">
    <glade-widget-class-ref name="MyJSGrid"/>
  </glade-widget-group>
</glade-catalog>

gjsplugin.js

const GObject = imports.gi.GObject;
const Gtk = imports.gi.Gtk;

var MyJSGrid = GObject.registerClass({
  GTypeName: 'MyJSGrid',
  Properties: {
    'string-property': GObject.ParamSpec.string('string-property',
      'String Prop',
      'Longer description',
      GObject.ParamFlags.READWRITE | GObject.ParamFlags.CONSTRUCT,
      'Foobar'),
    'int-property': GObject.ParamSpec.int('int-property',
      'Integer Prop',
      'Longer description',
      GObject.ParamFlags.READWRITE | GObject.ParamFlags.CONSTRUCT,
      0, 10, 5)
    },
  Signals: {'mysignal': {param_types: [GObject.TYPE_INT]}},
}, class MyJSGrid extends Gtk.Grid {
  _init(props) {
    super._init(props);
    this.label = new Gtk.Label ({ visible: true });
    this.add (this.label);
    this.connect('notify::string-property',
                 this._update.bind(this));
    this.connect('notify::int-property',
                 this._update.bind(this));
    this._update();
  }
  _update (obj, pspec) {
    this.label.set_text ('JS Properties\nInteger = ' + 
                         this.int_property + '\nString = \'' +
                         this.string_property + '\'');
  }
});

Save these files in a directory added as an extra catalog path in the preferences dialog, restart, and you should have the new JS classes ready to use.

Composite templates improvements

So far the only way to add custom widgets to Glade has been to write a catalog, which can be tedious, so it is probably not a feature most people use.

So to make things easier I made Glade load any template file saved in any extra catalog path automatically.

This means all you have to do is:

  1. Create your composite template as usual
  2. Save it on a directory
  3. Add the directory to the “Extra Catalog & Templates paths” in preferences
  4. And use your new composite type!
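For example, a composite template file dropped into such a directory might look like this. A sketch only: the MyWidget class name and the file path are made up for illustration.

```shell
# Write a hypothetical GtkBuilder composite template into a directory that
# would then be added under "Extra Catalog & Templates paths" in preferences.
mkdir -p /tmp/my-templates
cat > /tmp/my-templates/my-widget.ui <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <requires lib="gtk+" version="3.24"/>
  <template class="MyWidget" parent="GtkBox">
    <property name="visible">True</property>
    <child>
      <object class="GtkLabel">
        <property name="visible">True</property>
        <property name="label">Hello from a template</property>
      </object>
    </child>
  </template>
</interface>
EOF
```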

These features will be included in the next stable release, and are already available in the nightly flatpak (wiki instructions)

Better deprecations checks

Glade started before GObject introspection, so it keeps version and deprecation information in its catalog files, which makes it hard to keep in sync with Gtk. There was also no way to specify in which Gtk version a property or signal was deprecated.

Unfortunately, even though all this metadata is extracted and stored in Gir files, it is not available in the typelib, which makes using libgirepository not an option for Glade. Instead I decided to make a small script that takes all this information from Gir files and updates Glade’s catalogs directly, making maintaining versioning information way easier!

This means that Glade should catch almost all deprecations and version mismatches even if you are not targeting the latest Gtk version.
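The idea behind that script can be sketched roughly like this. The GIR snippet and the grep are made up for illustration; the real script works on the full Gir XML.

```shell
# GIR files record deprecations as XML attributes on each class/property,
# so extracting them is mostly a matter of walking the XML.
cat > /tmp/sample.gir <<'EOF'
<repository>
  <class name="GtkHBox" deprecated="1" deprecated-version="3.2"/>
  <class name="GtkGrid" version="3.0"/>
</repository>
EOF
# List every class that carries a deprecated-version attribute:
grep -o '<class name="[^"]*"[^>]*deprecated-version="[^"]*"' /tmp/sample.gir
# -> <class name="GtkHBox" deprecated="1" deprecated-version="3.2"
```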

Enjoy!

Juan Pablo

September 02, 2020

Testing applications using Flatpak

There are many ways to test merge requests or development branches for GNOME applications. Developers usually clone the repository and build the code manually, or with the help of some tools. Newcomers are advised to use Builder, or BuildStream. However, if one doesn’t want to build the code and just test something, it might be handy to use a Flatpak bundle instead. Note that this post assumes some basic Flatpak knowledge and that Flatpak is already installed on your system.

Files (Nautilus) and many other GNOME applications build a Flatpak bundle as part of their GitLab CI, and the CI artifacts can be used for testing. For example, to test some merge request, one first has to wait until the CI is finished. Then “View exposed artifact” needs to be expanded, and “Get Flatpak bundle here” can be used to download the artifact for Files (the title differs across projects).

After that the downloaded repo.tar archive needs to be extracted:

$ tar --extract --file repo.tar

Next, the org.gnome.NautilusDevel application can be installed from the local repository (for other applications a different identifier obviously has to be used). This step also requires that the gnome-nightly remote is already added:

$ flatpak remote-add --if-not-exists gnome-nightly https://nightly.gnome.org/gnome-nightly.flatpakrepo
$ flatpak install --reinstall --user ./repo org.gnome.NautilusDevel

Then it should be possible to start the application from the applications overview, but it can be started manually as well:

$ flatpak run org.gnome.NautilusDevel

Note that sometimes it is necessary to delete the application data, for example when testing changes that are not backward compatible:

$ rm -r ~/.var/app/org.gnome.NautilusDevel

See Flatpak website to learn more about Flatpak.

September 01, 2020

Shelved Wallpapers 2

Yet again the iterations to produce the default and complementary wallpapers for 3.38 generated some variants that didn’t make the cut, but that I’d like to share with fellow gnomies.

User-specific XKB configuration - putting it all together

This is the continuation from these posts: part 1, part 2, part 3

This is the part where it all comes together, with (BYO) fireworks and confetti, champagne and hoorays. Or rather, this is where I'll describe how to actually set everything up. It's a bit complicated because while libxkbcommon does the parsing legwork now, we haven't actually changed the other APIs and the file formats, which are still 1990s-style nerd cool and require significant experience in CS [1] to understand what goes where.

The below relies on software using libxkbcommon and libxkbregistry. At the time of writing, libxkbcommon is used by all mainstream Wayland compositors but not by the X server. libxkbregistry is not yet used because I'm typing this before we had a release for it. But at least now I have a link to point people to.

libxkbcommon has an xkbcli-scaffold-new-layout tool that creates the template files shown below. At the time of writing, this tool must be run from the git repo build directory; it is not installed.

I'll explain here how to add the us(banana) variant and the custom:foo option, and I will optimise for simplicity and brevity.

Directory structure

First, create the following directory layout:


$ tree $XDG_CONFIG_HOME/xkb
/home/user/.config/xkb
├── compat
├── keycodes
├── rules
│   ├── evdev
│   └── evdev.xml
├── symbols
│   ├── custom
│   └── us
└── types
If $XDG_CONFIG_HOME is unset, fall back to $HOME/.config.
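The layout above can be created in one go; here is a small shell sketch (the scaffold tool mentioned earlier can also generate it for you):

```shell
# Create the per-user XKB tree described above. rules/ and symbols/ hold
# the files filled in below; the other directories may stay empty.
XKB_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/xkb"
mkdir -p "$XKB_DIR/compat" "$XKB_DIR/keycodes" "$XKB_DIR/rules" \
         "$XKB_DIR/symbols" "$XKB_DIR/types"
touch "$XKB_DIR/rules/evdev" "$XKB_DIR/rules/evdev.xml"
touch "$XKB_DIR/symbols/custom" "$XKB_DIR/symbols/us"
```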

Rules files

Create the rules file and add an entry to map our custom:foo option to a section in the symbols/custom file.


$ cat $XDG_CONFIG_HOME/xkb/rules/evdev
! option = symbols
custom:foo = +custom(foo)

// Include the system 'evdev' file
! include %S/evdev
Note that no entry is needed for the variant, that is handled by wildcards in the system rules file. If you only want a variant and no options, you technically don't need this rules file.

Second, create the xml file used by libxkbregistry to display your new entries in the configuration GUIs:


$ cat $XDG_CONFIG_HOME/xkb/rules/evdev.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xkbConfigRegistry SYSTEM "xkb.dtd">
<xkbConfigRegistry version="1.1">
<layoutList>
<layout>
<configItem>
<name>us</name>
</configItem>
<variantList>
<variant>
<configItem>
<name>banana</name>
<shortDescription>banana</shortDescription>
<description>US(Banana)</description>
</configItem>
</variant>
</variantList>
</layout>
</layoutList>
<optionList>
<group allowMultipleSelection="true">
<configItem>
<name>custom</name>
<description>custom options</description>
</configItem>
<option>
<configItem>
<name>custom:foo</name>
<description>This option does something great</description>
</configItem>
</option>
</group>
</optionList>
</xkbConfigRegistry>
Our variant needs to be added as a layoutList/layout/variantList/variant, the option to the optionList/group/option. libxkbregistry will combine this with the system-wide evdev.xml file in /usr/share/X11/xkb/rules/evdev.xml.

Overriding and adding symbols

Now to the actual mapping. Add a section to each of the symbols files that matches the variant or option name:


$ cat $XDG_CONFIG_HOME/xkb/symbols/us
partial alphanumeric_keys modifier_keys
xkb_symbols "banana" {
name[Group1]= "Banana (us)";

include "us(basic)"

key <CAPS> { [ Escape ] };
};
With this, the us(banana) layout will be a US keyboard layout but with the CapsLock key mapped to Escape. What about our option? Mostly the same; let's map the tilde key to nothing:

$ cat $XDG_CONFIG_HOME/xkb/symbols/custom
partial alphanumeric_keys modifier_keys
xkb_symbols "foo" {
key <TLDE> { [ VoidSymbol ] };
};
A note here: NoSymbol means "don't overwrite it" whereas VoidSymbol is "map to nothing".
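To see that difference side by side, here is a throwaway sketch; the "demo" section name and the file path are made up for illustration:

```shell
# NoSymbol keeps whatever the base layout assigns to the key,
# VoidSymbol actively maps the key to nothing.
cat > /tmp/xkb-demo-symbols <<'EOF'
partial alphanumeric_keys
xkb_symbols "demo" {
    key <AE01> { [ NoSymbol ] };   // keep the base layout's mapping for the "1" key
    key <TLDE> { [ VoidSymbol ] }; // disable the tilde key entirely
};
EOF
grep -c 'key <' /tmp/xkb-demo-symbols
# -> 2
```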

Notes

You may notice that the variant and option sections are almost identical. XKB doesn't care about variants vs options, it only cares about components to combine. So the sections do what we expect of them: variants include enough other components to make a full keyboard layout, while options merely define a few keys so they can be combined with layouts (variants). Due to how the lookups work, you could load the option template as layout custom(foo).

For the actual documentation of keyboard configuration, you'll need to google around, there are quite a few posts on how to map keys. All that has changed is where from and how things are loaded but not the actual data formats.

If you wanted to install this as system-wide custom rules, replace $XDG_CONFIG_HOME with /etc.

The above is a replacement for xmodmap. It does not require a script to be run manually to apply the config, the existing XKB integration will take care of it. It will work in Wayland (but as said above not in X, at least not for now).

A final word

Now, I fully agree that this is cumbersome, clunky and feels outdated. This is easy to fix, all that is needed is for someone to develop a better file format, make sure it's backwards compatible with the full spec of the XKB parser (the above is a fraction of what it can do), that you can generate the old files from the new format to reduce maintenance, and then maintain backwards compatibility with the current format for the next ten or so years. Should be a good Google Decade of Code beginner project.

[1] Cursing and Swearing

August 31, 2020

Friends of GNOME Update August 2020

Welcome to the August 2020 Friends of GNOME Newsletter!

We’re going to be doing some rebranding soon, including looking for a new name. Our goal is to cover news and activities from the GNOME Foundation, as well as linking out to interesting GNOME news. Feel free to contact us with any name ideas you may have!

A beach, with blue water, brown sand, and a yellow beach umbrella.
“Llegó el verano – Summer is here” by GViciano is licensed under CC BY-SA 2.0

GNOME on the Road

We had an amazing GUADEC last month. We had talks, workshops, and Birds of a Feather sessions. Topics ranged from the role of technology in education, to best practices for teamwork around building free software, to GNOME-specific technical discussions. The videos are now online.

GNOME is people and the community really came through at GUADEC, spending lots of social time together, taking advantage of the platform we used for GUADEC 2020.

We’re actively working on the Linux App Summit and GNOME.Asia. The [CFP for the Linux App Summit is currently open][5].

[5]: https://www.gnome.org/news/2020/08/linux-app-summit-2020-call-for-talks-now-open/

New Infrastructure for GNOME

We installed instances of Big Blue Button (video chat software) and Indico (event software) for GUADEC. These have been made available for general use to GNOME Foundation members and for Foundation activities.

Community Engagement Challenge Winners Announced

The Community Engagement Challenge is about coming up with new ways to get people involved in free software and GNOME. The Challenge is set up in phases – at the end of each phase winners are selected for the next stage and supplied with funding to work on their project. We recently announced phase one winners!

These twenty projects are all excellent and quite different from one another. Some are based in organizations, while others are being created fresh by one person. We look forward to seeing how they develop over phase two!

GNOME is Looking For Fundable Projects

We’re looking at trying something new! A number of projects within GNOME are stuck at a point where funding could make a big difference. We’re looking to identify those projects and the people working on them in order to help them take the next steps they need to take. If you know of such a project, please add it to the Fundable Projects page.

In general, we’re in the early stages of starting a Fundraising Working Group. If you’re interested in getting involved, we’d love to hear from you!

Thank You!

Thank you so much for supporting the GNOME Foundation! We appreciate everything you do for us!

GStreamer 1.18 supports the Universal Windows Platform

tl;dr: The GStreamer 1.18 release ships with UWP support out of the box, with official GStreamer binary releases for it. Try out the 1.18.0 release and let us know how it goes! There's also an example gstreamer app for UWP that showcases OpenGL support (via ANGLE), audio/video capture, hardware codecs, and WebRTC.

Short History Lesson

 
Last year at the GStreamer Conference in Lyon, I gave a talk (slides) about how “Firefox Reality” for the Microsoft HoloLens 2 mixed-reality headset is actually Servo, and it uses GStreamer for all media handling: WebAudio, HTML5 Video, and WebRTC.

I also spoke about the work we at Centricular did to port GStreamer to the HoloLens 2. The HoloLens 2 uses the new development target for Windows Store apps: the Universal Windows Platform. The majority of win32 APIs have been deprecated, and apps have to use the new Windows Runtime, which is a language-agnostic API written from the ground up.

So the majority of work went into making sure that Win32 code didn't use deprecated APIs (we used a bunch of them!), and making sure that we could build using the UWP toolchain. Most of that involved two components:
  • GLib, a cross-platform low-level library / abstraction layer used by GNOME (almost all our win32 code is in here)
  • Cerbero, the build aggregator used by GStreamer to build binaries for all platforms supported: Android, iOS, Linux, macOS, Windows (MSVC, MinGW, UWP)
The target was to port the core of GStreamer, and those plugins with external dependencies that were needed to do playback in <audio> and <video> tags. This meant that the only external plugin dependency we needed was FFmpeg, for the gst-libav plugin. All this went well, and Firefox Reality successfully shipped with that work.

Upstreaming and WebRTC

 
Building upon that work, for the past few months we've been working on adding support for the WebRTC plugin, and also upstreaming as much of the work as possible. This involved a bunch of pieces:
  1. Use only OpenSSL and not GnuTLS in Cerbero because OpenSSL supports targeting UWP. This also had the advantage of moving us from two SSL stacks to one.
  2. Port a bunch of external optional dependencies to Meson so that they could be built with Meson, which is the easiest way for a cross-platform project to support UWP. If your Meson project builds on Windows, it will build on UWP with minimal or no build changes.
  3. Rebase the GLib patches that I didn't find the time to upstream last year on top of 2.62, split into smaller pieces that will be easier to upstream, update for new Windows SDK changes, remove some of the hacks, and so on.
  4. Rework and rewrite the Cerbero patches I wrote last year that were in no shape to be upstreamed.
  5. Ensure that our OpenGL support continues to work using Servo's ANGLE UWP port
  6. Write a new plugin for audio capture called wasapi2, great work by Seungha Yang.
  7. Write a new plugin for video capture called mfvideosrc as part of the media foundation plugin which is new in GStreamer 1.18, also by Seungha.
  8. Write a new example UWP app to test all this work, also done by Seungha! 😄
  9. Run the app through the Windows App Certification Kit
And several miscellaneous tasks and bugfixes that we've lost count of.

Our highest priority this time around was making sure that everything can be upstreamed to GStreamer, and it was quite a success! Everything needed for WebRTC support on UWP has been merged, and you can use GStreamer in your UWP app by downloading the official GStreamer binaries starting with the 1.18 release.

On top of everything in the above list, thanks to Seungha, GStreamer on UWP now also supports:

Try it out!

 
The example gstreamer app I mentioned above showcases all this. Go check it out, and don't forget to read the README file!
 

Next Steps

 
The most important next step is to upstream as many of the GLib patches we worked on as possible, and then spend time porting a bunch of GLib APIs that we currently stub out when building for UWP.

Other than that, enabling gst-libav is also an interesting task since it will allow apps to use FFmpeg software codecs in their gstreamer UWP app. People should use the hardware accelerated d3d11 decoders and mediafoundation encoders for optimal power consumption and performance, but sometimes it's not possible because codec support is very device-dependent. 

Parting Thoughts

 
I'd like to thank Mozilla for sponsoring the bulk of this work. We at Centricular greatly value partners that understand the importance of working with upstream projects, and it has been excellent working with the Servo team members, particularly Josh Matthews, Alan Jeffrey, and Manish Goregaokar.

In the second week of August, Mozilla restructured and the Servo team was one of the teams that was dissolved. I wish them all the best in their future endeavors, and I can't wait to see what they work on next. They're all brilliant people.

Thanks to the forward-looking and community-focused approach of the Servo team, I am confident that the project will figure things out to forge its own way forward, and for the same reason, I expect that GStreamer's UWP support will continue to grow.

First Lenovo laptop with Fedora now available on the web!

This weekend the X1 Carbon with Fedora Workstation went live in North America on Lenovo’s webstore. This is a big milestone for us and for Lenovo, as it’s the first time Fedora ships pre-installed on a laptop from a major vendor, and the first time the world’s largest laptop maker ships premium laptops with Linux directly to consumers. Currently only the X1 Carbon is available, but more models are on the way and more geographies will be added soon. As a sidenote, the X1 Carbon and more have actually been available from Lenovo for a couple of months now; it is just the web sales that went online now. So if you are an IT department buying Lenovo laptops in bulk, be aware that you can already buy the X1 Carbon and the P1, for instance, through the direct-to-business sales channel.

Also, as a reminder for people looking to deploy Fedora laptops or workstations in numbers, be sure to check out Fleet Commander, our tool for helping you manage configurations across such a fleet.

I am very happy with the work that has been done to get to this point, both by Lenovo and by the engineers on my team here at Red Hat. For example, Lenovo made sure to get all of their component makers to ramp up their Linux support, and we have been working with them both to help them get started writing drivers for Linux and to add infrastructure they could plug their hardware into. We also worked hard to get them all set up on the Linux Vendor Firmware Service, so that you can be assured of getting updated firmware not just for the laptop itself, but also for its components.

We also have a list of improvements that we are working on to ensure you get the full benefit of your new laptops with Fedora and Lenovo, including improved power management features: power consumption profiles that include a high performance mode for some laptops, allowing them to run even faster when on AC power, and at the other end a low power mode to maximize battery life. As part of that we are also working on adding lap detection support, so that we can ensure your laptop doesn’t run too hot in your lap and burn you, and that the radio antennas aren’t running too strong when that close to your body.

So I hope you decide to take the leap and get one of the great developer laptops we are doing together with Lenovo. This is a unique collaboration between the world’s largest laptop maker and the world’s largest Linux company. What we are doing here isn’t just a minimal hardware enablement effort, but a concerted effort to evolve Linux as a laptop operating system, done in a proper open source way. This is the culmination of our work over the last few years: creating the LVFS, adding Thunderbolt support to Linux, improving fingerprint reader support in Linux, supporting HiDPI screens and HiDPI mice, creating the possibility of a secure desktop with Wayland, working with NVidia to ensure that Mesa and the NVidia driver can co-exist through glvnd, and creating Flatpak to bring the advantages of containers to the desktop in a vendor-neutral way. So when you buy a Lenovo laptop with Fedora Workstation, you are not just getting a great system, but you are also supporting our efforts to take Linux to the next level, something which I think we are truly the only Linux vendor with the scale and engineering ability to do.

Of course we are not stopping here, so let me also use this chance to talk a bit about some of our other efforts.

Toolbox
Containers are popular for deploying software, but a lot of people are also discovering that they are an incredible way to develop software, even if that software is not going to be deployed as a Flatpak or Kubernetes container. The term often used for containers used as a development tool is pet containers, and with the Toolbox project we are aiming to create the best tool possible for developers to work with pet containers. Toolbox allows you to always have a clean environment to work in, which you can change to suit each project you work on, however you like, without affecting your host system. For instance, if you need to install a development snapshot of Python you can do that inside your Toolbox container and be confident that other parts of your desktop will not start crashing due to the change. And when you are done with your project and don’t want that toolbox around anymore, you can easily delete it without having to spend time figuring out which of the packages you installed can now be safely uninstalled from your host system, or just not bothering and having your host get bloated over time with stuff you are not actually using anymore.

One big advantage we have at Red Hat is that we are a major contributor to container technologies across the stack. We are a major participant in the Open Container Initiative and we are, alongside Google, the biggest contributor to the Kubernetes project. This includes having created a set of container tools called Podman. So when we started prototyping Toolbox we could base it on podman and get access to all the power and features that podman provides, but at the same time make them easier to use and consume from your developer laptop or workstation.

Our initial motivation was also driven by the fact that for image based operating systems like Fedora Silverblue and Fedora CoreOS, where the host system is immutable, you still need some way to install packages and do development. But we quickly realized that the pet container development model is superior to the old ‘on host’ model even if you are using a traditional package based system like Fedora Workstation. So we started out by prototyping the baseline functionality, writing it as a shell script to quickly test out our initial ideas. Of course, as Toolbox picked up in popularity we realized we needed to transition quickly to a proper development language so that we wouldn’t end up with an unmaintainable mess written in shell, and thus Debarshi Ray and Ondřej Míchal have recently completed the rewrite to Go (note: Go was chosen to make it easier for the wider container community to contribute, since almost all container tools are written in Go).

Leading up to Fedora Workstation 33 we are trying to figure out a few things. One is how we can give you access to a RHEL-based toolbox through the Red Hat Developer Program in an easy and straightforward manner, and this is another area where pet container development shines: you can set up your pet container to run a different Linux version than your host. So you can use Fedora to get the latest features for your laptop, but target RHEL inside your Toolbox to get an easy and quick deployment path to your company’s RHEL servers. I would love it if we could extend this even further as we go along, for instance to let you set up a Steam runtime toolbox to do game development targeting Steam.
Setting up a RHEL toolbox is already technically possible, but requires a lot more knowledge and understanding of the underlying technologies than we would wish.
The second thing we are looking at is how we deal with graphical applications in the context of these pet containers. While you can install, for instance, Visual Studio Code inside the toolbox container and launch it from the command line, we realize that is not a great model for how you interact with GUI applications. At the moment the only IDE that is set up to run on the host but interact properly with containers is GNOME Builder, but there are a lot more IDEs people are using, and thus we want to find ways to make them work better with toolbox containers beyond launching them from the command line inside the container. There are some extensions available for things like Visual Studio Code that are starting to improve things (those extensions are not created by us, but look at solving a similar problem), and we want to see how we can help provide a polished experience here. Over time we believe the pet container model of development is so good that most IDEs will follow in GNOME Builder’s footsteps and make in-container development a core part of the feature set, but for now we need to figure out a good bridging strategy.

Wayland – headless and variable refresh rate.
Since switching to Wayland we have continued to improve how GNOME works under Wayland, to remove any major feature regressions from X11 and to start taking advantage of the opportunities that Wayland gives us. One of the issues Jonas Ådahl has been hard at work on recently is ensuring we have headless support for running GNOME on systems without a screen. We know that there are a lot of sysadmins, for instance, who want to be able to launch a desktop session on their servers to use as a tool to test and debug issues. These desktops are then accessed through tools such as VNC or Nice DCV. As part of that work he also made sure we can deal with multiple connected monitors that have different refresh rates. Before that fix you would get the lowest common denominator between your screens, but now if you have, for instance, a 60Hz monitor and a 75Hz monitor, they will function independently of each other and run at their maximum refresh rates. With the variable refresh rate work now landed upstream, Jonas is racing to get the headless support finished and landed in time for Fedora Workstation 33.

Linux Vendor Firmware Service
Richard Hughes is continuing his work on moving the LVFS forward, having spent time this cycle working with the Linux Foundation to ensure the service can scale even better. He is also continuously onboarding new vendors and helping existing vendors use LVFS for even more things. LVFS has become so popular that we are now getting reports of major hardware companies, who up to now haven’t been too interested in the LVFS, being told by their customers to start using it or they will switch suppliers. So expect the rapid growth of vendors joining the LVFS to keep increasing. It is also worth noting that many of the vendors who are already set up on LVFS are steadily working on increasing the number of systems they support on it, and pushing their suppliers to do the same. Also, for enterprise use of LVFS firmware, Marc Richter wrote an article on access.redhat.com about how to use LVFS with Red Hat Satellite. Satellite, for those who don’t know it, is Red Hat’s tool for managing and keeping servers up to date and secure. For large companies, having their machines, especially servers, access LVFS directly is not a wanted behaviour, so now they can use Satellite to provide a local repository of the LVFS firmware.

PipeWire
One of the changes we have been working on that I am personally extremely excited about is PipeWire. For those of you who don’t know it, PipeWire is one of our major swamp-draining efforts, which aims to bring together audio, pro-audio and video under Linux and provide a modern infrastructure for us to move forward. It does so while being ABI compatible with both Jack and PulseAudio, meaning that applications will not need to be ported to work with PipeWire. We have been using it for a while for video already, to handle screen capture under Wayland and to allow Flatpak containers access to webcams in a secure way, and Wim Taymans has been working tirelessly on moving the project forward over the last 6 months, focusing a lot on fixing corner cases in the Jack support and ramping up the PulseAudio support. We had hoped to start wide testing of the audio parts of PipeWire in Fedora Workstation 32, but since a key advantage PipeWire brings is not just replacing Jack or PulseAudio, but ensuring the two use cases co-exist and interact properly, we didn’t want to start asking people to test until we got the PulseAudio support close to production ready. Wim has been making progress by leaps and bounds recently, and while I can’t 100% promise it yet, we do expect to roll out the audio bits of PipeWire for more widescale testing in Fedora Workstation 33, with the goal of making it the default for Fedora Workstation 34 or, more likely, Fedora Workstation 35.
Wim is doing an internal demo this week, so I will try to put out a blog post talking about that later in the week.

Flatpak – incremental updates
One of the features we added to Flatpak was the ability to distribute applications as Open Container Initiative (OCI) compliant containers. The reason for this was that as companies, Red Hat included, built infrastructure for hosting and distributing containers, we could also use that infrastructure for Flatpaks. This is a great advantage for a variety of reasons, but it had one large downside compared to the traditional way of distributing Flatpaks (as OSTree images): each update comes as one single large download, as opposed to the small incremental updates that OSTree provides.
This is why, if you compared the same application shipping from Flathub, which uses OSTree, versus from the Fedora container registry, you would quickly notice that you get much smaller updates from Flathub. For Kubernetes containers this hasn't been considered a huge problem, as their main use case is copying containers around on a high-speed network inside your cloud provider, but for desktop users it is annoying. So Alex Larsson and Owen Taylor have been working on a way to do incremental updates for OCI/Docker/Kubernetes containers too, which not only means we can get very close to the Flathub update size in the Fedora Container Catalog but, since it was implemented in a way that works for all OCI/Kubernetes containers, also brings incremental update functionality to those. That matters especially as such containers make their way into edge computing, where update sizes matter just like they do on the desktop.
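
The idea behind such incremental updates can be sketched as a content-addressed comparison: only objects whose hashes differ between the installed image and the new image need to be fetched. This is a hypothetical illustration of the principle, not the actual OSTree or OCI delta format:

```python
import hashlib

def object_id(data: bytes) -> str:
    """Content-address an object by its SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()

def delta_to_fetch(old_image: dict, new_image: dict) -> set:
    """Return the set of paths whose content must be downloaded to
    turn old_image into new_image (both map path -> file bytes)."""
    old_ids = {path: object_id(data) for path, data in old_image.items()}
    return {
        path
        for path, data in new_image.items()
        if old_ids.get(path) != object_id(data)
    }

# Only the changed file needs to be transferred, not the whole image.
old = {"/usr/bin/app": b"v1", "/usr/share/icon.png": b"icon"}
new = {"/usr/bin/app": b"v2", "/usr/share/icon.png": b"icon"}
print(delta_to_fetch(old, new))  # {'/usr/bin/app'}
```

In practice the real formats operate on layers and compressed chunks rather than whole files, but the update size ends up proportional to what actually changed rather than to the full image.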

Hangul input under Wayland
Red Hat, like Lenovo, targets most of the world with our products and projects. This means that we want them to work great even for people who don't use English or another European language. To achieve this, my group at Red Hat has a team dedicated to ensuring that not just Linux, but all Red Hat products, work well for international users. That team, led by Jens Petersen, is distributed around the globe with engineers in Japan, China, India, Singapore and Germany, and contributes to a lot of different things like font maintenance, input method development, i18n infrastructure and more.
One thing this team recently discovered was a problem with the support for Korean input under Wayland. So Peng Wu, Takao Fujiwara and Carlos Garnacho worked together on a series of patches for ibus and GNOME Shell to ensure that Fedora Workstation on Wayland works perfectly for Korean input. I wanted to highlight this effort because, while I don't usually mention efforts with such a regional impact in my blog posts, this is a critical part of keeping Linux viable and usable across the globe. Ensuring that you can use your computer in your own language is something we feel is important and want to enable, and it is also an area where I believe Red Hat is investing more than any other vendor out there.

GLX on EGL
We meet with NVidia on a regular basis to discuss topics of shared interest, and one thing we have been looking at for a while now is the best way to support the NVidia binary driver under XWayland. As part of that, Adam Jackson has been working on a research project to see how feasible it would be to run GLX applications on top of EGL. As one might imagine, EGL doesn't have a 1:1 match with the GLX APIs, but based on what we have seen so far it should be close enough to get things going (Adam already got glxgears running :). The goal is to have an initial version that works OK, and then, in collaboration with NVidia, evolve it into a great solution for even the most demanding OpenGL/GLX applications. Currently the code causes an extra memory copy compared to running GLX natively, but this is something we think can be resolved in collaboration with NVidia. Of course this is still an early-stage effort which Adam and NVidia are currently looking at, so there is still a chance we will hit a snag and have to go back to the drawing board. For those interested, you can take a look at this Mesa merge request to see the current state.

Where to next? GSoC 2020 Final Report

It has been 4 months since my acceptance into the GSoC program; it really is true that time flies when you're having fun. Google Summer of Code was one of the best experiences I've had. I basically didn't have any knowledge about the open source world, but it helped me get started, learn more about it and begin contributing to awesome communities like GNOME and EteSync. I am really glad about the experience I've gained and the people I met along the journey.

What was the project about?

My project during Google Summer of Code 2020 with the GNOME organization and EteSync was to implement an EteSync module for Evolution Data Server and Evolution, enabling EteSync users to add their account to Evolution and manage their data from there.

EteSync is a secure, end-to-end encrypted and FLOSS sync solution for your contacts, calendars and tasks. It helps keep your data safe so that only you can see it; anyone who has access to the servers cannot actually read your data.

Evolution is a personal information management application that provides integrated mail, calendaring and address book functionality, along with tasks and memo-taking features. You can add multiple different accounts to Evolution and handle all of them from one place.

See the module in action

You can easily follow the guide in here to see how to install the module and use it or just watch it in action :p.

Want the code? You can get it from this repo.

Module Features

The module can do everything you would expect it to as a user.

  1. Add an account to Evolution.
  2. Initialize new accounts (set encryption password for new users).
  3. Fetch your data (Address books, Calendars and Task lists).
  4. Create, delete and modify data to existing journals (Contacts, Events and Tasks).
  5. Handle journals in your account.
    • Create
    • Delete
    • Modify (Rename and Change journal color)

About the module

The module is ready; it was created in its own repo, which was approved by the maintainers and is now in the process of moving under the GNOME/Evolution umbrella.

The module uses the EteSync C API. The module structure is somewhat self-explanatory; the main code is in the src folder.

  • Src contains:
    1. addressbook
      • Contains the address-book back-end files, which hold the main functions implementing the fetching and reading functionality for EteSync address books.
    2. calendar
      • Contains the calendar back-end files, which hold the main functions implementing the fetching and reading functionality for EteSync calendars.
    3. common
      • Contains the common files used throughout the code, including e-etesync-connection.c, a common object for handling the connection with the EteSync server: creating, deleting and modifying entries (and likewise journals), plus other important functions for authentication and such.
    4. credentials
      • This folder is more about the user experience with Evolution, as it contains the code for creating the credentials dialog and the “initializing new users” dialog.
      • It also handles storing the user credentials in the keyring, a safe place for all your important data, so they can be used for authentication when needed.
    5. evolution
      • Contains files for integrating with Evolution; the most important one is e-etesync-config-lookup.c, which is used when first adding an account to Evolution. It checks whether the account exists on the server and adds the required collection source file when the user chooses to.
    6. registry
      • Contains the back-end functionality for handling the collection account, like fetching the existing journals and creating and deleting journals.

Challenges

This part is important, as I still remember the struggles I had at first. I remember being a bit worried about finishing the project, but in the open source world you'll find all the help you need from the communities. Thanks to my mentors (Milan and Tom) and the communities of both GNOME and EteSync, I had all the support and help I needed to achieve what I did. It doesn't matter if you know exactly how to do something; what matters is knowing what you want to do and having the passion for it. Follow that and learn along the way until you reach your target.

Things were a little difficult at first: building Evolution and Evolution Data Server took me two weeks. My mistake was that I was too nervous to ask for help on the Evolution IRC. Maybe I thought “how can I ask such a stupid question”, but actually it wasn't one. I could have asked and built it in much less time, as the community is very friendly, but I learnt that in time :P.

Getting used to the code also required time; that's why you are required to contribute before sending a proposal to GSoC, as it makes the job easier when the actual coding period starts.

Other than the struggles mentioned above, the rest were related to the project itself, and that's where you gain the experience.

What can be done next?

The project can be upgraded to use EteSync 2.0 protocol once it’s released. This will come with new improvements and a better user experience for EteSync users.

Attribution

I would like to thank Milan Crha and Tom Hacohen for their assistance and cooperation during this project; I couldn't have done it without their help. Thanks also to GNOME and EteSync for this opportunity to work with awesome communities :D.

Project Summary

Welcome to last project post!

Following the GSoC Work Product Submission Guide (https://developers.google.com/open-source/gsoc/help/work-product), I wrote this post to summarize the work done during GSoC '20 on the gnome-battery-bench project.

How to compile and test the project?

The first blog post should be helpful for getting set up (https://batterybenchgsoc2020.wordpress.com/2020/06/02/18/). To execute tests, follow the guide inside the README.

Notice there are two repositories:

Nothing is merged yet, because it still has to be reviewed by my mentor (@gicmo) and the GNOME community.

Project proposal

For the project proposal I solved the “Do not start testing if AC power is available” issue (https://gitlab.gnome.org/GNOME/gnome-battery-bench/-/issues/6). The pull request (PR) for this issue (https://gitlab.gnome.org/GNOME/gnome-battery-bench/-/merge_requests/4) checks whether AC power is online before starting a test.

Adding features

There are a few issues/feature requests on the project's GitLab. At the beginning of the summer I implemented “Optional timestamps log output” (https://gitlab.gnome.org/GNOME/gnome-battery-bench/-/issues/3). The PR for this feature (https://gitlab.gnome.org/GNOME/gnome-battery-bench/-/merge_requests/6) adds a timestamp option to the gbb command, so that date and time are available in the program output.

Main summer milestone: test recording working on Wayland

I highly recommend reading the blog post about libevdev to understand how it works: https://batterybenchgsoc2020.wordpress.com/2020/07/05/project-main-change-port-to-wayland/

We implemented mouse and keyboard device recognition and recording; this is easy to see in the code changes. However, we had issues getting the mouse position. We had misunderstood EV_ABS, which reports a position on the device, not on the screen, so we switched to EV_REL, which returns relative position movements. But we need the position to be the same when recording and when playing, so our idea is to move the mouse to (0,0) first; that wasn't achieved during the summer project. Playing tests on Wayland is also not working yet from command-line execution.
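
The distinction can be made concrete with a small sketch: EV_ABS events carry device-surface coordinates, while EV_REL events only carry deltas that must be accumulated from a known starting point (hence the need to reset the pointer to (0,0)). The event tuples below are simplified, hypothetical stand-ins for the kernel's struct input_event:

```python
# Linux input event type/code constants (from linux/input-event-codes.h)
EV_REL, EV_ABS = 0x02, 0x03
REL_X, REL_Y = 0x00, 0x01

def replay_pointer(events, start=(0, 0)):
    """Accumulate EV_REL deltas into a pointer position.
    Each event is a (type, code, value) tuple."""
    x, y = start
    for etype, code, value in events:
        if etype == EV_REL and code == REL_X:
            x += value
        elif etype == EV_REL and code == REL_Y:
            y += value
        # EV_ABS values would be device coordinates, not screen
        # coordinates, so they cannot be used directly here.
    return x, y

# Starting from a known (0, 0) makes a recorded stream reproducible.
moves = [(EV_REL, REL_X, 5), (EV_REL, REL_Y, 3), (EV_REL, REL_X, -2)]
print(replay_pointer(moves))  # (3, 3)
```

Because only deltas are recorded, replaying the same stream from a different starting position lands the pointer somewhere else, which is exactly the problem described above.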

Future work and known issues

On Wayland port, issues that should be solved are:

  • Move the mouse to (0,0) to replay the same input as the recorded test, using an emulated touchscreen, for example.
  • Test the output and see why tests are not playing on Wayland (from command-line execution).
  • Test the keyboard implementation.

Other issues to solve:

Acknowledgements

Firstly, thank you to my mentor, Christian Kellner (@gicmo), for his understanding and help during the summer. Thank you to @garnacho and @jadahl, and all of the #gnome-shell channel, for helping on this project. And, finally, thank you to the GNOME admins for helping us and organizing GUADEC, especially Felipe Borges.

GSoC Final Report of GNOME gitg Work

Hi everyone, GSoC is coming to an end, and I’d like to present you with all the changes I’ve made so far on gitg.

I've learnt a lot while working on gitg; it's really well-structured with a great architecture design. I didn't have to refactor much code while extending its functionality, and I was amazed by how well-written and extensible it was.

Blog Posts History

All of the blog posts can be found under my account on Medium; here are links to them:

  1. My GSoC Proposal Got Accepted For GNOME
  2. Back On Track
  3. Implementing Branches Comparison on gitg

What Work Has Been Done?

My project issue (you can see it here) didn't have a design proposal at first, so I had to try different implementations to see what would provide the best user experience.

So first, I created a prototype for selecting multiple commits in the History Activity and showing the diff for them in the DiffView. I created an MR for it; it wasn't, and shouldn't be, merged, since I was just experimenting and trying to understand how gitg's internal components/classes work. You can see a blog post about it here.

Then I contacted Tobias Bernard (GNOME designer) to discuss different approaches to implementing this feature, and we agreed that it should live in a separate Activity, accessible only from the History Activity, to make the user workflow easier.

So I made a second MR where I implemented all of the project's goals, and it became fully functional. In this MR, I created a new Compare Activity that implements two important features: comparing branches, and comparing any two selected commits from any two branches. You can also see a blog post about it here.

What Has Been Merged?

Nothing has been merged yet; I'm waiting for my mentor's review and feedback, so that we can rework and optimize the parts that need more work.

Deliverables, Work Details and Links

Prototype: Compare Two Non-Consecutive Commits

Regarding the prototype code, I’ve created a MR for it on gitg.
Here’s the MR: Compare Two Non-Consecutive Commits

Here is the complete work I've done on that prototype, with links to its commits:

  1. Adding multiple selection to the History Activity CommitListView (Commit Link)
  2. Refactoring the History Activity to support the multiple selection functionality (Commit Link)
  3. Extending the Gitg.Commit class to get a diff against another commit object passed to it (Commit Link)
  4. Creating a new class that manages inserting/clearing commits from the DiffView (Commit Link)
  5. Refactoring DiffView and Diff to use the newly created class (Commit Link)

In this MR, I extended the GitgCommitListView to support selecting multiple commits, refactored the History Activity to support the new selection functionality, and refactored both DiffPlugin and DiffView to support multiple selection and showing the diff between the selected commits.

Final Project: Extending `gitg` with a new Compare Activity That Supports Comparing Branches/Commits

Regarding the Final Project, I’ve created a MR for it on gitg.
Here’s the MR: Creates A New Compare Activity That Handles Comparing Between Two Different Branches/Commits

The complete list of work is also written in the MR; this section just lists the work done with the corresponding links to commits.

This MR, as previously mentioned, creates a new Compare Activity in gitg.

Inside the Compare Activity, three views are implemented:-

  1. MainView, in which users can select different branches
  2. BranchView, which shows the difference in commits between the two selected branches in a GitgCommitListView
  3. CommitsView, which shows the two branches' commits and lets users select two commits to compare, each from a different branch

I had to extend the backend as well as create the new Activity on the frontend, so here is the work done on the backend so far:-

  1. Extending libgitg with a new Popover to list the available branches (Commit Link)
  2. Modifying GitgCommitModel to be able to share the same RevisionWalker across different models (Commit Link)
  3. Making GitgDiffView update the diff on idle, which enhances the responsiveness of the application in case of a huge diff between two commits (Commit Link)
  4. Adding a new GitgRefActionCompare which, if selected, will automatically set the selected branch as one of the branches to be compared, then transition to the Compare Activity (Commit Link)
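
The "update the diff on idle" item is a common GLib main-loop technique: heavy work is split into chunks, and each chunk runs in an idle callback whose return value says whether it should be scheduled again, so the UI stays responsive. A rough Python sketch of the pattern (the names are illustrative; gitg's actual implementation is in Vala on top of GLib):

```python
def make_idle_worker(items, process, chunk_size=100):
    """Return a callback that processes `chunk_size` items per call
    and reports whether it should run again, mirroring the
    True/False continue convention of GLib idle sources."""
    queue = list(items)

    def on_idle():
        for item in queue[:chunk_size]:
            process(item)
        del queue[:chunk_size]
        return bool(queue)  # True -> schedule me again on next idle

    return on_idle

# Simulate the main loop invoking the callback until it is done.
seen = []
worker = make_idle_worker(range(250), seen.append, chunk_size=100)
iterations = 0
while worker():
    iterations += 1
print(iterations + 1, len(seen))  # 3 250
```

With GLib this callback would be registered via an idle source (e.g. g_idle_add), so each chunk runs between UI events instead of blocking the main thread for the whole diff.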

Here is the work done on the frontend so far:-

  1. Deciding the best way to implement it: whether to share the same RevisionWalker, or to create two models for the two branches with different RevisionWalkers (I had to try it myself to see if there was any impact on performance; here's my work on a side pet project. I also asked on Stack Overflow to get proper feedback on what would be the best option; here's the link to my question)
  2. Creating the MainView of the Compare Activity (Commit Link)
  3. Showing a message to users on the MainView if the selected repository is a bare one (Commit Link)
  4. Creating a BranchView, where the comparison between branches is shown, as on GitLab and GitHub (Commit Link)
  5. Creating a CommitsView, where the comparison between two different commits from different/same branches is shown (Commit Link)
  6. Reloading the Compare Activity whenever there is a change in the repository, or new commits are added (Commit Link)

What’s Left To Do?

Most of the things I planned to do have already been implemented in the Final Project MR, ready to be reviewed by my mentor Alberto Fanjul so that we can rework the necessary parts, if needed.

Future Plans

I'm planning to improve the DiffView of gitg, since showing a huge diff between two commits may introduce some lag in the UI. I'll work with my mentor (Alberto Fanjul) to refactor it and see what we can do to improve its performance.

I'm also planning to continue my work on an old MR for the “highlighting changes within lines” feature; here's its link.

Challenges and Learnings

I've enjoyed working on gitg a lot. It has been challenging to understand the details of certain implementations in the program; gitg is a complex project, so it was hard at first to get a grasp of the code, to browse it efficiently, and to understand why a certain functionality is implemented the way it is. However, I really enjoyed reading and challenging myself to understand certain aspects of the code on my own, and I feel that this skill has really improved since then.

Since gitg is a git client application, it has of course improved my git skills, and I've also learned more about Flatpak packaging and the Meson build system.

Overall it was an eye-opening experience, where I learnt a lot and it was really a great opportunity to see and learn how professional open-source applications are being developed.

Google Summer Of Code 2020 Final Report

GSoC @ Pitivi

During this summer I improved Pitivi's Media Library. The work included both refactoring and adding new functionality. My proposal has a detailed roadmap of the goals I set out to achieve during the summer.

Initial cleanup

To prepare the codebase for introducing new modes of displaying the clips in the Media Library, I refactored the code to unify the iconview and listview modes of the Media Library into a single responsive grid view. The two modes were using different types of widgets, requiring duplicated logic. Now both the iconview and listview modes are powered by a single Gtk.FlowBox widget. Issue #1343

Link To Submitted MR. ( merged )

Link To Related Blog.

Shifting the asset action buttons to a new action bar

The Tagging functionality is accessed through a new Tag button, for which we needed to make space since the MediaLibrary’s toolbar was already very crowded.

medialibrary before the introduction of the new action bar

We decided to group all the buttons related to the selected clips in a new toolbar at the bottom of the MediaLibrary. Initially we came up with a design of a floating toolbar at the bottom using Gtk.Overlay. We went through a number of iterations on various ways to place it and settled on a standard Gtk.ActionBar, which is designed to present contextual actions: exactly what we needed.

medialibrary after the introduction of new action bar

Link To Submitted Commit.

Tagging clips in the Media Library

I introduced a new Tag button which reveals a Popover for tagging the selected clips. A clip can have multiple tags. Multiple clips can have common tags.

Tagging Feature

The Gtk.Popover shown by the Tag button displays all the tags using a Gtk.ListBox. The state of each tag can be controlled using a Gtk.CheckButton. A CheckButton is “checked” when all the selected clips have the corresponding tag, “unchecked” when none of the clips have the tag, and “inconsistent” when only some of the clips have the tag. Clicking the CheckButton takes it through the three states.
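
The three-state logic described above reduces to a small pure function; this is a sketch of the decision only, kept separate from Pitivi's actual GTK code (the names are illustrative):

```python
def tag_state(tag, selected_clip_tags):
    """Map a tag and the tag sets of the selected clips onto the
    three Gtk.CheckButton states used by the tag popover."""
    having = sum(tag in tags for tags in selected_clip_tags)
    if having == len(selected_clip_tags):
        return "checked"       # every selected clip has the tag
    if having == 0:
        return "unchecked"     # no selected clip has the tag
    return "inconsistent"      # only some of the clips have it

clips = [{"interview", "b-roll"}, {"interview"}]
print(tag_state("interview", clips))  # checked
print(tag_state("b-roll", clips))     # inconsistent
print(tag_state("outtake", clips))    # unchecked
```

In the real widget, "checked" and "unchecked" map to the CheckButton's active property and "inconsistent" to its inconsistent state.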

A Gtk.Entry allows specifying a new tag to be associated with all the selected clips.

An “Apply” button saves the changes in the project. The Apply button remains disabled unless there are changes to be applied, and it does not permit creating a duplicate tag. Note: applying a change via the Apply button does not immediately write it to the project's xges file; for that, the project needs to be saved. Taking advantage of the fact that GES.UriClipAsset is a GES.MetaContainer, we store the tags in the individual clip's metadata (“pitivi::tags”). When saving the project, the tags are thus saved in the project's xges file. While working on saving and retrieving asset metadata we encountered a minor bug in GES, because of which we were unable to retrieve the saved metadata from a reloaded project, so we worked on a fix before moving on.

I introduced new test cases exercising the UI for addition and removal of tags under several scenarios. Issue #537

Link To Submitted MR.

Filtering of clips based on their tags

After completing the tagging feature, our plan was to use it for filtering the clips. We worked on extending the current search functionality to include searching by tags. Luckily, the search bar in the MediaLibrary is composed of a Gtk.Entry, which has a convenient set_completion method to assign a Gtk.EntryCompletion to it.

Gtk.EntryCompletion allows us to use a Gtk.TreeStore to provide suggestions based on the text entered in the Gtk.Entry. We already had a global set of tags maintained in the MediaLibrary for the tagging feature, so we used it to fill the model required by Gtk.EntryCompletion. We use its built-in autocompletion and popover to manage the tag-based filtering.
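
The filtering itself boils down to matching the entered text against the global tag set and then against each clip's tags; a minimal sketch of that logic with hypothetical names, independent of the Gtk.EntryCompletion wiring:

```python
def suggest_tags(all_tags, key):
    """Prefix-match the completion model, the way EntryCompletion
    matches the typed key against its model column."""
    key = key.lower()
    return sorted(t for t in all_tags if t.lower().startswith(key))

def filter_clips(clips, tag):
    """Keep only clips (name -> tag set) carrying the given tag."""
    return [name for name, tags in clips.items() if tag in tags]

tags = {"interview", "intro", "b-roll"}
print(suggest_tags(tags, "in"))  # ['interview', 'intro']

clips = {"clip1": {"interview"}, "clip2": {"b-roll"}}
print(filter_clips(clips, "interview"))  # ['clip1']
```

The widget machinery then only has to feed the suggestions into the model and re-run the filter when a completion is accepted.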

Filtering clips based on tags

Link To Submitted MR.

Work To Be Done

The work in https://gitlab.gnome.org/GNOME/pitivi/-/merge_requests/318 is currently under review; polishing and updating it as per the reviews will be my priority. The test cases introduced for the tagging feature can be consolidated and extended to cover more scenarios.

One of the extended goals was to introduce filtering clips by date; we need to finalize the roadmap for this feature, and my goal is to implement it.

Updating the user manual to mention the Tagging Feature is also a task that I intend to do.

I am grateful to

Thibault Saunier and Alexandru Băluț, for all the guidance and support they have given me throughout my time working on Pitivi; they were always there to patiently resolve so many of my doubts. Without their support this much work would not have been possible.

August 30, 2020

GSoC final report!

My project

My project consisted of building a UI library for the GNOME web ecosystem. GNOME has many websites (gnome.org, extensions.gnome.org, discourse.gnome.org, planet.gnome.org, developer.gnome.org, surveys.gnome.org and many others), but currently they don't have a consistent design. The GNOME UI library comes with the following main goals:

  • Keep a consistent look between the GNOME websites;
  • Ease the development and increase the maintainability of the websites, since the developers responsible for them won’t have to spend a lot of time designing;
  • Refresh the GNOME websites look.

I started the project with an assessment of the current state of GNOME's websites. I worked on an inventory of the main ones, where I had to research, for each website:

  • Who is responsible for it;
  • In which platform it runs;
  • Theme information (does it use a theme? Is it extended from a framework like Bootstrap?);
  • Who is the website target audience;
  • What are the website goals;
  • Which visual elements draw more of the attention.

I wrote the inventory in issues so it is documented and can be used as a reference later in the project. Inventories:

After the inventory, I started working on an evaluation of the websites' components and elements: how they look now, how they should look, fonts, spacing. Meanwhile, I created some mockups for them on Figma to help me build the project. The Figma project is not an official source; I used it as a tool to help me with the designs.

Just like the inventory, the evaluations are in issues as well. Among the evaluation issues, there is one specifically describing the research I did to decide which library the project would extend. In the end, I chose the Tailwind utility-first CSS framework. In parallel with the evaluation and design, I created the project configuration for the library together with my mentor, Claudio Wunder.

I coded preflight setups (typography and colors) and basic components/elements: inputs, buttons, and cards. Tailwind helps a lot with spacing, accessibility classes, flex, grid, and other basics :D To host the project documentation, I configured a Jekyll project, and my mentor helped me configure GitLab CI so the documentation is published on GitLab Pages, covering every component, configuration, element, etc. that is already merged to master. The work I did is in the GNOME General Website Resources repository (in issues, MRs and code; my mentor and I were the only authors on the project).

Last but not least, during the project the GSoC and Outreachy interns had the opportunity to give a small talk on GUADEC - the main GNOME conference - and it was an enthusiastic way of sharing about our projects and contributions!

What were the challenges

  • Project requirements: besides having some knowledge about usability, that diverges a lot from knowing how to design things and what to look at in the current design to improve in the UI library. I struggled very much with the inventory and evaluation parts, and my lack of knowledge here messed with my confidence in the project.
  • Project setup: it was very hard to find references on how to create a UI library extending Tailwind. It took me a few days (maybe more than a week) of experimenting with different project configurations to find the best approach, and even then I couldn't find it myself. Thankfully, Claudio helped me with that, and the project moved forward.

Work in progress and future steps

The GNOME UI library project is still far from over. I'm happy to continue it and count on other contributors interested in helping the project happen! For now, these are some of the missing parts:

  • Finishing the basic library elements and components, like
    • Tables
    • Lists
    • Alerts
  • Publishing it into a CDN to be used on the websites
  • Create Platform-Specific Frameworks

Conclusion

I'm very happy to have been given the opportunity to work with GNOME once again. Even though a lot of work remains before the project can be used in real life, I feel fulfilled with the work I've done, and I very much want to continue on the project as much and as often as I can.

I want to thank my mentors, Claudio Wunder, Caroline Henriksen and Britt Yazel, for being so patient with me and for their help whenever I needed it. I hope we can still work together on this project and other opportunities!

Google Summer of Code 2020

Final report on the project

It has been a great journey working on the libhandy project, both challenge-wise and outcome-wise. My project was to implement an adaptive version of the Grid widget, and I'm happy to say that the groundwork has successfully been laid. The widget is not yet in its final shape; it is still under a thorough review process and will surely need some bug fixes to reach a stable form. That being said, I believe it can be used to fiddle around with and to discover more use cases. The latest code is available at this branch.

Break down of tasks involved —

  • Implement the new widget
  • Add a relevant demo in the example application

Project Progress Milestones —

  • Basic container widget showing all children added inside it
  • Develop a strategy to prioritise widgets when repositioning
  • Develop the code to handle a single row of widgets so that it can adapt to different widths
  • Extend the single row functionality to handle multiple rows of widgets
  • Able to parse custom tags in *.ui file definitions to provide an easy way for weight assignment to columns

Related MR — !530

Related Issue — #128

The detailed description of the implementation of the widget has been covered in my previous blog posts.

Take-Aways

  • The working environment was completely new to me when I first started, and I got to learn a lot during this period.
  • A new paradigm of programming in C was revealed to me as I progressed.
  • Got acquainted with GTK, GLib, GNOME application development, and awesome development tools and software (Builder, the most used of all).
  • Last but not least, I got the opportunity to get involved with a really great community of people and to be a part of GUADEC (online).
  • Improved my coding practices and debugging of GTK applications.

GUADEC, though online this year (which allowed more participation, by the way), took place between 22–28 July. I got a chance to talk about my project to the community. Through the event, I also got to know more about the community, the people involved and various projects.

Adrien has been a great person who was always around for help from the start.
I’m really glad that I got to know awesome people in the community.

Though this summer programme has come to an end, I’ll still be around. This is not a farewell IMO, so let’s keep it that way :P.

See you around!

GNOME Games: Final submission

This post serves as a compilation of all the work I did during these past 3 months of Google Summer of Code 2020.

Work done

!369 (merged): Refactored old code by making a new Core interface and RetroCore class. These are used to generalize all interactions related to firmware; RetroCore is an implementation of the Core interface. The FirmwareManager class organizes checksum verification through Core/RetroCore when the runner starts a game requiring firmware.

!405 (merged): Made a Firmware interface and a RetroFirmware class that take over all the functions and information needed by firmware from FirmwareManager. With this, the Core interface is used to create firmware objects, FirmwareManager manages firmware objects, and each firmware object runs checksum verification and contains all information related to that firmware.

!408: Some minor changes to make checksum verification more efficient. Added methods to FirmwareManager that handle the addition and removal of firmware, along with methods that list all supported firmware and check whether the file being added is a supported firmware or not.

!411 (merged): Since both SHA-512 and MD5 checksums will be made mandatory by a commit in !408, the existing core descriptor files needed to be updated to carry both SHA-512 and MD5 checksums.
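
As an aside, the core of such a dual-checksum verification is simple to sketch. The snippet below is only a Python illustration of the idea; the function name and signature are made up for this post and are not the actual Games code (which is written in Vala):

```python
import hashlib


def verify_firmware(data: bytes, expected_sha512: str, expected_md5: str) -> bool:
    """Illustrative sketch: a firmware file is accepted only if BOTH of its
    checksums match, mirroring the idea that SHA-512 and MD5 are mandatory
    in the core descriptor."""
    sha512 = hashlib.sha512(data).hexdigest()
    md5 = hashlib.md5(data).hexdigest()
    return sha512 == expected_sha512 and md5 == expected_md5
```

Requiring both digests to match makes a checksum collision against one weak algorithm (MD5) insufficient on its own.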

!415: Merge request with the back-end code from !408, a generalization of the overlay in the preferences page, and the drag n drop widget. The overlay generalization makes it easier for each preferences page to add an overlay; this will be useful later, when the Firmware page is implemented, to show error messages and an undo bar when adding or removing firmware. This merge request includes the back-end code so that it functions properly on its own, as the drag n drop widget uses the back-end code introduced in !408.

!89: Merge request with the parser that reads firmware name from the core descriptor.

TODO

As the code will not be merged until the 3.38 release, I just have to keep refining the back-end code and making the drag n drop experience smoother.

As for the firmware page, I am keeping the Firmware Page UI untracked for now. After the translation issue is resolved and the back-end code and the parser land, a new merge request with the untracked files will be opened, after which #145 will be resolved and closed.

Closing thoughts

Google Summer of Code was the experience of a lifetime for me: being a student and getting the chance to work on open source and write production-level code. I learnt a lot while working on my GSoC internship. Working remotely on an open source project taught me things like time management, communication, git, and the open source workflow.

These past 3 months have been quite hectic, working on my GSoC project while taking online classes, doing assignments, and submitting reports. GSoC was a valuable experience for me, as better time management means more productivity out of the same 24 hours without missing out on things.

With summer ending and summer vacations beginning, I would like to express my gratitude to my mentor Alexander Mikhaylenko (a.k.a @alexm, a.k.a @exalm) for helping and guiding me this summer. Thank you.

This GSoC was a big learning experience, as it taught me data flow and code design, UI/UX, widget writing, CSS, and ways to use git to ease my workflow.

I’ll try to keep posting, as I’ll still be working on GNOME Games and will probably look for more projects to work on later :^)

GSoC Final Report

I’ve been working on Music for the past three months, adding support for remote sources. The work included adding support for dLeyna and DMAP sources to Music.

Why is this project needed?
Music currently lists and plays songs only from the local filesystem (those indexed by Tracker); there is no way to browse and play songs from remote sources such as DLNA and DAAP.
Many users have their media on media servers rather than on the local filesystem, and right now, if there are no songs on the computer, an empty search view is shown notifying you that no songs are present.

Some frameworks important to the project
Grilo is a framework focused on making media discovery and browsing easy for application developers, and Music uses grilo-plugins for media discovery. Grilo plugins support tons of sources such as Tracker, Jamendo, UPnP, etc.
Grilo-plugins also supports dLeyna (for DLNA) and DMAP (for DAAP) sources, so I’ll be using grilo-plugins for media discovery from DLNA and DAAP servers.

dLeyna-server is a high-level media content API that allows client applications to discover, browse, and search UPnP and DLNA media servers, and is implemented on top of the GUPnP library. It takes care of all the communication between the server and grilo-plugins: when we make a request, grilo-plugins relays it using dLeyna-server, and dLeyna-server uses GUPnP to actually make the requests to the media server.

Libdmapsharing is a library that allows programs to access, share, and control the playback of media content using DMAP.

My project includes the following tasks:

  1. Adding support for dLeyna sources
  2. Adding support for DAAP sources
  3. Customizing search view to support various sources

dLeyna Support

Related Issue: #396

Related Merge Request: !713

The dLeyna source is based on the UPnP protocol, which allows us to use a media server with no configuration, i.e. a zero-conf server. It is also the most commonly used kind of media server, as it is easy to set up.
This feature does not require any additional configuration: to see media hosted on the home network, just start up the Music application and the media from the dLeyna source will be added to the respective views.

DAAP Support

Related Merge Request: !740

Problem:
The DAAP (or DMAP) protocol is built and used by Apple devices, so its specifications haven’t been released publicly, but it has been reverse-engineered to the extent that media can be discovered and played on platforms other than macOS. This really restricts the level of support: the media we get often does not contain enough data.
Right now, minimal support is provided in Music: it lists songs from a DAAP server and can play them, but without all the media metadata such as artist, date, genre, etc. (it only displays the information given by the server).

This feature also doesn’t need any configuration: just connect to a home network that already has a DAAP server, and Music will list all the songs hosted there and let you play them.

What Works

  1. Music can fetch media from dLeyna sources and play them.
  2. Search works for dLeyna sources.
  3. Music can fetch media from DAAP sources and play them, but without all the information that should be associated with the media.

Future Plans

  1. Search redesign and customization, to take in and filter media from different sources.
  2. Addressing some of the issues with dLeyna sources.

GNOME Community

This is the first time I’ve ever worked on a proper open source project. GNOME as a community has taught me a lot about how to work on a project, and I have to hand it to Jean Felder and Marinus Schraal, who have always been there to help me. I’ve also had encounters with Victor Toso and Jens Georg, and these guys were really helpful, especially Jens, who listened to me patiently and helped me with very simple things.

GUADEC

GUADEC is the GNOME community’s conference, which brings together users, developers, and community members for a week-long package of events. Each talk was very informative, and I learned a lot from them.

GUADEC had an event for GSoC and Outreachy interns where we got to present our projects (HOW COOL IS THAT!!).

Conclusion

GSoC may have come to an end, but this doesn’t mark the end of my journey with GNOME; I will still be around, contributing to Music and other projects.
At last, I would like to thank Jean and Marinus with all my heart for their support and guidance.

How does it work? A full guide to the EteSync module in the GNOME Evolution app

Welcome! For the past months I’ve been working on an EteSync module for Evolution, so EteSync users can add their account to Evolution and manage all their data from there.

EteSync is a secure, end-to-end encrypted and FLOSS sync solution for your contacts, calendars and tasks.
Evolution is a personal information management application that provides integrated mail, calendaring and address book functionality.
You can see all my past posts here if you want to know more about the module.

This is basically a tutorial on how to use the EteSync module in Evolution. It should be simple and cover all of the important things you’ll need to do to manage the data in your EteSync account.

You can play around with the module to do other things that aren’t mentioned; try right-clicking on an existing journal to find more options, such as deleting. You can also modify an existing entry by double-clicking on it, or delete it by right-clicking and choosing delete.

Contents

Installing the module

First things first: you’ll need to install the EteSync module for Evolution. You can simply do this by following the installation guide found here.

Adding EteSync account

After installing the module, you’ll obviously need an EteSync account. If you don’t have one, you can create one on the EteSync website.

The steps are very simple.

  1. Click on the arrow next to the “New” button.
  2. Choose “Collection Account”.
  3. Enter your email/username.
  4. Choose “Look up for an EteSync account”.
  5. Enter your password.

After that you’ll be asked to enter your encryption password, and all your data will be loaded successfully.

Adding data inside a journal

In this example I am adding an appointment to a calendar of mine.

Adding new journal

EteSync supports only three types of journals (address books, calendars and task lists).
There are two ways to add new journals in Evolution.

The first is from the “New” menu.
The second is choosing a category from the bottom-left area (here it is Contacts), right-clicking on the account and choosing “New address-book/calendar/task list”.

Rename or change a journal color

Just right-click on the journal (address book, calendar or task list) and select Properties; then you can rename it or change its color, and click OK.

Setting up newly created account

This section is for new users who just created an EteSync account and haven’t set an encryption password for it yet (using the account for the first time).

  1. Simply follow the adding account steps normally.
  2. A new dialog will pop up asking you to set an encryption password.
  3. Enter your encryption password twice to set it up, and your account will be initialized with 3 default journals (My Contacts, My Calendar and My Tasks).
  4. Press Ok, and that’s it.

Running your own instance (self-host)

This part is a little advanced. Since EteSync’s code is open source, anyone can use the server code to easily self-host their own server, though it comes with fewer benefits. If you wish to do so, please follow the instructions here.

After you have successfully set up your own instance, and verified it works by connecting to it from the browser, you can follow the steps to add an account while hosting your own server.

In this example I am self-hosting the server on my localhost URL on port 8000 (http://127.0.0.1:8000).

First steps with neural networks and NumPy

Motivation: Last Friday at Red Hat we had another “Day of Learning”, the third one now. As with all repeated things, as an engineer I want to automate things, so this time I wanted to look into machine learning 😉. This was also an excuse to finally learn about NumPy, as it’s such a generic and powerful tool to have on one’s belt. At school, in my 11th grade, I worked on speaker-dependent single-word speech recognition as my scientific project.

August 29, 2020

Drag n Drop

Work till now

After the translation debacle in my previous post, I started working on the back-end that will be used by the later UI (I’ll be talking about one place where this back-end is used in this post), in such a manner that, when translations start functioning, they can easily be implemented by adding a few lines of code. This back-end work involves methods that will be used to add or remove firmware, check whether the firmware being added is acceptable/supported, etc.

Update

The goal of adding drag n drop support is to allow the addition of firmware through, well, drag n drop, which will be an added bonus to the firmware UI that will be implemented after translations are taken care of, after the 3.38 release.

My work here is to write a widget that can easily be added almost anywhere, allowing that area to accept drops and install firmware, and also to give visual feedback on whether the dropped firmware has been successfully added or not.

The ideal way to do this is by writing a widget that’s an overlay. Because the widget is an overlay, it covers the same area as the widget it’s attached to, and can give visual feedback through that shared real estate for better UX.

For any widget to accept drags, the line

Gtk.drag_dest_set (widget, dest_default, targets, actions);

must be used. Here widget is the widget whose surface is going to accept the drag. The documentation for each parameter can be read here.

The argument passed as dest_default must, in my opinion, be chosen carefully. If dest_default is Gtk.DestDefault.MOTION, then whenever something is dragged over the surface of the widget, this flag will forcefully emit the drag_motion signal. But if drag_motion is connected to some other callback, that callback will be called twice: once because it is connected to the default signal emitted during a drag motion, and again because of the destination default flag. The same goes for Gtk.DestDefault.DROP. In my widget, the dest_default flag was making the experience very clunky, so I simply passed 0.

After setting the whole widget surface to accept drags, drag_motion was connected to a callback. This callback checks the target of the drag context: if the target is a list of URIs, the drag data is requested with a checking flag; otherwise the drag_status is set to 0.

Then, in the callback for drag_data_received (which is called when drag data is requested), when the checking flag is active, the files being dragged onto the widget surface are checked for compatibility. To do this, a function was written in the FirmwareManager class which takes a File as a parameter and looks for a checksum match against the accepted firmware that is not installed yet or, if installed, checks whether the installed firmware file is corrupt. If all these checks pass, the function returns true and the file being dragged is acceptable; the overlay is then activated and the drag_status is set to Gdk.DragAction.COPY. Otherwise the function returns false and the drag_status remains 0.

If the data is not dropped and simply leaves the widget surface, the drag_leave signal is triggered. This activates a callback that deactivates the overlay, if active, and the surface reverts back to normal.

If the data is dropped instead, the drag_drop signal is triggered, which again requests the drag data, but this time with a dropped flag. This way, when the drag_data_received signal is emitted, the callback, rather than checking the file again, simply tries to install the file that was dropped. After the dropped file is installed, drag_finished is called; drag_finished is important, as it tells the system that the drag n drop action has ended.

When the drag action ends, the overlay shows the result of the drag process: if it was a success, it shows a success overlay for 1.5 seconds; otherwise it shows an error message. After the overlay ends, the messages and flags are reset to prepare for the next drag.
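
The two-phase flow described above (request the data with a checking flag during motion, then with a dropped flag on drop) can be modelled as a tiny state machine. This is only an illustrative Python sketch of the logic with made-up names, not the actual Games code, which is written in Vala and wired to the real GTK signals:

```python
class DropArea:
    """Illustrative model of the drag n drop flow: check during motion,
    install on drop, then reset for the next drag."""

    def __init__(self, is_supported):
        self.is_supported = is_supported  # callback: file -> bool (checksum check)
        self.overlay_active = False
        self.result = None

    def on_drag_motion(self, file) -> bool:
        # Checking phase: activate the overlay only for acceptable firmware.
        # True stands in for Gdk.DragAction.COPY, False for drag_status 0.
        self.overlay_active = self.is_supported(file)
        return self.overlay_active

    def on_drag_drop(self, file, install) -> None:
        # Dropped phase: try to install instead of re-checking the file.
        try:
            install(file)
            self.result = "success"
        except Exception:
            self.result = "error"
        # Equivalent of drag_finished plus resetting the overlay.
        self.overlay_active = False
```

Separating the checking phase from the dropped phase is what lets the same drag_data_received callback serve both purposes without duplicating work.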

With drag n drop implemented, the last thing that needs to be done is the firmware page in the preferences UI. But that needs to wait until the 3.38 release is over, after which translations will be implemented and work on the firmware preferences page can continue.

GSoC final project report

Hello again! This is my GSoC final project report blog, so this is going to be a very simple and straightforward post without pictures (well, just one!) and jokes! It will give you all the information about the work we did during GSoC and point you towards the code and documentation produced during the project.

Project WorkFlow

This section describes how the code written and modified by me eventually ends up in the master branch of the project, through a careful process of code review, testing and rebasing. Both GNOME/Nautilus and I maintained a separate feature branch for GSoC.

the project workflow

The work I did was performed on the work branch, which was created from my fork of GNOME/nautilus:master. A pull request was opened from my work branch to the GSoC-Staging-Branch maintained by GNOME/nautilus. After code review and testing by my mentor Antonio, the code was merged into the staging branch. Later, when the main project goal was achieved, the staging branch was rebased appropriately and merged into GNOME/nautilus:master. The GSoC-Staging-Branch was updated weekly with merge requests that represented the goals for that particular week.

What work was done ?

  1. Building the Basic Page: UI building code in the function create_basic_page () was ported to use a GtkBuilder template.
  2. Building the Permissions Page: UI building code in the function create_permissions_page () was ported to use a GtkBuilder template.
  3. Building the Open-With Page: UI building code in create_open_with_page () was ported to use a GtkBuilder template.
  4. Code Cleanup: the functions that lost their functionality to GtkBuilder were dropped from the codebase.
  5. Code Refactoring: here we made use of modern GLib utilities for dynamic memory management, to prevent memory leaks.
  6. Finishing the Properties Dialog: making use of coding best practices, modifying the UI to stick to the GNOME HIG, and fixing existing bugs in the properties dialog.
  7. Stretch Goals:
     • Porting the Ctrl+S dialog to GtkBuilder.
     • Porting the list columns dialog: this was planned in the proposal but cancelled later, as porting it to GtkBuilder offered no advantages over the current implementation.

What was merged ?

Given below is a list of merge requests which were merged into the GSoC-Staging-Branch and eventually into master as well:

  1. Inherit GtkWindow instead of GtkDialog : GSoC week 1
  2. Building Basic Page using GtkBuilder template : GSoC’ 20 Week 2 & 3 & 4
  3. Building Permissions Page using GtkBuilder template : GSoC’ 20 Week 5 & 6
  4. Finishing Basic, Permissions page, and Start porting Open-With Page : GSoC ’20 week 7
  5. Code Clean-up : GSoC ’20 week 8
  6. Building Open-With Page and dropping NautilusMimeApplicationChooser class : GSoC ’20 Week 10
  7. Basic styling, Restoring Esc to close property of properties-window : GSoC ’20 Week 11
  8. Restyling of UI based on GNOME-HIG : GSoC ’20 Week 9
  9. Code Refactoring and Modernization : GSOC ’20 week 11 part-II
  10. Merge GSoC feature branch into master branch : Port Properties to GtkBuilder

What hasn’t been merged ?

  1. Building Ctrl+S dialog using GtkBuilder : WIP: Porting Ctrl+S dialog to use GtkBuilder

Current status: it’s still under review and is expected to be merged soon.

What’s Left to do ?

Well, everything that was planned in the project proposal has been achieved and merged into the master branch; nevertheless, we do have future plans for the properties window’s design.

The blog posts I wrote for each deliverable are summarised below in the form of a table:

Milestone | Merge Request | Blog Post
GSoC Begins | The first Contribution | GNOME & GSoC
Building the Basic Page and inheriting GtkWindow | Inherit GtkWindow instead of GtkDialog, Building Basic Page using GtkBuilder template | The First Milestone
Building the Permissions Page | Building Permissions Page using GtkBuilder template | The Second Milestone
Completing Basic and Permissions Page | Finishing Basic, Permissions page | Revisiting Basic and Permissions Page
Building the Open-With Page | Building Open-With Page | The Final Piece
Code Cleanup | Code Clean-up | Celebration is in Order
Code Refactoring | Code Refactoring and Modernization | Celebration is in Order
Finishing Properties Dialog | Restoring "Esc" to close, HIG Restyling | Celebration is in Order
Stretch Goals | Port Ctrl+S dialog to GtkBuilder (under review) | (under review)
GNOME Conference – GUADEC | - | My First GUADEC

What are Future Plans ?

The future plans involve a complete redesign of the properties window, with the help of the new-found powers of UI modification this project gives us. We are going to begin by dropping the GtkGrid widget and using GtkListBox instead. The design mockups that will be used for this can be found here: Modernize the appearance of Properties.

GSoC 2020 @ Pitivi: Work Product

Overview

My GSoC 2020 internship project was improving the usability of Pitivi’s Render dialog. Below is a detailed summary of the work done during the last three months.

Refactoring the Render Presets’ selection

The previous UI for selecting a render preset was constructed dynamically. The same mechanism is also used to construct the audio and video preset selection UI in the project settings dialog.

Can you even notice where the render preset option is in all the clutter?

Old Render Dialog-1

We decided to move forward with a button which shows the current preset. When clicked, it opens a Gtk.Popover which lists the available presets. The presets listed in the popover include a relevant icon and an informative summary, making it an easy choice for the user. Thus I fixed issue #1813.

We mostly cared about rendering a project to be uploaded to an online sharing service such as YouTube, Vimeo, etc., so there are not many other options at the moment.

A custom icon is used for Custom profiles.

Updated Render Dialog with popover

Through multiple iterations, we fine-tuned the UI so it looks nice and clear, improving the User Experience.

MR: link

MR status: Merged.

Addition of path property in Gstreamer Encoding-Target

Previously, when a render profile was deleted, it was just marked as such. While at it, I took the opportunity to properly delete them. Unfortunately, the path of the file a GstPbutils.EncodingTarget object originates from was not available.

I added a new path property to GstPbutils.EncodingTarget and populated it when the object is saved or loaded.

MR: link

MR status: Merged.

Hiding the advanced render UI in a new “Advanced” expander

The many settings displayed on the render dialog were intimidating to both new and seasoned users. Since many users will never be interested in the advanced settings, I introduced an expander which hides the detailed settings of the render dialog while keeping them easily accessible.

I also moved the Folder and Filename sections to the bottom, to give more prominence to the render preset, which is now at the top. I made the UI follow the GNOME visual layout guidelines for an attractive and intuitive design.

MR: link

MR status: Merged.

Addition of Quality selection for supported encoders

In the video rendering process, there is a tradeoff between the quality of the rendered video on one hand, and its file size and the time it takes to render on the other. If the user requires the video to be in high quality, the size of the video also increases.

To simplify the user’s choice, a Gtk.Scale widget has been introduced for specifying the desired render quality. The quality setting affects different parameters for different encoders, but this is handled in the background.
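
To illustrate what “different parameters for different encoders” can mean, here is a hedged Python sketch. The encoder names are real GStreamer element names, but the mapping, the parameter names, and the ranges below are simplified placeholders invented for this illustration, not Pitivi’s actual code:

```python
def quality_to_settings(encoder: str, quality: float) -> dict:
    """Map a single 0.0-1.0 quality slider value to encoder-specific
    settings. Parameter names and ranges are illustrative placeholders,
    not the exact GStreamer element properties."""
    if not 0.0 <= quality <= 1.0:
        raise ValueError("quality must be between 0.0 and 1.0")
    if encoder == "x264enc":
        # In quantizer-style encoders, lower values mean higher quality,
        # so the slider scale is inverted.
        return {"quantizer": round(50 * (1.0 - quality))}
    if encoder == "vorbisenc":
        # In Vorbis-style encoders, quality scales directly with the slider.
        return {"quality": round(quality, 2)}
    raise ValueError(f"no quality mapping for encoder {encoder!r}")
```

The point is that the UI exposes one intuitive knob, and each encoder's idiosyncratic scale (inverted or not, integer or float) stays hidden behind this mapping.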

Updated Render Dialog with Quality Scale.

Updated Render Dialog with Quality Scale -2.

MR: link

MR status: Work in Progress.

Work to be done

Merge request !323 is still in the final stages of review, and should be merged soon.

We could add more presets, such as a high-quality archiving preset, making it possible to dispose of the original video material.

We could also show a summary of the render settings, so the user can quickly check them and open the Advanced settings expander if something looks wrong.