December 10, 2019

GNOME Outreachy 2019

The Outreachy Program

The Outreachy program provides internships to work on Free and Open Source Software. This year I've proposed two projects as part of GNOME, and we have two interns working for three months, so we'll see a lot of improvements in the coming months!

I'll be mentoring these interns, so I will need to spend some time helping them work on the existing codebase, but it's worth it if it brings more people into free software development and helps us improve some useful apps.

These two projects are Fractal and the GNOME translation editor. You can take a look at the list of Outreachy interns.

Fractal

Fractal is a Matrix.org GTK client, and my proposal for this year's program is to implement a video player in the message list. We have previews for images and for audio files, but nothing for video files.

Sonja Heinze is the one who will be working on this during the next three months. She has spent the past month working on some small issues in Fractal, so I'm confident she will make great contributions to the project.

Jordan Petridis (alatiera) will be helping with this project as a co-mentor. I don't know a lot about GStreamer, so he'll be really helpful here with the GStreamer and Rust side of things.

GNOME translation editor (Gtranslator)

GNOME translation editor (Gtranslator) is a simple .po editor. I've proposed to rework the search and replace dialog. Right now we have a simple find/replace modal dialog, and I want to modernize the interface so that it integrates better into the window as a popover.

Priyanka Saggu is the one who will be working on this during the next three months. She has been working on Gtranslator for the past month and has already made great contributions and improvements.

Daniel Mustieles is the other co-mentor for this project. He's an experienced GNOME translator, so he will help us a lot with the app's user experience design and testing.

Vanilla is a complex and delicious flavour

Last week, Tobias Bernard published a thought-provoking article, There is no “Linux” Platform (Part 1), based on a talk at LAS 2019. (Unfortunately I couldn’t make it to LAS, and I haven’t found the time to watch a recording of the talk, so I’m going solely from the blog post here.) The article makes some interesting observations, and I found a fair few things to agree with. But I want to offer a counterpoint to this paragraph of the final section, “The Wrong Incentives”:

The Endless OS shell is a great example of this. They started out with vanilla GNOME Shell, but then added ever more downstream patches in order to address issues found in in-house usability tests. This means that they end up having to do huge rebases every release, which is a lot of work. At the same time, the issues that prompted the changes do not get fixed upstream (Endless have recently changed their strategy and are working upstream much more now, so hopefully this will get better in the future).

If we’re looking at the code shipping in Endless OS today, then yes, our desktop is vanilla GNOME Shell with a few hundred patches on top, and yes, as a result, rebasing onto new GNOME releases is a lot of work. But the starting point for Endless OS was not “what’s wrong with GNOME?” but “what would the ideal desktop look like for a new category of users?”.

When Endless began, the goal was to create a new desktop computing product, targeting new computer users in communities which were under-served by existing platforms and products. The company conducted extensive field research, and designed a desktop user interface for those users. Prototypes were made using various different components, including Openbox, but ultimately the decision was made to base the desktop on GNOME, because GNOME provided a collection of components closest to the desired user experience. The key point here is that basing the Endless desktop on GNOME was an implementation detail, made because the GNOME stack is a robust, feature-rich and flexible base for a desktop.

Over time, the strategy shifted away from being based solely around first-party hardware, towards distributing our software to a broader set of users using standard desktop and laptop hardware. Around the same time, Endless made the switch from first- and third-party apps packaged as a combination of Debian packages and an in-house system towards using Flatpak for apps, and contributed towards the establishment of Flathub. Part of the motivation for this switch was to get Endless out of the business of packaging other people’s applications, and instead to enable app developers to directly target desktop Linux distributions including, but not limited to, Endless OS.

A side-effect of this change is that our user experience has become somewhat less consistent because we have chosen not to theme apps distributed through Flathub, with the exception of minimize/maximize window controls and a different UI font; and, of course, Flathub offers apps built with many different toolkits. This is still a net positive: our users have access to many more applications than they would have done if we had continued distributing everything ourselves.

As the prototypical Endless OS user moved closer to the prototypical GNOME user, we have focused more on finding ways to converge with the GNOME user experience. In some cases, we’ve simply removed functionality which we don’t think is necessary for our current crop of users. For example, Endless OS used to target users whose display was a pre-digital TV screen, with a 480×720 resolution. I think persuading the upstream maintainers of GNOME applications to support this resolution would have been a hard sell in 2014, let alone in 2019!

Some other changes we’ve made can be, and have been, proposed upstream as they are, but the bulk of our downstream functionality forms a different product to GNOME, which we feel is still valuable to our users. We are keen both to improve GNOME and to reduce the significant maintenance burden which Tobias rightly refers to, so we’re incrementally working out which functionality could make sense in both Endless and GNOME in some form, working out what that form could be, and implementing it. This is a big project because engaging constructively with the GNOME community involves more thought and nuance than opening a hundred code-dump merge requests and sailing away into the sunset.

If you are building a product whose starting point is “GNOME, but better”, then I encourage you to seriously consider whether you can work upstream first. I don’t think this is a groundbreaking idea in our community! However, that was not the starting point for Endless OS, and even today, we are aiming for a slightly different product to GNOME.

Back out to the big picture that is the subject of Tobias’ article: I agree that desktop fragmentation is a problem for app developers. Flatpak and Flathub are, in my opinion, a major improvement on the status quo: app developers can target a common environment, and have a reasonable expectation of their apps working on all manner of distributions, while we as distro maintainers need not pretend that we know best how to package a Java IDE. As the maintainer of a niche app written using esoteric tools, Flathub allowed me – for the first time since I wrote the first version in 2008 – to distribute a fully-functional, easy-to-install application directly to users without burdening distribution developers with the chore of packaging bleeding-edge versions of Haskell libraries. It gave me a big incentive to spend some of my (now very limited) free time on some improvements to the app that I had been putting off until I had a way to get them to users (including myself on Endless OS) in a timely manner.

On the other hand, we shouldn’t underestimate the value of GNOME – and distros like Debian – being a great base for products that look very different to GNOME itself: it enables experimentation, exploration, and reaching a broader base of users than GNOME alone could do, while pooling the bulk of our resources. (On a personal level, I owe pretty much my entire career in free software to products based on Debian and the GNOME stack!)

Some caveats: I joined Endless in mid-2016, midway through the story above, so I am relying on my past and current colleagues’ recollections of the early days of the company. Although today I am our Director of Platform, I am still just one person in the team! We’re not a hive mind, and I’m sure you’ll find different opinions on some of these points if you ask around.

December 09, 2019

Developing Leaderboard for GNOME Hackers

After completing my Google Summer of Code assignment, I had an idea for a project in which the hard-working people of GNOME, the GNOME hackers, could be appreciated for the amount of work they do for the FLOSS community. To that end, I wrote a leaderboard web app, GNOME Hackers. It was an awesome experience, and I put my weekends to good use by learning many new things, which I'll describe briefly below.

GitLab API

All of the GNOME groups and projects are hosted on GNOME's GitLab instance. The most typical activities on GitLab are commits, issues and merge requests, and these form the basis of the scoring that builds up the leaderboard. The data is fetched from GNOME's GitLab instance using the Python wrapper for the GitLab API.

Static Website

Landing page for GNOME Hackers

To create a static website, I could have used any static site generator, such as Jekyll. But this website required some logic, such as scoring, selecting the top hackers and giving out awards, so I settled on Python. I used Frozen-Flask to freeze the site into a static website, which could then be hosted on Netlify. This great library kept the codebase small and gave me the power to build the website on the JAMstack.

Scoring

For allocating points and building up the leaderboard, the script uses the following scheme. If you feel that a rule is biased against the others, you can open an issue and we will have a conversation about it.

Event                  Points
Each line of commit    0.01
Opened merge request   5
Closed merge request   10
Opened issue           1
Closed issue           2
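In other words, a hacker's score is just a weighted sum of their activity counts. Here is a rough sketch of the idea in Python (the function and the event names are made up for illustration; this is not the project's actual code):

```python
# Point weights from the table above.
WEIGHTS = {
    "commit_lines": 0.01,   # each line of commit
    "mr_opened": 5,         # opened merge request
    "mr_closed": 10,        # closed merge request
    "issue_opened": 1,
    "issue_closed": 2,
}

def score(activity):
    """Weighted sum of one hacker's activity counts (illustrative sketch)."""
    return sum(WEIGHTS[event] * count for event, count in activity.items())

# e.g. 200 commit lines, 2 MRs opened, 1 MR closed, 3 issues opened
print(score({"commit_lines": 200, "mr_opened": 2, "mr_closed": 1,
             "issue_opened": 3}))
```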

Awards

The script gives you awards for staying on the leaderboard. You can earn four types of awards:

  • Gold
  • Silver
  • Bronze
  • Top 10

For each day spent on the leaderboard, a hacker gets +1 towards each award they are eligible for.

GitHub Actions

Since I have the GitHub Pro pack, I get 3,000 free build minutes for GitHub Actions, which is an effective tool for automating tasks. The workflow is simple and is clearly explained by the graphic below.

Workflow for GNOME Hackers

The website is built every day at 00:00 UTC. After the workflow executes successfully, the website build is pushed to the website branch, which triggers a deploy hook on Netlify and publishes the website accordingly.
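A scheduled workflow of this kind looks roughly as follows (a sketch only: the job layout, script names and the branch-pushing step are illustrative, not the project's actual workflow file):

```yaml
# .github/workflows/build.yml (illustrative sketch)
name: build-leaderboard
on:
  schedule:
    - cron: '0 0 * * *'        # every day at 00:00 UTC
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v1
        with:
          python-version: '3.8'
      - run: pip install -r requirements.txt
      - run: python build.py     # fetch GitLab data, score, freeze the site
      - run: |                   # push the frozen site to the "website" branch
          git checkout -B website
          git add -f build/
          git commit -m "Daily build"
          git push -f origin website
```

Netlify then watches the website branch and deploys whatever lands there.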

Personal Page

Personal profile page for GNOME Hackers

What’s next

If you liked my work, you can show your appreciation by buying me a cup of coffee. I was also granted GNOME membership while I was working on this project; it made me really happy, and I want to thank GNOME for all the support. For this website, I am looking forward to new ideas that I can implement to make it even more interesting. If you liked my project, I would love for you to star it as well. These little things encourage me to keep working.

I am looking for an internship where I can apply my skills and learn new ones as well. If you have any such opportunity for me, you can reach me through my email.

Let me know if you have any doubts, appreciation or anything else that you would like to communicate to me. You can tweet me @ravgeetdhillon; I reply to all questions as quickly as possible. 😄 And if you liked this post, please share it with your Twitter community as well.

December 08, 2019

«Chris stared out of the window», a theological tale

My English class. We had to write a story starting with 🙶Chris stared out of the window waiting for the phone to ring🙷. Let’s do it.


Chris stared out of the window waiting for the phone to ring. He looked into the void while his mind wandered. Time is passing, but it is not. For an eternal being all time is present, but not always is time present. The past, the future are just states of mind for an overlord. But he is still waiting for the phone to ring. Time is coming. The decision was made. It had always been made, before time existed indeed. Chris knows all the details of the plan. He knows because he is God too. He knows because he conceived it. No matter if he had been waiting for the starting signal. No matter if he expects the Holy Spirit to announce it to him. You can call it protocol. He knows because he had decided how to do it. But Chris doubts. He is God. He is the Holy Spirit. But he has been human too. The remembrance of his humanity brings him to a controversial state of mind. Now he doubts. He has always been doubting, since the world is the world and before the existence of time. And after too, because he is an eternal being. He now relives the feelings of being a human. He relives all the feelings of being all the humans. He revisits joy and pain. Joy is good, sure. But joy is nothing special for an overlord god. But pain… pain matters for a human — and Chris has been a human. Chris knows. Chris feels. Chris understands how sad human life can be. Chris knows because he is the Father creator. He created humans. He knows how mediocrity drives the character of all the creatures privileged with consciousness. A poisoned gift. He knows how evil is always just an expression of insecurities, the lack of certainty about what will happen tomorrow. What will happen just the next second. 🙶Will I be alive? Will the pain still exist?🙷. He knows because he feels. And he doubts because he feels. He feels it can’t be fair to judge, to condemn, to punish a living being because it was created that way. This would not be the full-of-love god the Evangelists announced.
How could he punish for a sin he was the cause of? But, if not, can it be fair to all the others who behaved according to the Word? All of those who, maybe with love, maybe with pain, maybe just out of selfishness, fulfilled God’s proposal of virtue and goodness. How can he not distinguish their works, not reward their efforts? How can it be fair? How can he be good? His is the power and the glory. He is in doubt. The phone rings.

HTML overlays with GstWPE, the demo

Once again this year I attended the GStreamer Conference and, just before that, the Embedded Linux Conference Europe, which took place in Lyon (France). Both events were a good opportunity to demo one of the use-cases I have in mind for GstWPE: HTML overlays!

As we, at Igalia, usually have a booth at ELC, I thought a GstWPE demo would be nice to have so we can show it there. The demo is a rather simple GTK application presenting a live preview of the webcam video capture with an HTML overlay blended in. The HTML and CSS can be modified using the embedded text editor and the overlay will be updated accordingly. The final video stream can even be streamed over RTMP to the main streaming platforms (Twitch, Youtube, Mixer)! Here is a screenshot:

The code of the demo is available on Igalia’s GitHub. Interested people should be able to try it as well, the app is packaged as a Flatpak. See the instructions in the GitHub repo for more details.

Having this demo running on our booth greatly helped us to explain GstWPE and how it can be used in real-life GStreamer applications. Combining the flexibility of the Multimedia framework with the wide (wild) features of the Web Platform will for sure increase the synergy and foster collaboration!

As a reminder, GstWPE is available in GStreamer since the 1.16 version. On the roadmap for the mid-term I plan to add Audio support, thus allowing even better integration between WPEWebKit and GStreamer. Imagine injecting PCM audio coming from WPEWebKit into an audio mixer in your GStreamer-based application! Stay tuned.

December 07, 2019

There are (at least) three distinct dependency types

Using dependencies is one of the main problems in software development today. It has become even more complicated with the recent emergence of new programming languages and the need to combine them with existing programs. Most discussion about it has been informal and high level, so let's see if we can make it more disciplined and how different dependency approaches work.

What do we mean when we say "work"?

In this post we are going to use the word "work" in a very specific way. A dependency application is said to work if and only if we can take two separate code projects where one uses the other and use them together without needing to write special case code. That is, we should be able to snap the two projects together like Lego. If this can be done to arbitrary projects with a success rate of more than 95%, then the approach can be said to work.

It should be especially noted that "I tried this with two trivial helloworld projects and it worked for me" does not fulfill the requirements of working. Sadly this line of reasoning is used all too often in online dependency discussions, but it is not a response that holds any weight. Any approach that has not been tested with at least tens (preferably hundreds) of packages does not have enough real world usage experience to be taken seriously.

The phases of a project

Every project has three distinctive phases on its way from source code to a final executable.
  1. Original source in the source directory
  2. Compiled build artifacts in the build directory
  3. Installed build artifacts on the system
In the typical build workflow step 1 happens after you have done a Git checkout or equivalent. Step 2 happens after you have successfully built the code with ninja all. Step 3 happens after a ninja install.

The dependency classes

Each of these phases has a corresponding way to use dependencies. The first and last ones are simple so let's examine those first.

The first one is the simplest. In a source-only world you just copy the dependency's source inside your own project, rewrite the build definition files and use it as if it were an integral part of your own code base. The monorepos used by Google, Facebook et al. are done in this fashion. The main downside is that importing and updating dependencies is a lot of work.

The third approach is the traditional Linux distro approach. Each project is built in isolation and installed either on the system or in a custom prefix. Each dependency provides a pkg-config file, which defines both the dependency and how it should be used. This approach is easy to use and scales really well; the main downside is that you usually end up with multiple versions of some dependency libraries on the same file system, which means that they will eventually get mixed up and crash in spectacular but confusing ways.
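For reference, a pkg-config file is only a few lines of metadata. An illustrative example for a hypothetical library foo (the names and versions are made up):

```
# /usr/lib/pkgconfig/foo.pc (illustrative)
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: foo
Description: Example library
Version: 1.2.0
Requires: glib-2.0 >= 2.50
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}
```

A build system then runs pkg-config --cflags --libs foo and gets back the compiler and linker flags, without knowing anything about how foo itself was built.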

The second approach

A common thing people want to do is to mix two different languages and build systems in the same build directory. That is, to build multiple different programming languages with their own build systems intermixed so that one uses the built artifacts of the other directly from the build dir.

This turns out to be much, much, much more difficult than the other two. But why is that?

Approach #3 works because each project is clearly separated and the installed formats are simple, unambiguous and well established. This is not the case for build directories. Most people don't know this, but binaries in build directories are not the same as the installed ones. Every build system conjures up its own special magic and does things slightly differently. The unwritten contract has been that the build directory is each build system's internal implementation detail. They can do with it whatever they want, just as long as after install they provide the output in the standard form.

Mixing the contents of two build systems' build directories is not something that "just happens". Making one "just call" the other does not work simply because of the N^2 problem. For example, currently you'd probably want to support C and C++ with Autotools, CMake and Meson, D with Dub, Rust with Cargo, Swift with SwiftPM, Java with Maven (?) and C# with MSBuild. That is already up to 8*7 = 56 integrations to write and maintain.

The traditional way out is to define a data interchange protocol to declare build-dir dependencies. This has to be at least as rich in semantics as pkg-config, because that is what it is: a pkg-config for build dirs. In addition to that you need to formalise all the other things about setup and layout that pkg-config gets for free by convention and in addition you need to make every build system adhere to that. This seems like a tall order and no-one's really working on it as far as I know.

What can we do?

If build directories can't be mixed and system installation does not work due to the potential of library mixups, is there anything that we can do? It turns out not only that we can, but that there is already a potential solution (or at least an approach for one): Flatpak.

The basic idea behind Flatpak is that it defines a standalone file system for each application that looks like a traditional Linux system's root file system. Dependencies are built and installed there as if one was installing them to the system prefix. What makes this special is that the filesystem separation is enforced by the kernel. Within each application's file system only one version of any library is visible. It is impossible to accidentally use the wrong version. This is what traditional techniques such as rpath and LD_LIBRARY_PATH have always tried to achieve, but have never been able to do reliably. With kernel functionality this becomes possible, even easy.

What sets Flatpak apart from existing app container technologies such as iOS and Android apps, UWP and so on is its practicality. Other techs are all about defining new, incompatible worlds that are extremely limited and invasive (for example spawning new processes is often prohibited). Flatpak is not. It is about making the app environment look as much as possible like the enclosing system. In fact it goes to great lengths to make this work transparently and it succeeds admirably. There is not a single developer on earth who would tolerate doing their own development inside a, say, iOS app. It is just too limited. By contrast developing inside Flatpak is not only possible and convenient, but something people already do today.

The possible solution, then, is to shift the dependency consumption from option 2 to option 3 as much as possible. It has only one real new requirement: each programming language must have a build system agnostic way of providing prebuilt libraries. Preferably this should be pkg-config but any similar neutral format will do. (For those exclaiming "we can't do that, we don't have a stable ABI", do not worry. Within the Flatpak world there is only one toolchain, system changes cause a full rebuild.)

With this, the problem is now solved. All one needs to do is write a Flatpak builder manifest that builds and installs the dependencies in the correct order. In this way we can mix and match languages and build systems in arbitrary combinations and things will just work. We know it will work, because this is essentially how Debian, Fedora and all the other distros are already put together.
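Such a manifest is essentially an ordered list of modules to build and install into the app's prefix. A minimal hypothetical sketch (the app id, module names and URLs are made up):

```json
{
  "app-id": "org.example.App",
  "runtime": "org.freedesktop.Platform",
  "runtime-version": "19.08",
  "sdk": "org.freedesktop.Sdk",
  "command": "app",
  "modules": [
    {
      "name": "libdependency",
      "buildsystem": "cmake",
      "sources": [{ "type": "git", "url": "https://example.com/libdependency.git" }]
    },
    {
      "name": "app",
      "buildsystem": "meson",
      "sources": [{ "type": "git", "url": "https://example.com/app.git" }]
    }
  ]
}
```

flatpak-builder builds each module in order and installs it into the same prefix, so the app's build finds libdependency exactly as it would on a traditionally assembled system.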

What this blog is about

In this blog, I’m going to post about the progress I make and the things that strike me during my Outreachy internship. Outreachy interns work on an open-source project for three months under the guidance of a mentor from the community. In my case the community is GNOME, my mentor is danigm, and the project is Fractal. Fractal is a pretty cool GTK desktop application for real-time communication through the Matrix protocol. It’s written in Rust. Here’s a link: https://wiki.gnome.org/Apps/Fractal

The task

The goal of my internship is to implement a video player in Fractal. Right now, receiving a video attachment is handled the same way as receiving a PDF attachment: the functionalities Fractal provides for it are “open” (with another application) and “save”.

I’m going to integrate a video player into the Fractal application that allows the user to directly play the video inside the application. I’ll use GStreamer for that.

About the programming language: Rust

In order to ask for an Outreachy grant for a certain open-source project, applicants first have to contribute to that project for about a month. When choosing a project, I didn’t know any Rust. But the fact that Fractal is written in Rust was, out of curiosity, an important point in its favor. I also expected to have a hard time at the beginning. Fortunately, that wasn’t really the case. For those who haven’t used Rust, let me give two of the reasons why:

If you just start coding, the compiler takes you by the hand giving you advice like “You have done X. You can’t do that because of Y. Did you maybe mean to do Z?”. I took those pieces of advice as an opportunity to dig into the rules I had violated. That’s definitely a possible way to get a first grip on Rust.

Nevertheless, there are pretty good sources for learning the basics, for example the Rust Book. Well, to be precise, there’s at least one (sorry, I’m a mathematician, can’t help it; I’ve only started reading that one so far). It’s not short, but it’s very fast to read and easy to understand, the only exception, in my opinion, being the topics on lifetimes. But lifetimes can still be understood by other means.

About the GUI library: GTK-rs

The GUI library Fractal uses is GTK-rs, a Rust wrapper around the C library GTK. One random interesting fact about GTK-rs that caught my attention at some point while reading the Fractal code was the following. Based on GObject, GTK uses inheritance structures. For example, the class Label is a subclass of Widget. In Rust there aren’t classes. Label is a type and Widget is as well. So how does Label inherit from Widget in GTK-rs? Well, strictly speaking, it doesn’t. But both types implement a trait called Cast (in Haskell jargon: they are of type class Cast). In fact, any type in GTK-rs coming from GObject implements Cast. The Cast trait allows a type to be converted to the types corresponding to superclasses (or subclasses, when that makes sense) in the GObject tree. That’s how you can convert a label to a widget, call a Widget method on it, and, if you want, convert it back.

Converting an instance of a type to a type corresponding to a super- or subclass (in the GObject logic) is called upcast or downcast, respectively. But how does GTK-rs capture the subclass/superclass logic, if the concept of classes doesn’t exist? The answer is: via the trait IsA<T> (here, T is a type parameter). If a type corresponds to a GObject subclass of another type P, then it implements the trait IsA<P> (here, P is a concrete type: the one corresponding to the superclass). For example, Label implements the trait IsA<Widget>. Of course, Widget has far more subclasses than just Label. All of them implement the trait IsA<Widget>.

Now, let me come back to the end of the penultimate paragraph and explain why the Cast trait allows a label to be upcasted to a widget. By definition of the trait Cast, saying that Label implements Cast means that it has to have one method upcast<P> for every type P for which it implements IsA<P>. So it has to have a method upcast<Widget>. That method converts a label into a widget.

Downcasting methods are guaranteed very similarly. To start with, whenever a type T implements the trait IsA<P> for some type P (i.e. P corresponds to a GObject superclass of T), the type P implements the trait CanDowncast<T>. Therefore, Widget implements CanDowncast<Label>. Again by definition of the trait Cast, the fact that Widget implements Cast means that it has to have one method downcast<T> for every type T for which it implements CanDowncast<T>. So it has to have a method downcast<Label>. Notice that a widget can only be downcast to a label if it came from a label in the first place. That is captured by the fact that the return type of downcast<Label> on Widget is Result<Label, Widget>.
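To make the mechanism concrete, here is a small self-contained sketch of the same pattern with hand-rolled stand-ins (these are not the real GTK-rs types or trait definitions; in the real crate, upcast and downcast are generic methods on the single Cast trait, while this sketch splits them per target type to stay dependency-free):

```rust
// Toy stand-ins for the GObject hierarchy: Widget plays the "superclass",
// Label the "subclass". Not the real gtk-rs API.
#[derive(Debug, PartialEq)]
enum WidgetKind {
    Plain,
    Label(String),
}

#[derive(Debug, PartialEq)]
struct Widget {
    kind: WidgetKind,
}

#[derive(Debug, PartialEq)]
struct Label {
    text: String,
}

// Analogue of gtk-rs's IsA<P>: implemented when Self corresponds to a
// GObject subclass of P. Upcasting always succeeds.
trait IsA<P>: Sized {
    fn upcast(self) -> P;
}

// Analogue of CanDowncast<T>: downcasting may fail, so it returns
// Result<T, Self>, handing the original value back on failure.
trait CanDowncast<T>: Sized {
    fn downcast(self) -> Result<T, Self>;
}

impl IsA<Widget> for Label {
    fn upcast(self) -> Widget {
        Widget { kind: WidgetKind::Label(self.text) }
    }
}

impl CanDowncast<Label> for Widget {
    fn downcast(self) -> Result<Label, Widget> {
        match self.kind {
            WidgetKind::Label(text) => Ok(Label { text }),
            other => Err(Widget { kind: other }),
        }
    }
}

fn main() {
    // Upcast a label to a widget, then downcast it back.
    let label = Label { text: "hello".to_string() };
    let widget: Widget = label.upcast();
    let back: Result<Label, Widget> = widget.downcast();
    assert_eq!(back, Ok(Label { text: "hello".to_string() }));

    // A plain widget refuses to downcast to a label.
    let plain = Widget { kind: WidgetKind::Plain };
    let failed: Result<Label, Widget> = plain.downcast();
    assert!(failed.is_err());
    println!("casts behaved as expected");
}
```

The Result<Label, Widget> return type mirrors the real API: a failed downcast hands the original widget back instead of panicking.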

Of course, if it wasn’t for wrapping around an object- and inheritance-oriented library, one might directly work in a different mindset in Rust. But it’s interesting to see the tricks that have been used to realize this GObject mindset in Rust.

December 06, 2019

Barcelona: LAS 2019

This November I was in Barcelona for the Linux App Summit 2019. It was awesome \o/. I really liked that the conference was a joint event by GNOME and KDE, and I met so many cool new people. During the conference I volunteered to show the “time left” signs to speakers and helped out at the registration desk.

Aside from the normal conference stuff, I also managed to do quite a bit of hacking during the week. I made my first contribution to GNOME Initial Setup, and cleaned up Teleport a bit, so I can hopefully get a new release out soon.

I’m bad at taking pictures, so here’s a picture of a tree in the middle of the stairs on the slopes of Montjuïc.

Thanks to the GNOME Foundation and Purism for sponsoring my travel and accommodation. The whole event was a lot of fun, so my thanks also go to all the organizers and people who helped make it happen!

LaTeX or ConTeXt for writing documents

I’ve started to learn a little ConTeXt, a language for writing documents. It is similar to LaTeX, but is easier to manage.

To quote a former user of GNOME LaTeX who has filed a feature request to add ConTeXt support:

“It includes about all packages you probably need, so in most cases, you won’t need additional ones – and thus don’t have package rivalries at all! Furthermore it’s much more consistent, way more convenient and also more powerful.

I did my bachelor’s thesis with LaTeX and my master’s thesis with ConTeXt – which was SO MUCH more comfortable than LaTeX, that I won’t use LaTeX anymore at all.”

If I wanted to re-implement GNOME LaTeX, it would target the ConTeXt language instead. If there are any ConTeXt users reading this, I would be interested to know what application you use for writing ConTeXt documents, and which features are important to you.


Outreachy - Week 01, Day 04!

December 06, 2019

Task for the week:

  • Try to replicate the gnome-builder “search and replace bar” widget (just the wire-frame) in the Gtranslator project.

    (Hopefully, I will continue it sometime in the next week)

  • Understand the structure and working of libdazzle’s example application.

Progress of the day:

Today was good. Fortunately, I wasted (almost) no time solving application build errors. Besides, I even started writing actual code, which didn’t work properly yet, but I’m happy that at least I managed to start somewhere. :)

  • I started the day by continuing to build the Gedit application. It went smoothly, but there was a lot to do before I actually got it working. Meson marked almost half of the runtime dependencies as missing, so I had to install them manually. I’m writing up my process here, hoping it might help someone else facing the same issues later.
    • The first one was Tepl. If you get this error, I seriously recommend not checking the GitLab/GitHub repository at all (you’ll end up wasting all your time and it still won’t work). The best solution is to look for the source tarball on the GNOME wiki here. I needed to install amtk-5 & uchardet first as well. The steps to install are simple (and the same goes for almost all the other dependencies):
      $ ./configure
      $ make
      # change into root if required.
      $ make install
      
    • Now, the process is the same for the rest of the missing dependencies, so I’m just naming them here. You can find the respective tarballs on their GNOME wiki project pages.
      1. libpeas
      2. libsoup
      3. gspell
      4. enchant
    • After installing all these missing dependencies, the next error was Couldn't find include 'GtkSource-4.gir' (search path: '['/usr/share/gir-1.0', '/usr/share', 'gir-1.0', '/usr/share/gir-1.0', '/usr/share/gir-1.0']'). The GtkSource-4.gir file was actually present at /usr/local/share/gir-1.0/. Creating a symlink from the file into the required search path, i.e. /usr/share/gir-1.0, solved the issue.

    • Lastly, the itstool module was missing. Simply installing it from the Debian package manager finally resulted in a successful Gedit build. :)
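The symlink fix for the GtkSource-4.gir error can be sketched as a couple of shell commands. The scratch directories below merely stand in for the real system paths, so the snippet runs without root; on an actual system it would be a single `sudo ln -s` from /usr/local/share/gir-1.0 into /usr/share/gir-1.0.

```shell
# Stand-in directories for /usr/local/share/gir-1.0 (where the file was)
# and /usr/share/gir-1.0 (where the build system looks for it).
src_dir=$(mktemp -d)
dst_dir=$(mktemp -d)

# Pretend this is the installed GtkSource-4.gir.
touch "$src_dir/GtkSource-4.gir"

# The actual fix: symlink the file into the searched path.
ln -s "$src_dir/GtkSource-4.gir" "$dst_dir/GtkSource-4.gir"

ls -l "$dst_dir/GtkSource-4.gir"
```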
  • After getting the Gedit build running, the next step was to understand its search-bar implementation by making modifications to the source code. It was definitely worth spending time on, but I soon realised I should focus on the libdazzle implementation (in the gnome-builder application) instead, as the latter is more code-efficient and exactly the way I am actually supposed to do it. Gedit’s implementation of just the search bar itself is around 1500 lines of code, whereas gnome-builder, using libdazzle, implements the whole search-and-replace combo in about a third of that.

  • Next, I started copying the source code for the search-and-replace bar from gnome-builder’s src/libide/editor/ide-editor-search-bar.c file and its respective headers into gtranslator (good gracious, finally I’m back to my own project). And as expected, it didn’t work. Even though it’s just a third of Gedit’s implementation, that is still around 600 lines of code, and simply copying it will surely not work. Therefore, the only solution left is to take a step backwards and learn how to use libdazzle from scratch via its example application tutorial. (And that is why the task for the week has changed now.)

  • Learning from last time’s failure of trying to build the example/app project as a separate, individual project, this time I’ve taken one more step back. I will first work through flatpak’s Getting Started documentation and then try building it again from the start. Hopefully, it’ll turn out to be a good foundation. Or, if nothing works, I will discuss it again with danigm in the first weekly meeting.

  • I also learnt to write proper structs. (I know it is quite simple for most others, but I never wrote them before, so I needed some good insights.)

(I am quite satisfied with my progress today. At least I have some takeaways to guide my further planning.)

That’s all for now.

Till tomorrow. o/

December 05, 2019

Suspending Patreon

I originally wrote a version of this post on Patreon itself but suspending my page hides my posts on there. Oops.

There’s been a lot of change for me over the past year or two, in real life and as a member of the free software community (like my recent joining of Purism), that has shifted my focus away from why I originally launched a Patreon, so I felt it was time to deactivate my creator page.

The support I got on Patreon for my humble projects and community participation over the many months my page was active will always be much appreciated! Having a Patreon (or some other kind of small recurring financial support service) as a free software contributor fueled not only my ability to contribute but my enthusiasm for free software. Support for small independent free software developers, designers, contributors and projects from folks in the community (not just through things like Patreon) goes a long way and I look forward to shifting into a more supportive role myself.

I’m going forward with gratitude to the community, so much thanks to all the folks who were my patrons. Go forth and spread the love! ❤️

How to Run a Usability Test

One of the most important steps of the design process is “usability testing”. It gives designers the chance to put themselves in other people’s shoes by gathering direct feedback from people in real time to determine how usable an interface may be. This is just as important for the free and open source software development process as it is for any other.

Though free software projects often lack sufficient resources for other more extensive testing methods, there are some basic techniques that can be done by non-experts with just a bit of planning and time—anyone can do this!

Free Software Usability

Perhaps notoriously, free software interfaces have long been unapproachable; how many times have you heard: “this software is great…once you figure it out.” The steep learning curve of many free software applications is not representative of how usable or useful it is. More often than not it’s indicative of free software’s relative complexity, and that can be attributed to the focus on baking features into a piece of software without regard for how usable they are.

A screenshot of the calibre e-book management app's poor UI

Free software developers are often making applications for themselves and their peers, and the step in development where you’d figure out how easy it is for other people to use—testing—gets skipped. In other words, as the author of the application you are of course familiar with how the user interface is laid out and how to access all the functionality; you wrote it. A new user would not be, and may need time or knowledge to discover the functionality. This is where usability testing comes in, helping you figure out how easy your software is to use.

What is “Usability Testing”?

For those unfamiliar with the concept, usability testing is a set of methods in user-centric design meant to evaluate a product or application’s capacity to meet its intended purpose. Careful observation of people while they use your product, to see if it matches what it was intended for, is the foundation of usability testing.

The great thing is that you don’t need years of experience to run some basic usability tests, you need only sit down with a small group of people, get them to use your software, and listen and observe.

What Usability Testing is Not

Gathering people’s opinions (solicited or otherwise) on a product is not usability testing; that’s market research. Usability testing isn’t about querying people’s already-formed thoughts on a product or design, it’s about determining whether they understand a given function of a product or its purpose by having them use said product and gathering feedback.

Usability is not subjective, it is concrete and measurable, and therefore testable.

Preparing a Usability Test

To start, pick a series of tasks within the application that you want to test that you believe would be straightforward for the average person to complete. For example: “Set the desktop background” in a photos app, “Save a file with a new name” in a text editor, “Compose a new email” in an email client, etc. It is easiest to pick tasks that correspond to functions of your application that are (intended to be) evident in the user interface and not something more abstract. Remember: you are testing the user interface not the participant’s own ability to do a task.

You should also pick tasks that you would expect to take no more than a few minutes each; if participants fail to complete a task in a timely manner, that is okay and is useful information.

Create Relatable Scenarios

To help would-be participants of your test, draft simple hypothetical scenarios or stories around these tasks which they can empathize with to make them more comfortable. It is very important in these scenarios that you do not use the same phrasing as present in the user interface or reference the interface as it would be too influential on the testers’ process. For instance, if you were testing whether an email client’s compose action was discoverable, you would not say:

Compose an email to your aunt using the new message button.

This gives too much away about the interface as it would prompt people to look for the button. The scenario should be more general and have aspects that everyone can relate to:

It’s your aunt’s birthday and you want to send her a well-wishes message. Please compose a new email wishing her a happy birthday.

These “relatable” aspects give the participant something to latch onto, and make the goal of the task clearer by allowing them to insert themselves into the scenario.

Finding Participants

Speaking of participants, you need at least five people for your test; after five there are diminishing returns, since the more people you add, the less you learn, as you’ll begin to see things repeat. This article goes into more detail, but to quote its summary:

Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford.

This is not to say that you stop after a single test with five individuals, it’s that repetitive tests with small groups allow you to uncover problems that you can address and retest efficiently, given limited resources.

Also, the more random the selection group is, the better the results of your test will be—“random” as if you grabbed passers-by in the hallway or on the street. As a bonus, it’s also best to offer some sort of small gratuity for participating, to motivate people to sign up.

Warming Up Participants

It’s also important to not have the participants jump into a test cold. Give participants some background and context for the tests and brief them on what you are trying to accomplish. Make it absolutely clear that the goal is to test the interface, not them or their abilities; it is very important to stress to the participants that their completion of a task is not the purpose of the test but determining the usability of the product is. Inability to complete a task is a reflection of the design not of their abilities.

Preliminary Data Gathering

Before testing, gather important demographic information from your participants, things like age, gender (how they identify), etc. and gauge their level of familiarity with or knowledge of the product category, such as: “how familiar are you with Linux/GNOME/free software on a scale from 1-5?” All this will be helpful as you break down the test results for analysis to see trends or patterns across test results.

Running the Test

Present the scenarios for each task one at a time and separately, so as not to overload the participants. Encourage participants to give vocal feedback as they do the test, and to be as frank and critical as possible to make the results more valuable, assuring them your feelings will not be hurt by doing so.

During the task you must be attentive and observe several things at once: the routes they take through your app, what they do or say during or about the process, their body language, and the problems they encounter—this is where extensive note-taking comes in.

No Hints!

Do not interfere in the task at hand by giving hints or directly helping the participant. While the correct action may be obvious or apparent to you, the value is in learning what isn’t obvious to other people.

If participants ask for help it is best to respond with guiding questions; if a participant gets stuck, prompt them to continue with questions such as “what do you think you should do?” or “where do you think you should click?” But if they choose not to finish, or are unable to, that is okay.

Be Watchful

The vast majority of stumbling blocks are found by watching the body language of people during testing. Watch for signs of confusion or frustration—frowning, squinting, sighing, hunched shoulders, etc.—when a participant is testing your product and make note of it, but do not make assumptions about why they became frustrated or confused: ask them why.

It is perfectly alright to pause the test when you see signs of confusion or frustration and say:

I noticed you seemed confused/frustrated, care to tell me what was going through your mind when you were [the specific thing they were doing]?

It’s here where you will learn why someone got lost in your application and that insight is valuable.

Take Notes

For the love of GNU, pay close attention to the participants and take notes. Closely note how difficult a participant finds a task, what their body language is while they do the task, how long it takes them, and problems and criticisms participants have. Having participants think aloud or periodically asking them how they feel about aspects of the task, is extremely beneficial for your note-taking as well.

To supplement your later analysis, you may make use of screen and/or voice-recording during testing but only if your participants are comfortable with it and give informed consent. Do not rely on direct recording methods as they can often be distracting or disconcerting and you want people to be relaxed during testing so they can focus, and not be wary of the recording device.

Concluding the Test

When the tasks are all complete you can choose to debrief participants about the full purpose of the test and answer any outstanding questions they may have. If all goes well you will have some data that can be insightful to the development of your application and for addressing design problems, after further analysis.

Collating Results

Usability testing data is extremely useful to user experience and interaction designers as it can inform our decision-making over interface layouts, interaction models, etc. and help us solve problems that get uncovered.

Whether or not we conduct the testing and research ourselves, it’s important that the data gathered is clearly presented. Graphs, charts and spreadsheets are incredibly useful in your write-up for communicating the breakdown of test results.

Heat Maps

It helps to visualize issues with tasks in a heat map, which is an illustration that accounts for the perceived difficulty of a given task for each participant by colour-coding them in a table.

Example Heat Map

The above is a non-specific example that illustrates how the data can be represented: green for successful completion of the task, yellow for moderate difficulty, red for a lot of difficulty, and black for an incomplete. From this heat map, we can immediately see patterns that we can address by looking deeper into the results; we can see how “Task 1” and “Task 6” presented a lot of difficulty for most of the participants, and that requires further investigation.
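As a rough sketch (with made-up data), such a heat map is simply a participants-by-tasks table, where each cell records how the participant fared; in the rendered version each cell would be colour-coded rather than spelled out:

```
               Task 1   Task 2   Task 3
Participant 1  red      green    yellow
Participant 2  red      green    green
Participant 3  black    yellow   green
Participant 4  red      green    yellow
Participant 5  yellow   green    green
```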

More Usable Free Software

Conducting usability testing on free software shouldn’t be an afterthought of the development process but rather it should be a deeply integrated component. However, the reality is that the resources of free software projects (including large ones like GNOME) are quite limited, so one of my goals with this post is to empower you to do more usability testing on your own—you don’t have to be an expert—and to help out and contribute to larger software projects to make up for the limits on resources.

Usability Testing GNOME

Since I work on the design of GNOME, I would be more than happy to help you facilitate usability testing for GNOME applications and software. So do not hesitate to reach out if you would like me to review your plans for usability testing or to share results of any testing that you do.


Further Reading

If you’re interested in more resources or information about usability, I can recommend some additional reading:

First responder: Fire alarm, iffy emergency exit doors

In the Netherlands, every company is required by law to have first responders. They handle various situations until the professionals arrive. It’s usually one of a (possible) fire, a medical situation, or an evacuation. Normally I’d post this on Google+, but as that’s gone I’m putting the details on this blog. I prefer writing it down so I can still read the details later on.

November 6: P2000 warning

In the Netherlands, any ambulance/fire department call is sent via the P2000 system. There are various sites and apps which make that information easily available. I have one of these apps on my phone.

I noticed an ambulance P2000 message for our postal code. The postal code could be either our building or the one under construction next door. Until a few years ago P2000 would give the exact address, but for privacy reasons nowadays it’s only the postal code. I ask security if there’s something going on. They’re not aware, but they’ll monitor more closely (possibly ask around; e.g. reception might be aware).

A bit later security sees an ambulance arriving for the building under construction. The ambulance has difficulty getting to the right location due to the blocked road, and for a while security was still under the impression it might be for our building. The building next door had various vans and construction materials incorrectly placed on the road. Things they should not have there, and which they could’ve and should’ve cleared while waiting for the ambulance to arrive.

November 11: Fire alarm

In the morning security had a talk with the construction crews next door about all the vans and so on blocking the road. A recurring problem, mere days after their ambulance incident. This time they blocked even more of the road. Even the place near the goods elevator was blocked (the registered fire department arrival location, plus where we direct all ambulances).

At 13:23 an automatic fire alarm went off for our building. Various people respond via walkie-talkie. The location normally would require walking (running) up 7 floors; unfortunately this time I was on the 6th floor. Arriving on the floor, various indicator lights are indeed on, and I meet various other first responders, plus security (who happened to be near the affected floor). We’re 4 in total.

The fire detector is located in a shaft. Opening this up requires special keys. Interestingly enough, 3 out of the 4 responders are ones who carry such keys. We open the shaft after doing a door check. The detector is one which detects smoke (security advised us of this), so it’s not the usual temperature-sensitive one. The shaft itself is very warm, but there’s also some heating equipment in there. The heat, combined with a detector which only responds to smoke, gives us a suspicion that the heat is not indicative of anything. Despite that, there’s no indication of a reason for the detector to go off.

Two (me + another) are sent to investigate lower floors. We check one floor lower. Again we follow door procedure, accidentally open the wrong shaft, then the right one, nothing aside from heat. We try again another floor down, shaft is way cooler but nothing can be found. After communicating the latest finding we’re asked to regroup. After regrouping we decide to investigate the floor above.

On the floor above we notice some renovation going on (security is aware). But also an opened elevator shaft and some running equipment near that shaft. Opened shafts are very dangerous, it would allow any fire to easily spread within the building. This happened to a building 300m from our location.

As first responders this is the end of it. It was a fun exercise. Security however is not amused (understatement). Due to various rules no evacuation sound went off. Due to the fire detector being in a shaft any additional detector would’ve resulted in the automatic and legally unstoppable evacuation of the entire building.

As is common in pretty much any bigger building, the fire detectors are linked to the fire department. After the detector went off, the fire department call centre allowed us to investigate. There was some communication mishap in that call centre: while we were investigating, the fire department crew showed up, lights and everything. This was quite visible to the next door construction crew… the ones who keep blocking the road. This was entirely unexpected, so it took a bit of time to talk to the fire department crew. The crew was, however, more than happy to wait for our investigation to finish.

Iffy emergency exit doors

In case of a fire people are supposed to use the emergency stair cases. If you open any of such doors anywhere in the building security is immediately notified. This is for safety reasons; maybe something is wrong, there’s a real evacuation going on and it’s not something caught by the fire detectors or a call to security.

These emergency doors can also automatically unlock. Due to age, doors all over the building are now automatically unlocking often, leading to all kinds of false positives for security. A bit like a test case failing because the hardware running the test case is faulty.

I get asked a few times to manually close a few doors. A few times I notice. Many other times I do not notice the walkie talkie communication, nor my office phone or mobile being called. Oops.

I’m wondering if I can make calls from certain numbers always be noisy. I often use vibrate/silent mode instead of e.g. do not disturb.

Eventually the problem is mostly fixed. It required the right company coming down various times.

Outreachy - Week 01, Day 03!

December 05, 2019

Task for the week:

  • Try to replicate the gnome-builder “search and replace bar” widget (just the wire-frame) in the Gtranslator project.

Progress of the day:

  • Today was an extremely long day (I am still working and it’s 23:47 here). There is not much actual progress from the day, but I definitely learnt a hell of a lot of new things. And there is even more left for me to understand.

I am extremely tired now 😅, so I’m compiling the rest of it very briefly and quickly now.

  • I’m yet to learn how to use libdazzle properly (and I literally spent my whole day figuring out just this). None of my example projects worked as intended, but there is definitely considerable progress, in that I can understand the required code much better now.

  • Yesterday, though I was able to build the gnome-builder project properly, I was still facing another major issue. Whenever I tried to run the project in Builder IDE, it did not reflect any changes that I made in the source code; it always ran the same as the installed Builder IDE instance. So there was no way for me to understand the effects my source-code changes were having on the application. In the end, I had to reach out to my mentor for the solution, and I am quoting his reply as-is here.

Builder uses build profiles to build.

Check the build preferences in the builder. Click top-left button -> Build preferences and check the flatpak manifest that is being used.

Maybe the problem is the installed application name. Because if the application name is org.gnome.Builder, and it is already running, then the build run for the application under development won’t launch a new instance.

So, you can try to change the app-id property in the flatpak manifest file and use org.gnome.BuilderDevel or something like that instead.
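For illustration, the change he suggests would look something like this in the flatpak manifest (a hypothetical, trimmed-down JSON fragment; the real Builder manifest has many more fields and modules):

```json
{
  "app-id": "org.gnome.BuilderDevel",
  "runtime": "org.gnome.Platform",
  "runtime-version": "master",
  "sdk": "org.gnome.Sdk",
  "command": "gnome-builder"
}
```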

  • Changing the app-id in the flatpak manifest file does work to a great extent for me. The build run is finally isolated from the actual installed application, but still the changes are not being reflected. I am leaving that for tomorrow now.

  • This whole process actually made me learn how to write basic meson.build files from scratch, and then flatpak manifest files as well, which ultimately makes it really easy for me to understand the basic project structure, build system, runtime, external dependencies, and a lot of other things.
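As an example of the kind of basic meson.build I mean (the project and file names here are illustrative, not taken from gtranslator):

```meson
# Minimal meson.build for a single-executable C project.
project('example-app', 'c', version: '0.1.0')

# External dependencies are resolved via pkg-config.
gtk_dep = dependency('gtk+-3.0')
dazzle_dep = dependency('libdazzle-1.0')

executable('example-app', 'main.c',
  dependencies: [gtk_dep, dazzle_dep],
  install: true)
```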

  • I was trying to build libdazzle/example/app as an individual project, but failed. I even took danigm’s help, but I still was not able to fix the build files (which means I have yet to learn a lot more when it comes to writing meson.build files for large projects). So, again, leaving it for tomorrow.

  • The best thing of the day was that I figured out another application that implements a search bar (not entirely, but still something worthwhile) like gnome-builder’s. Gedit has a drop-down kind of search bar that is invoked with the Ctrl+F shortcut. So I can refer to the respective source code for help.

  • I need to work a lot on my focus, as today I spent half the day just digging into rabbit holes for stuff that was not needed at all. Hopefully, tomorrow will be better in terms of my work-focus.

  • Misc:

    • I spent a couple of hours trying to fix ERROR: Dependency "tepl-4" not found, tried pkgconfig and cmake. I managed to get the package’s git files, but the master branch has some config errors. So, no solution so far.

That’s all for today.

(And the time is 00:59 now. Will straight-away jump to bed now.)

Good night o/

Resources:

  1. Meson Tutorial (https://mesonbuild.com/Tutorial.html)
  2. Flatpak Manifests (http://docs.flatpak.org/en/latest/manifests.html)
  3. libdazzle example project (https://gitlab.gnome.org/GNOME/libdazzle/tree/master/examples/app)

December 04, 2019

There is no “Linux” Platform (Part 1)

This blog post is based on the talk Jordan Petridis and I gave at LAS 2019 in Barcelona.

In our community there is this idea that “Linux” is the third platform next to Windows and macOS. It’s closely connected to things like the “year of the Linux desktop”, and can be seen in the language around things like Flatpak, which bills itself as “The Future of Apps on Linux” and the Linux App Summit, which is “designed to accelerate the growth of the Linux application ecosystem”.

But what does that actually mean? What does a healthy app ecosystem look like? And why don’t we have one?

I think the core of the problem is actually the layer below that: Before we can have healthy ecosystems, we need healthy platforms to build them on.

What is a Platform?

The word “platform” is often used without a clear definition of what exactly that entails. If we look at other successful platforms there are a ton of different things enabling their success, which are easy to miss when you just look at the surface.

On the developer side you need an operating system developers can use to make apps. You also need a developer SDK and tooling which are integrated with the operating system. You need developer documentation, tutorials, etc. so people can learn how to develop for the platform. And of course once the apps are built there needs to be an app store to submit them to.

Developers can’t make great apps all by themselves, for that you also need designers. Designers need tools to mock up and prototype apps, platform UI patterns for things like layout and navigation, so every app doesn’t have to reinvent the wheel, and a visual design language so designers can make their app fit in with the rest of the system visually. You also need Human Interface Guidelines documenting all of the above, as well as tutorials and other educational resources to help people learn to design for the platform.

On the end user side you need a consumer OS with an integrated app store, where people can get the great apps developers make. The consumer OS can be the same as the developer OS, but doesn’t have to be (e.g. it isn’t for Android or iOS). You also need a way for people to get help/support when they have problems with their system (whether that’s physical stores, a help website, or just easily google-able Stack Overflow questions).

That’s a lot of different things, but we can group them into four major pieces which are needed in order for something to be a real platform:

  • Operating System
  • Developer Platform
  • Design Language
  • App Store

So if we look at the free software world, where are the platforms?

Linux?

Linux is a kernel, which can be used to build OSes, which can be used to build platforms. Some people (e.g. Google with Android) have done so, but a kernel by itself clearly doesn’t have any of the four things outlined above, and therefore is not a platform.

FreeDesktop.org?

What about “Desktop Linux”, which is what people usually mean when they say “Linux”? The problem is that this term doesn’t have a clear definition. You could take it to mean “FreeDesktop.org”, but that also doesn’t come close to being a platform. FreeDesktop is a set of standards that can be used to build platforms (and/or ensure some level of compatibility between different platforms). Endorsement of a single platform or set of technologies goes directly against FreeDesktop’s aims, and as such it should only be thought of as the common building blocks platforms might share.

Ubuntu?

What about distributions? Ubuntu is one of the most popular ones, and unlike others it has its own app store. It still isn’t a platform though, because it doesn’t have the most critical pieces: a developer SDK/technology stack, and a design language.

Other distributions are in a similar but worse position because they don’t have an app store.

GNOME?

GNOME is the most popular desktop stack, and it does have an SDK and design language. However, it only sort of has an app store (because GNOME people work on Flathub), and it doesn’t have an OS. Many distributions ship GNOME, but they are all different in various ways (more on this later), so they don’t provide a unified development target.

Elementary?

Despite being a relatively small project, elementary is attracting third party developers making apps specifically for their platform.

Interestingly, the only project which currently has all the pieces is elementary. It has an OS, an SDK, a HIG, and an app store to submit apps to. The OS is largely Ubuntu and the technology stack largely GNOME, but it develops its own desktop and apps on top of that, and does the integration work to make it into a complete consumer product.

This raises the question: why is elementary the only one?

The Means of Distribution

The reasons for this are largely historical. In the early days, free software desktops were a bunch of independently developed components. They were not necessarily designed for each other, or well integrated. This meant in order to have a usable system, someone needed to curate these components and assemble them into an operating system: The first distributions were born.

Over the last decades this landscape has changed drastically, however. While GNOME 1 was a set of loosely coupled components, GNOME 2 was already much more cohesive and GNOME 3 is now essentially an integrated product. The shell, core apps, and underlying technologies are all designed with each other in mind, and provide a complete OS experience.

Desktops like GNOME have expanded their scope to cover most of the responsibilities of platforms, and are in effect platforms now, minus the OS part. They have a very clear vision of how the system should work, and app developers target them directly.

The elementary project has taken this development to its logical end point, and made its own vertically integrated OS and app store. This is why it’s the only “real” platform in the free software space at the moment.

GNOME has a relatively vibrant ecosystem of nice third party apps now, despite not being a complete platform (yet). This gives us a glimpse of the potential of this ecosystem.

Distributions, on the other hand, have not really changed since the 90s. They still do integration work on desktop components, package system and applications, set defaults, and make UX decisions. They still operate as if they’re making a product from independent components, even though the actual product work is happening at the desktop layer now.

This disconnect has led to tensions in many areas, which affect both the quality of the system user experience, and the health of the app ecosystem.

What’s interesting about this situation is that desktop developers are now in the same situation app developers have always been in. Unlike desktops, apps have always been complete products. Because of this they have always suffered from the fragmentation and disconnect between developers and users introduced by distribution packaging.

Because of this, app developers and desktop developers now more or less share a common set of grievances with the distribution model, which include:

  • Release schedule: Developers don’t have control over the pace at which people get updates to their software. For apps this can mean people still get old versions of software with issues that were fixed upstream years ago. For desktops it’s even worse, because it means app developers don’t know what version of the platform to target, especially since this can vary wildly (some distributions release every 6 months, others every 2+ years).
  • Packaging errors: Distribution packaging is prone to errors because individual packagers are overloaded (often maintaining dozens or hundreds of packages), and don’t know the software as well as the developers.
  • Overriding upstream decisions: When distributions disagree with upstream decisions, they sometimes keep old versions of software, or apply downstream patches that override the author’s intentions. This is very frustrating if you’re an app developer, because users never experience your app as you intended it to be. However, similar to the release schedule issue, it’s even worse when it happens to the core system, because it fragments the platform for app developers.
  • Distro Theming: App developers test with the platform stylesheet and icons, so when distributions change these it can break applications in highly visible ways (invisible widgets, unreadable text, wrong icon metaphors). This is especially bad for third party apps, which get little or no testing from the downstream stylesheet developers. This blog post explains the issue in more detail.

The Wrong Incentives

The reason for a lot of these issues is the incentives on the distribution side. Distributions are shipping software directly to end users, so it’s very tempting to solve any issues they find downstream and just ship them directly. But because the distributions don’t actually develop the software this leads to a number of other problems:

  • Perpetual rebasing: Any change that isn’t upstreamed needs to be rebased on every future version of the upstream software.
  • Incoherent user experience: Downstream solutions to UX problems are often simplistic and don’t fix the entire issue, because they don’t have the development resources for a proper fix. This leads to awkward half-redesigns, which aren’t as polished or thought-through as the original design.
  • Ecosystem fragmentation: Every downstream change adds yet another variable app developers need to test for. The more distributions do it, the worse it gets.

The Endless OS shell is a great example of this. They started out with vanilla GNOME Shell, but then added ever more downstream patches in order to address issues found in in-house usability tests. This means that they end up having to do huge rebases every release, which is a lot of work. At the same time, the issues that prompted the changes do not get fixed upstream (Endless have recently changed their strategy and are working upstream much more now, so hopefully this will get better in the future).

This situation is clearly bad for everyone involved: Distributions spend a ton of resources rebasing their patches forever, app developers don’t have a clear target, and end users get a sub-par experience.

So, what could we do to improve this? We’ll discuss that in Part 2 of this series :)

GNOME.Asia Summit 2019

More than a month after coming back from Gresik, here is my late report on GNOME.Asia Summit 2019.

This year, GNOME.Asia Summit 2019 was held at Universitas Muhammadiyah Gresik. It was the seventh GNOME.Asia Summit I have attended.

Indonesia Traditional Dance at opening

On the first day of the conference, the opening ceremony was very traditional and splendid. Among the talks, I listened to the opening talk by Neil McGovern and “pulseaudio: Improvement on Audio Streams Switch” by Hui Wang, from which I learned a lot about PulseAudio. And I gave a beginner talk introducing GNOME Shell.

Cool background wall

On the second day, I enjoyed “GNOME Foundation — We’re Here to Help” by Rosanna Yuen and “How To Contribute To FOSS Projects” by Ahmad Haris; although the second one was in the local language, I could still feel the exciting atmosphere, especially during the lucky draw in the closing session. In the lightning talks, Fenris left a deep impression on me: he would like to bid for GNOME.Asia 2020 in Malaysia.

Group photo by Muhammad Firdaus

On the third day, the local team organized a one-day tour in Gresik; it was a good experience to learn how Batik is made. After that, we also visited some local places, such as the GIRI museum and University International Semen Indonesia.

Batik workshop

And the great part of the conference was that I enjoyed meeting a lot of old friends and making new friends over these three days.

Finally, thanks to the GNOME Foundation for sponsoring my trip to GNOME.Asia Summit 2019. Hope to see you at GNOME.Asia Summit 2020!

Here is the GNOME.Asia Summit 2019 photo stream.

https://photos.app.goo.gl/dXwiYBzHWWUvnkg67

OSFC 2019 – Introducing the Linux Vendor Firmware Service

A few months ago I gave a talk at OSFC.io titled Introducing the Linux Vendor Firmware Service.

If you have a few minutes it’s a really useful high-level view of the entire architecture, along with a few quick dives into some of the useful things the LVFS can do. Questions and comments welcome!

December 03, 2019

GNOME programs go global

The GNOME project is built by a vibrant community and supported by the GNOME Foundation, a 501(c)(3) nonprofit charity registered in California (USA). The GNOME community has spent more than 20 years creating a desktop environment designed for the user. We’re asking you to join us by becoming a Friend of GNOME.

The GNOME community hosts numerous hackfests, meetings, workshops, and first-time contributor events around the world. We also host two very special events: GUADEC and GNOME.Asia. These two conferences bring together GNOME contributors, enthusiasts, and the GNOME curious twice a year on two different continents. Over the past few years, we have also organized the Linux Application Summit (LAS) with the KDE community.

Every year, GUADEC (GNOME’s biggest annual conference) brings together developers, designers, users, and other experts and enthusiasts for a week of talks, workshops, roundtables, team building, and more. GUADEC is one of the most important events for the GNOME community, giving us an unparalleled opportunity to push the project forward. GUADEC 2019 was no exception. Taking place in the beautiful city of Thessaloniki, Greece from 23 – 28 of August, we had conversations on a variety of topics and a splendid range of presentations, many of which are available online.

A photo of ten people on a stage. Many of them are smiling.

GUADEC not only offers a place for people to enjoy different sessions and workshops, but is also a unique opportunity to bring together the GNOME Foundation staff, board members, and Advisory Board to make strategic decisions.

While GUADEC has historically been in Europe, we are very excited that GUADEC 2020 will take place in Zacatecas, Mexico. This will provide an opportunity for people who have trouble traveling to Europe. By hosting the event on the North American continent, a whole new group of people will be able to join us to celebrate GNOME.

Another interesting event we have is GNOME.Asia. GNOME.Asia 2019 took place in Gresik, Indonesia between 11 – 13 of October at the Universitas Muhammadiyah Gresik (UMG). This too was a rousing success. It was the biggest event organized by the GNOME community in Asia, with the first day dedicated to workshops and the second and third days for presentations.

In 2019 we also worked with the KDE community on organizing LAS in Barcelona, Spain. LAS is designed to accelerate the growth of the Linux application ecosystem by bringing together everyone involved in creating a great Linux application user experience. Thanks to the generosity of sponsors and the hard work of the organizing team, attendance was free for everyone.

Among the hackfests this past year, there was a particularly large West Coast Hackfest, which took place in Portland, OR. The focus was on getting the members of the Documentation, Engagement, and GTK teams working together for four days to push some initiatives forward. This was a unique opportunity for the Documentation team to work on ideas that had been planned for some time. Members of the Engagement team worked on activities such as social media strategy, event planning, and merchandise design. The GTK team continued their outstanding work on one of the most popular free libraries for graphical user interfaces.

GNOME events are organized by the GNOME community, with the support of GNOME Foundation employees, principally Programs Coordinator Kristi Progri, with sponsorship assistance from Strategic Initiatives Manager Molly de Blanc. These events are built by the GNOME community, and supported by the GNOME Foundation. We provide infrastructure and organizational support for the local and global teams who spearhead these events. We work alongside the community to make these events happen.

In 2020, we are going to continue to step up for the community and are asking you to join us by becoming a Friend of GNOME. Through this, you’re helping to make amazing events like these possible. By continuing our work, we are able to support the GNOME community and help it grow. We want to keep doing this, and we want you to help us.

We recommend a recurring, monthly donation of $25 ($5/month for students). As thanks for becoming a Friend of GNOME, we’ll send you a thank you postcard from a GNOME hacker and offer you a discount on swag at events. If you donate more than $30 a month, you are eligible for a subscription to LWN at no additional cost to you. If you donate more than $500 a year, Executive Director Neil McGovern will send you a special thank you note.

Everything the GNOME Foundation does is for the GNOME community. By supporting us, you’re supporting a global community looking to serve everyone, regardless of geography or language. Join us in working towards a brighter future for GNOME by becoming a Friend of GNOME today.

This Month in Mutter & GNOME Shell | November 2019

GNOME Shell

GNOME Shell saw many improvements during November. The commit log was dominated by cleanups, but a few improvements and polishments also found their way into the code.

The authentication dialog received a batch of bugfixes, and many cleanups of deprecated objects and functions landed. The top panel’s application name is now correctly sized by hiding the spinner near it.

GNOME Shell’s cache of icons and textures received a fix to invalidate properly when dealing with scaling changes. All-day events are properly displayed in the messaging menu now.

Finally, the Alt-Tab switcher now doesn’t mistakenly show an overflow indicator when the list of windows fits the screen size.

Libcroco Removal

The libcroco dependency was dropped by importing the source files into St. This is an important step in getting rid of libcroco, which is a dated CSS parsing library.

App Grid Improvements

The icon grid saw an important fix to dragging application icons. The icons were not properly being destroyed, and thus were piling up after being dragged and dropped over time. This fix was further improved to work in more situations. This set of fixes was backported to the 3.34 release.

A nice visual improvement landed on the page indicator of the icon grid.


System Font

GNOME Shell now respects the system font!

Mutter

For Mutter, November highlights were the introduction of regional clipping in Cogl, and big code cleanups.

Regional Clipping

When applications and GNOME Shell draw themselves, they communicate which parts of their contents changed. That information allows Mutter to submit only the changed contents to the monitor, which is an important optimization.

Example of GNOME Clocks being partially redrawn
Example of GNOME Clocks being partially redrawn. The changed parts are painted in red.

Until GNOME 3.34, Mutter would calculate the bounding rectangle between all the regions that changed:

Mutter would submit the bounding box of all updated regions (in blue). In many situations, such as the above, that would include more than necessary.

This month, Mutter received the ability to update multiple regions independently, without using the bounding rectangle. In the example, Mutter now updates only what has actually changed:

The regions that Mutter submits (in blue) now match the regions that really changed in the first picture (in red)

This yielded a significant improvement too! Under some circumstances, this change alone can reduce the time to submit frames by up to 44%.
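To illustrate why per-region updates pay off, here is a small, hypothetical Python sketch (not Mutter’s actual code) comparing the pixel area submitted when damage is approximated by a single bounding box versus when each damaged region is submitted individually:

```python
# Illustrative sketch: bounding-box damage vs. per-region damage.
# Rectangles are (x, y, width, height) tuples in window coordinates.

def bounding_box(rects):
    """Smallest rectangle covering all damage rects."""
    x1 = min(x for x, y, w, h in rects)
    y1 = min(y for x, y, w, h in rects)
    x2 = max(x + w for x, y, w, h in rects)
    y2 = max(y + h for x, y, w, h in rects)
    return (x1, y1, x2 - x1, y2 - y1)

def area(rect):
    return rect[2] * rect[3]

# Two small damaged corners of a 1000x1000 window,
# e.g. the changing digits of a clock.
damage = [(0, 0, 100, 100), (900, 900, 100, 100)]

bbox_pixels = area(bounding_box(damage))        # one big rectangle
regional_pixels = sum(area(r) for r in damage)  # only what changed
print(bbox_pixels, regional_pixels)  # 1000000 20000
```

In this worst-ish case the bounding box covers the whole window while the real damage is only 2% of it, which is the kind of saving the 44% figure above hints at.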

Shadow Buffer

In some situations, in the native backend we now use a shadow buffer to render the stage off-screen before copying the content over to the actual buffer handed over to the display panel. While this may sound counterproductive, it significantly increases performance, from unusable to fairly pleasant, on those systems that need it.

Other Highlights

We now prevent full window redraws when using dma-buf or EGLImage buffers on Wayland (mutter!948). This fixes partial updates of windows on Wayland, which can reduce the amount of data transferred between GPUs, CPUs, and the monitor. Together with the regional clipping explained above, this should significantly help saving battery.

Many, many Clutter and Cogl cleanups (mutter!921, mutter!819, mutter!933, mutter!932) landed too. These merge requests remove deprecated functions and features that, as time passes, are an increasing burden for maintenance, and in some cases also prevent improvements and optimizations. About 28000 lines of legacy code have been cleaned out from Mutter’s own Cogl and Clutter versions so far, since we entered the 3.36 development phase. Extension authors, please make sure your extensions don’t use any of the removed code.

One legacy feature that dates back to when Clutter was a separate library used to write client applications was removed (mutter!911) from Mutter’s internal copy of Clutter. Not clearing the stage doesn’t make sense on a compositor.

Xwayland games that run fullscreen and change resolution should behave better now (mutter!739).

We’ve also seen a few bug fixes landing, for example fixes to Drag n’ Drop, a couple of memory leak fixes, crash fixes including one related to hot plugging and another that sometimes occurred when running Intellij, and a bug fix avoiding stuck full screen content.

December 02, 2019

GNOME Shell Hackfest 2019

This October I attended the GNOME Shell Hackfest 2019 in the Netherlands. It was originally just planned as a small hackfest for core Shell developers, but then we designers decided to crash the party and it became a pretty big thing. In the end we were about 15 people from lots of different companies, including Red Hat, Endless, Purism, and Canonical. The venue was the Revspace hackerspace in Leidschendam, which is somewhere between the Hague and Leiden.

The venue was very cool, with plenty of hackerspace-y gadgets and a room with couches and a whiteboard, which was perfect for the design team’s planning sessions.

Excitement on the first day

Allan, Jakub, and I were primarily there to make progress on some long-standing issues with GNOME Shell, such as new user onboarding, the app grid, and the spatial model of the Shell. We’ve wanted to address many of these things for a long time (in fact, some of them were already discussed at the London hackfest 2 years ago). In the weeks leading up to the hackfest we had already been working on this (together with Sam Hewitt who couldn’t make it to the hackfest unfortunately), preparing a number of concepts to be worked out in more detail.

Jakub and Allan hard at work

At the hackfest we made these concepts more concrete, worked on mockups and prototypes, and discussed them with Shell developers. It’s still early days for all of this, but we’re very excited about sharing it more widely soon.

Jakub presenting some exciting prototypes to the Shell developers

We also worked on a number of other things, such as the new lock screen design, which Georges has started to implement, prettier Shell dialogs, and some changes to the system status menu.

Dinner on the final day

Thanks to Carlos Garnacho and Hans de Goede for organizing, Revspace for hosting us, and the GNOME Foundation for sponsoring my travel and accommodation!

Conferences

This year I haven’t done any drone-related travelling. The sponsorship deal fell through and Rotorama didn’t participate in DCL. I admit I haven’t been practicing as much as I would need to in order to do any better in the local races either.

So at least the world of FOSS got me off the couch.

Berlin

Tobias organized yet another icon-related hackfest in Berlin earlier this year. This time we had some talented young developers help us out with the tooling. This effort to focus on the tools as well as the assets is continuing and we’ll have some more exciting news to share soon.

Hackfest Berlin 2019 from jimmac on Vimeo.

Thessaloniki

GUADEC continues bringing awesome southern locations, which a vitamin D deprived monkey from a rainy climate can’t appreciate enough. I have fallen back to my comfort zone and only given a short workflow/demo on icon design this year, mainly because Tobias has been giving great talks on focusing on design.

I still have a video to finish editing, but it ended up more of a personal one so I’m not sure I’ll publicize it that much.

the Hague

And we’re closing the year with another design hackfest. Big shout out to Hans de Goede and Carlos Garnacho for organizing a shell hackfest in the Netherlands, and for letting some designers crash the party to revive our efforts in attacking some of the downsides of the current overview design. The facilities of Revspace allowed us to meet face to face, mind-map on the whiteboard, iterate on some prototypes, and move forward considerably compared to the usual cycle spanning months.

December 01, 2019

Into the Pyramid

November 2019 wasn’t an easy month, for various reasons, and it also rained every single day of the month. But there were some highlights!

LibreTeo

At the bus stop one day I saw a poster for a local Free Software related event called LibreTeo. Of course I went, and saw some interesting talks related to technology and culture and also a useful workshop on improving your clown skills. Actually the clown workshop was a highlight. It was a small event but very friendly, I met several local Free Software heads, and we were even invited for lunch with the volunteers who organized it.

Purr Data on Flathub

I want to do my part to increase the number of apps that are easy to install on Linux. I asked developers to Flatpak your app today last year, and this month I took the opportunity to package Purr Data on Flathub.

Here’s a quick demo video, showing one of the PD examples which generates an ‘audible illusion’ of a tone that descends forever, known as a Shepard Tone.
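For the curious, here is a minimal, hypothetical Python sketch of the idea behind a Shepard tone (not the actual Pure Data patch): several octave-spaced partials whose loudness follows a bell curve, so as each partial glides down and fades out at the bottom, another fades in at the top, and the pitch seems to descend forever.

```python
import math

RATE = 8000      # samples per second (illustrative, not CD quality)
OCTAVES = 6      # number of simultaneous octave-spaced partials
F_MIN = 55.0     # lowest partial frequency in Hz

def shepard_sample(t, cycle=4.0):
    """One mono sample at time t; the downward glide repeats every `cycle` seconds."""
    phase = (t % cycle) / cycle            # 0..1, position within one glide
    total = 0.0
    for k in range(OCTAVES):
        # each partial descends by one octave per cycle, then wraps to the top
        pos = (k + 1.0 - phase) % OCTAVES
        freq = F_MIN * (2.0 ** pos)
        # bell-shaped loudness: quiet at the extremes, loud in the middle
        amp = math.sin(math.pi * pos / OCTAVES) ** 2
        total += amp * math.sin(2.0 * math.pi * freq * t)
    return total / OCTAVES                 # keep the sum within [-1, 1]

# one second of audio samples
samples = [shepard_sample(i / RATE) for i in range(RATE)]
```

This is only a sketch of the construction; a real patch would also keep the phase of each partial continuous to avoid clicks.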

As always the motivation is a selfish one. I own an Organelle synth – it’s a hackable Linux-based device that generates sound using Pure Data, and I want to be able to edit the patches!

Pure Data is a very powerful open source tool for audio programming, but it’s never had much commercial interest (unlike its proprietary sibling Max/MSP) and that’s probably why the default UI is still implemented in Tcl/Tk in 2019. The Purr Data fork has made a lot of progress on an alternative HTML5/JavaScript UI, so I decided this would be more suitable for a Flathub package.

I was particularly motivated by the ongoing Pipewire project which is aiming to unify pro and consumer audio APIs on Linux in a Flatpak-friendly way. Christian Schaller mentioned this recently:

There is also a plan to have a core set of ProAudio applications available as Flatpaks for Fedora Workstation 32 tested and verified to work perfectly with Pipewire.

The Purr Data app will benefit a lot from this work. It currently has to use the OSS backend inside the sandbox and doesn’t seem to successfully communicate over MIDI either — so it’s rather a “tech preview” at this stage.

The developers of Purr Data are happy about the Flatpak packaging, although they aren’t interested in sharing the maintenance effort right now. If anyone reading this would like to help me with improving and maintaining the Purr Data Flatpak, please get in touch! I expect the effort required to be minimal, but I’d like to have a bus factor > 1.

Tracker bug fixes

This month we fixed a couple of issues in Tracker which were causing system lockups for some people. It was very encouraging to see people volunteering their time to help track down the issue, both in Gitlab issue 95 and in #tracker on IRC, and everyone involved in the discussion stayed really positive even though it’s obviously quite annoying when your computer keeps freezing.

In the end there were several things that come together to cause system lockups:

  • Tracker has a ‘generic image extraction’ rule that tries to find metadata for any image/* MIME type that isn’t a .bmp, .jpg, .gif, or .png. This codepath uses the GstDiscoverer API, the same as for video and audio files, in the hope that a GStreamer plugin on the system can give us useful info about the image.
  • The GstDiscoverer instance is created with a timeout of 5 seconds. (This seems quite high — the gst-typefind utility that ships with GStreamer uses a timeout of 1 second).
  • GStreamer’s GstDiscoverer API feeds any file where the type is unknown into an MPEG decoder, which is effectively an unwanted fuzz test and can trigger periods of high CPU and memory usage.
  • 5 seconds of processing non-MPEG data with an MPEG decoder is somehow enough to cause Linux’s scheduler to lock up the entire system.

We fixed this in the stable branches by blocking certain problematic MIME types. In the next major release of Tracker we will probably remove this codepath completely as the risks seem to outweigh the benefits.
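The rule described above can be pictured with a small Python sketch. The function and set names below are illustrative, not Tracker’s actual API, and the blocklist entries are examples rather than the real blocked types:

```python
# Hypothetical sketch of the 'generic image extraction' routing described
# above: image/* MIME types without a dedicated extractor go to a generic
# GStreamer-based path, except for types blocked in the stable branches.

DEDICATED = {"image/bmp", "image/jpeg", "image/gif", "image/png"}
BLOCKED = {"image/x-icns", "image/svg+xml"}  # example entries only

def extraction_rule(mime):
    if not mime.startswith("image/"):
        return "not-an-image"
    if mime in DEDICATED:
        return "dedicated-extractor"   # safe, purpose-built code path
    if mime in BLOCKED:
        return "skip"                  # stable-branch fix: never reach GStreamer
    return "generic-gstreamer"         # risky path a future release may drop

print(extraction_rule("image/png"))     # dedicated-extractor
print(extraction_rule("image/x-icns"))  # skip
```

Removing the codepath entirely, as planned for the next major release, would amount to turning the final `generic-gstreamer` branch into `skip` as well.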

Other bits

I also did some work on a pet project of mine called Calliope, related with music recommendations and playlist generation. More on this in a separate blog post.

And I finally installed Fedora on my partner’s laptop. It was nice to see that GNOME Shell works out-of-the-box on 12-year-old consumer hardware. The fan, which was spinning 100% of the time under Windows 8, is virtually silent now – I had actually thought this problem was due to dust buildup or a hardware issue, but once again the cause was actually low-quality proprietary software.

Adopting GitLab workflow

In October 2018 there was a face-to-face short meeting with a big part of libosinfo maintainers, some contributors, and some users.

This short meeting took place during a lunch break in one of KVM Forum 2018 days and, among other things, we discussed whether we should allow, and / or prefer receiving patches through GitLab Merge Requests.

Here’s the announcement:

[Libosinfo] Merge Requests are enabled!

    From: Fabiano Fidêncio <fidencio redhat com>
    To: "libosinfo redhat com" <libosinfo redhat com>
    Subject: [Libosinfo] Merge Requests are enabled!
    Date: Fri, 21 Dec 2018 16:48:14 +0100

People,

Although the preferred way to contribute to libosinfo, osinfo-db and
osinfo-db-tools is still sending patches to this ML, we've decided to
also enable Merge Requests on our gitlab!

Best Regards,
--
Fabiano Fidêncio

Now, one year past that decision, let’s check what has been done, review some numbers, and discuss what’s my take, as one of the maintainers, of the decision we made.

2019, the experiment begins …

After the e-mail shown above was sent, I kept using the mailing list as the preferred way to submit and review patches, keeping an eye on GitLab Merge Requests, until August 2019, when I fully switched to GitLab instead of the mailing list.

… and what changed? …

Well, to be honest, not much. But to explain a little bit more, I first have to describe my (not exactly optimal) workflow.

Even before describing my workflow, let me just make clear that:

  • I don’t have any scripts that would fetch the patches from my e-mail and apply them automagically for me;

  • I never ever got used to text-based mail clients (I’m a former Evolution developer and have been an Evolution user for several years);

Knowing those things, this is what my workflow looks like:

  • Development: I’ve been using GitLab for a few years as the main host of my forks of the projects I contribute to. When developing a new feature, I would:

    • Create a new branch;
    • Do the needed changes;
    • Push the new branch to the project on my GitLab account;
    • Submit the patches;
  • Review: It may sound weird, maybe it really is, but the way I do review patches is by:

    • Getting the patches submitted;
    • Applying atop of master;
    • Doing a git rebase -i so I can go through each one of the patches;
    • Then, for each one of the patches I would:
      • Add comments;
      • Do fix-up changes;
      • Squash my fixes atop of the original patch;
      • Move to the next patch;

And now, knowing my workflow, I can tell that pretty much nothing changed.

As part of the development workflow:

  • Submitting patches:

    • git publish -> click on the URL printed when a new branch is pushed to GitLab;
  • Reviewing patches:

    • Saving patch e-mails as mbox, applying them to my tree -> pull the MR

Everything else stays pretty much the same. I still do a git rebase -i and go through the patches, adding comments / fix-ups which, later on I’ll have to organise and paste somewhere (either replying to the e-mail or adding to GitLab’s web UI) and that’s the part which consumes the most of my time.

However, although the change was not big to me as a developer, some people had to adapt their workflow in order to start reviewing all the patches I’ve been submitting to GitLab. But let’s approach this later on … :-)

Anyways, it’s important to make it crystal clear that this is my personal experience and that I do understand that people who rely more heavily on text-based mail clients and / or with a bunch of scripts tailored for their development would have a different, way way different, experience.

… do we have more contributions since the switch? …

As by November 26th, I’ve checked the amount of submissions we had on both libosinfo mailing list and libosinfo GitLab page during the current year.

Mind that I’m not counting my own submissions, and that I’m counting an osinfo-db addition, which usually consists of adding data & tests, as a single submission.

As for the mailing list, we’ve received 32 patches; as for the GitLab, we’ve received 34 patches.

Quite a similar number of contributions; let’s dig a little bit more.

The 32 patches sent to our mailing list came from 8 different contributors, and all of them had at least one previous patch merged in one of the libosinfo projects.

The 34 patches sent to our GitLab came from 15 different contributors and, from those, only 6 of them had at least one previous patch merged in one of the libosinfo projects, whilst 9 of them were first time contributors (and I hope they’ll stay around, I sincerely do ;-)).

Maybe one thing to consider here is whether forking a project on GitLab is easier than subscribing to a new mailing list when submitting a patch. This is something people usually do once per project they contribute to, but subscribing to a mailing list may actually be a barrier.

Some people would argue, though, that it’s a barrier both ways, mainly considering that one may extensively contribute to projects using one workflow or the other. IMHO, that’s not exactly true. Subscribing to a mailing list and getting the patches correctly formatted feels more difficult than forking a repo and submitting a Merge Request.

In my personal case, I can tell the only projects I contribute to which still didn’t adopt GitLab / GitHub workflow are the libvirt ones, although it may change in the near future, as mentioned by Daniel P. Berrangé on his KVM Forum 2019 talk.

… what are the pros and cons? …

When talking about the “pros” and “cons”, it is really hard to pin down exactly which pros and cons are objective and which are subjective.

  • pros

    • CI: The possibility to have a CI running for all libosinfo projects, running the tests we have on each MR, without any effort / knowledge of the contributor about this;

    • Tracking non-reviewed patches: Although this one may be related to each one’s workflow, it’s objective that figuring out which Merge Requests need review on GitLab is way easier for a new contributor than navigating through a mailing list;

    • Centralisation: This is one of the subjective ones, for sure. For libosinfo we have adopted GitLab as its issue tracker as well, which makes my life as maintainer quite easy as I have “Issues” and “Merge Requests” in a single place. It may not be true for different projects, though.

  • cons

    • Reviewing commit messages: It seems to be impossible to review commit messages, unless you make a comment about that. Making a comment, though, is not exactly practical as I cannot go specifically to the line I want to comment and make a suggestion.

    • Creating an account to yet another service: This is another one of the subjective ones. It bothers me a lot, having to create an account on a different service in order to contribute to a project. This is my case with GitLab, GNOME GitLab, and GitHub. However, is that different from subscribing to a few different mailing lists? :-)

Those are, for me, the most prominent “pros” and “cons”. There are a few other things that I’ve seen people complaining, being the most common one related to changing their workflow. And this is something worth its own section! :-)

… is there something out there to make my workflow change easier? …

Yes and no. That’s a horrible answer, ain’t it? :-)

Daniel P. Berrangé has created a project called Bichon, which is a tool providing a terminal based user interface for reviewing GitLab Merge Requests.

Cool, right? In general, yes. But you have to keep in mind that the project is still in its embryonic stage. When more mature, I’m pretty sure it’ll help people used to mailing lists workflow to easily adapt to GitLab workflow without leaving behind the facilities of doing everything via command-line.

I’ve been using the tool for simple things, and I’ve been contributing to the tool with simple patches. It’s fair to say that I do prefer adding a comment to Merge Requests, approving, and merging them using Bichon rather than via the web UI. Is the tool enough to suffice all the people’s needs? Of course not. Will it be? Hardly. But it may be enough to remove the blockers to migrating away from the mailing list workflow.

… a few words from different contributors …

I’ve decided to ask Cole Robinson and Felipe Borges a word or two about this subject as they are contributors / reviewers of libosinfo projects.

It should go without saying that their opinions should not be taken as “this workflow is better than the other”. However, take their words as valid points from people who are heavily using one workflow or the other, as Cole Robinson comes from libvirt / virt-tools world, which rely heavily on mailing list, and Felipe Borges comes from GNOME world, which is a huge GitLab consumer.

“The change made things different for me, slightly worse but not in any disruptive way. The main place I feel the pain is putting review comments into a web UI rather than inline in email which is more natural for me. For a busier project than libosinfo I think the pain would ramp up, but it would also force me to adapt more. I’m still largely maintaining an email based review workflow and not living in GitLab / GitHub” - Cole Robinson

“The switch to Gitlab has significantly lowered the threshold for people getting started. The mailing list workflow has its advantages but it is an entry barrier for new contributors that don’t use native mail clients and that learned the Pull Request workflow promoted by GitLab/GitHub. New contributors now can easily browse the existing Issues and find something to work on, all in the same place. Reviewing contributions with inline discussions and being able to track the status of CI pipelines in the same interface is definitely a must. I’m sure Libosinfo foresees an increase in the number of contributors without losing existing ones, considering that another advantage of Gitlab is that it allows developers to interact with the service from email, similarly to the email-driven git workflow that we were using before.” - Felipe Borges

… is there any conclusion from the author’s side?

First of all, I have to emphasize two points:

  • Avoid keeping both workflows: Although we do that on libosinfo, it’s something I’d strongly discourage. It’s almost impossible to keep the information in sync in both places in a reasonable way.

  • Be aware of changes, be welcome to changes: As mentioned above, migrating from one workflow to another will be disruptive at some level. Is it actually a blocker? Although it was not for me, it may be for you. The thing to keep in mind here is to be aware of changes and welcome them knowing you won’t have a 1:1 replacement for your current workflow.

With that said, I’m mostly happy with the change made. The number of old time contributors has not decreased and, at the same time, the number of first time contributors has increased.

Another interesting fact is that the number of contributions using the mailing list has decreased: we have only had 4 contributions through that channel since June 2019.

Well, that’s all I have to say about the topic. I sincerely hope that reading through this content helps your project and its contributors get a better idea of what the migration involves.

November 27, 2019

Adventures in fixing suspend/resume on a HP x2 Detachable 10-p0XX

I got contacted by a user with a HP X2 10 p018wm 2-in-1 about the device waking up 10-60 seconds after suspend. I have access to a HP X2 10 p002nd myself which in essence is the same HW and I managed to reproduce the problem there. This is when the fun started:

1. There were a whole bunch of ACPI related errors in dmesg. It turns out that these affect almost all HP laptop models and we have multiple bugs open for this. Debugging these pointed to the hp-wmi driver. I wrote 2 patches fixing 2 different kinds of errors and submitted them upstream. Unfortunately this does not help with the suspend/resume issue, but it does fix all those errors people have been complaining about :)

2. I noticed some weird messages in dmesg which look like a PCI bus re-enumeration being started during suspend when suspending by closing the lid, with the re-enumeration continuing after resume. This turns out to be triggered by this piece of buggy AML code which is used for monitor hotplug notification on gfx state changes (the i915 driver ACPI opregion also tracks the lid state for some reason):

                Method (GNOT, 2, NotSerialized)
                {
                    ...
                    CEVT = Arg0
                    CSTS = 0x03
                    If (((CHPD == Zero) && (Arg1 == Zero)))
                    {
                        If (((OSYS > 0x07D0) || (OSYS < 0x07D6)))
                        {
                            Notify (PCI0, Arg1)
                        }
                        Else
                        {
                            Notify (GFX0, Arg1)
                        }
                    }
                    ...
                }

Notice how "If (((OSYS > 0x07D0) || (OSYS < 0x07D6)))" is always true: the condition is broken, the "||" clearly should have been an "&&". This causes the code to send a hotplug notify to the PCI root instead of to the gfx card, triggering a re-enumeration. Doing a grep for this on my personal DSDT collection shows that 55 of the 93 DSDTs in my collection have this issue!

Luckily this can be easily fixed by setting CHPD to 1 in the i915 driver, which is something we should do anyway according to the opregion documentation. So I wrote a patch doing this and submitted it upstream. Unfortunately this also does not help with the suspend/resume issue.

3. So the actual spurious wakeups are caused by HP using an external embedded controller (EC) on the "legacy-free" platform which they use for these laptops. Since these platforms are not designed to use an external EC they lack the standard interface for one, so HP has hooked the EC up over I2C, using an ACPI GPIO event handler as the EC interrupt.

These devices use suspend2idle (s2idle) instead of good old firmware-handled S3, so the EC stays active during suspend. It does some housekeeping work which involves a round-trip through the AML code every minute. Normally EC wakeups are ignored during s2idle by some special handling in the kernel, but this is only done for ECs using the standardized ACPI EC interface, not for this bolted-on-the-side model. I've started a discussion on maybe extending our ACPI event handling to deal with this special case.

For now, as a workaround, I ended up writing 2 more patches to allow blacklisting wakeup by ACPI GPIO event handlers on select models. This breaks wakeup by opening the lid; the user needs to wake the laptop with the power button. But at least the laptop will stay suspended now.

Step up and become a Friend of GNOME

The GNOME project is built by a vibrant community and supported by the GNOME Foundation, a 501(c)(3) non-profit charity registered in California (USA). The GNOME community has spent more than 20 years creating a desktop environment designed for the user. We’re asking you to step up for GNOME and become a Friend of GNOME. We’re working to have 100 new Friends of GNOME join by January 6, 2020.

A photo of a group of GNOME contributors at GUADEC, standing behind a large beach blanket full of colorful GNOME logos.

The GNOME Foundation was founded in 2000 to support the activities of the GNOME project and our goal of building a desktop environment that respects the freedom of every user, developer, and contributor. We continue to make great strides towards this.

2019 has been an exciting year for us, with the expansion of the Foundation’s staff and efforts.

This year has not been without challenges. Most notably, October brought with it allegations of patent infringement from Rothschild Imaging, Ltd. Rather than settling or backing down, we are taking this fight as far as we have to in order to say that patent trolls have no place in free software. This effort is something we’ll be carrying forward into the coming year.

Looking ahead to 2020, we already have a lot going on in addition to our patent case. We’re kicking off the GNOME Coding Education Challenge to expand the tools we have available to learn and teach. We will be seriously expanding our accessibility efforts, and are currently planning an accessibility audit and updates to the Orca screen reader. We’ve already started planning GUADEC 2020, which will bring us to our first North American GUADEC in Zacatecas, Mexico. We have a GNOME.Asia in the works. There will be more hackfests and newcomer events, intern and mentorship opportunities, and constant efforts to work on, for, and with the community. We’ll do all of this while upholding the standards of technical excellence you have come to expect from the GNOME project, building software for people of every country with every level of ability.

The GNOME Foundation supports the work of the GNOME community, and we need your help to keep going. We’re working on the future, not just of how you interact with your computer, but the future of free software, and we want you to join us. Step up for GNOME! You can become a Friend of GNOME and support us on either an annual or monthly basis. We ask for a minimum donation of $10/month, and recommend $25 a month ($5 for students). Every donation comes with a Thank You postcard from a GNOME hacker and a discount on GNOME swag when you find our booth at a conference. For $30 a month, you can get a subscription to LWN. If you donate $500 or more on an annual basis, you’ll get a wonderful Thank You note from Executive Director Neil McGovern.

We’re bringing software freedom to the desktop. We’re developing a safe, secure, accessible desktop environment for everyone; building a global community of contributors; and fostering the next generation of free and open source software contributors. By becoming a Friend of GNOME you are becoming a part of that.

Cheers,

Andrea, Bart, Emmanuele, Kristi, Molly, Neil, and Rosanna

Photo courtesy of Ana Rey. Licensed under a Creative Commons Attribution Share Alike license.

November 26, 2019

g_clear_{s,}list() in GLib 2.63.3

On the topic of new APIs in GLib, https://gitlab.gnome.org/GNOME/glib/commit/58ba7d78fbd7ecb4c0df2dc7e251627ebbffb9d5 is now a thing (whenever 2.63.3 sees the light of day).

Nothing super exciting, but it will scratch my own itch in refactoring some code later, since we do lots of (or variants of)

if (NULL != bar)
{
    foo_free (bar);
    bar = NULL;
}

That also includes lists, for which there is no sugar in case you want to also free the elements. With the change, you will be able to simply write

g_clear_list (&list, NULL);     // does g_list_free()
g_clear_slist (&slist, g_free); // does g_slist_free_full()

Moving gnome-shell's styles to Rust

Gnome-shell uses CSS processing code that dates from HippoCanvas, a CSS-aware canvas from around 2006. It uses libcroco to parse CSS, and implements selector matching by hand in C.

This code is getting rather dated, and libcroco is unmaintained.

I've been reading the code for StTheme and StThemeNode, and it looks very feasible to port it gradually to Rust, by using the same crates that librsvg uses, and eventually removing libcroco altogether: gnome-shell is the last module that uses libcroco in distro packages.

Strategy

StTheme and StThemeNode use libcroco to load CSS stylesheets and keep them in memory. The values of individual properties are just tokenized and kept around as a linked list of CRTerm; this struct represents a single token.

Later, the drawing code uses functions like st_theme_node_lookup_color(node, "property_name") or st_theme_node_lookup_length() to query the various properties that it needs. It is then that the type of each property gets determined: prior to that step, property values are just tokenized, not parsed into usable values.

I am going to start by porting the individual parsers to Rust, similar to what Paolo and I did for librsvg. It turns out that there's some code we can share.

So far I have the parser for colors implemented in Rust. This removes a bunch of code from the C parsers and replaces it with a little Rust code, since the cssparser crate can already parse CSS colors with alpha with no extra work — libcroco didn't support alpha.

As a bonus, this supports hsl() colors in addition to rgb() ones out of the box!

After all the parsers are done, the next step would be to convert the representation of complete stylesheets into pure Rust code.

What can we expect?

A well-maintained CSS stack. Firefox and Servo both use the crates in question, so librsvg and gnome-shell should get maintenance of a robust CSS stack "for free", for the foreseeable future.

Speed. Caveat: I have no profile data for gnome-shell yet, so I don't know how much time it spends doing CSS parsing and cascading, but it looks like the Rust version has a good chance of being more efficient.

The selectors crate has some very interesting optimizations from Mozilla Servo, and it is also now used in Firefox. It supports doing selector matching using Bloom filters, and can also avoid re-cascading child nodes if a change to a parent would not cause its children to change.

All the parsing is done with zero-copy parsers thanks to Rust's string slices; without so many malloc() calls in the parsing code path, the parsing stage should really fly.

More CSS features. The selectors crate can do matching on basically all kinds of selectors as defined by recent CSS specs; one just has to provide the correct hooks into the calling code's representation of the DOM tree. The kind of matching that StTheme can do is somewhat limited; the rustification should make it match much more closely to what people expect from CSS engines in web browsers.

A well-defined model of property inheritance. StThemeNode's model for CSS property inheritance is a bit ad-hoc and inconsistent. I haven't quite tested it, but from looking at the code, it seems that not all properties get inherited in the same way. I hope to move it to something closer to what librsvg already does, which should make it match people's expectations from the web.

In the meantime

I have a merge request ready to simply move the libcroco source code directly inside gnome-shell's source tree. This should let distros remove their libcroco package as soon as possible. That MR does not require Rust yet.

My playground is here:

This does not compile yet! I'll plug things together tomorrow.

(Oh, yes, the project to redo Firefox's CSS stack in Rust used to be called Stylo. I'm calling this Stylish, as in Styles for the Shell.)

November 25, 2019

2019-11-25 Monday.

  • Sync call with Aron, Kendy, Javier; poked at financials, poked at patch merging, testing, merged a nice online patch from 1&1 to add a REST monitoring API.
  • Bit of quartet editing & re-arrangement in Musescore - which is a great tool.

Process invocation will forever be broken

Invoking new processes is, at its core, a straightforward operation. Pretty much everything you need to know to understand it can be seen in the main declaration of the helloworld program:

#include <stdio.h>

int main(int argc, char **argv) {
    printf("Hello, world.\n");
    return 0;
}

The only (direct) information passed to the program is an array of strings containing its (command line) arguments. Thus it seems like an obvious conclusion that there is a corresponding function that takes an executable to run and an array of strings with the arguments. This turns out to be the case, and it is what the exec family of functions do. An example would be execve.

This function only exists on posixy operating systems; it is not available on Windows. The native way to start processes on Windows is the CreateProcess function. It does not take an array of strings; instead it takes a string:

BOOL CreateProcessA(
  LPCSTR lpApplicationName,
  LPSTR  lpCommandLine,
  ...

The operating system then internally splits the string into individual components using an algorithm that is not at all simple or understandable and whose details most people don't even know.

Side note: why does Windows behave this way?

I don't know for sure. But we can formulate a reasonable theory by looking in the past. Before Windows existed there was DOS, and it also had a way of invoking processes. This was done by using interrupts, in this case function 4bh in interrupt 21h. Browsing through online documentation we can find a relevant snippet:

Action: Loads a program for execution under the control of an existing program. By means of altering the INT 22h to 24h vectors, the calling prograrn [sic] can ensure that, on termination of the called program, control returns to itself.
On entry: AH = 4Bh
AL = 0: Load and execute a program
AL = 3: Load an overlay
DS.DX = segment:offset of the ASCIIZ pathname
ES:BX = Segment:offset of the parameter block
Parameter block bytes:
0-1: Segment pointer to envimmnemnt [sic] block
2-3: Offset of command tail
4-5: Segment of command tail

Here we see that the command is split in the same way as in the corresponding Win32 API call, into a command to execute and a single string that contains the arguments (the command tail, though some sources say that this should be the full command line). This interrupt handler could take an array of strings instead, but does not. Most likely this is because it was the easiest thing to implement in real mode x86 assembly.

When Windows 1.0 appeared, its coders probably either used the DOS calls directly or copied the same code inside Windows' code base for simplicity and backwards compatibility. When the Win32 API was created they probably did the exact same thing. After all, you need the single string version for backwards compatibility anyway, so just copying the old behaviour is the fast and simple thing to do.

Why is this behaviour bad?

There are two main use cases for invoking processes: human invocations and programmatic invocations. The former happens when human beings type shell commands and pipelines interactively. The latter happens when programs invoke other programs. For the former case a string is the natural representation for the command, but this is not the case for the latter. The native representation there is an array of strings, especially for cross platform code because string splitting rules are different on different platforms. Implementing shell-based process invocation on top of an interface that takes an array of strings is straightforward, but the opposite is not.

Often command lines are not invoked directly but are instead passed from one program to another, stored to files, passed over networks and so on. It is not uncommon to pass a full command line as a command line argument to a different "wrapper" command and so on. An array of strings is trivial to pass through arbitrarily deep and nested scenarios without data loss. Plain strings, not so much. Many, many, many programs do command string splitting completely wrong. They might split it on spaces because it worksforme on this machine and implementing a full string splitter is a lot of work (thousands of lines of very tricky C at the very least). Some programs don't quote their outputs properly. Some don't unquote their inputs properly. Some do quoting unreliably. Sometimes you need to know in advance how many layers of unquoting your string will go through so you can prequote it sufficiently beforehand (because you can't fix any of the intermediate blobs). Basically every time you pass commands as strings between systems, you get a parsing/quoting problem and a possibility for shell code injection. At the very least the string should carry with it information on whether it is a unix shell command line or a cmd.exe command line. But it doesn't, and can't.

Because of this almost all applications that deal with command invocation kick the can down the road and use strings rather than arrays, even though the latter is the "correct" solution. For example this is what the Ninja build system does. If you go through the rationale for this it is actually understandable and makes sense. The sad downside is that everyone using Ninja (or any such tool) has to do command quoting and parsing manually and then ninja-quote their quoted command lines.

This is the crux of the problem. Because process invocation is broken on Windows, every single program that deals with cross platform command invocation has to deal with commands as strings rather than an array of strings. This leads to every program using commands as strings, because that is the easy and compatible thing to do (not to mention it gives you the opportunity to close bugs with "your quoting is wrong, wontfix"). This leads to a weird kind of quantum entanglement where having things broken on one platform breaks things on a completely unrelated platform.

Can this be fixed?

Conceptually the fix is simple: add a new function, say, CreateProcessCmdArray to Win32 API. It is identical to plain CreateProcess except that it takes an array of strings rather than a shell command string. The latter can be implemented by running Windows' internal string splitter algorithm and calling the former. Seems doable, and with perfect backwards compatibility even? Sadly, there is a hitch.

It has been brought to my attention via unofficial channels [1] that this will never happen. The people at Microsoft who manage the Win32 API have decreed this part of the API frozen. No new functionality will ever be added to it. The future of Windows is WinRT or UWP or whatever it is called this week.

UWP is conceptually similar to Apple's iOS application bundles. There is only one process which is fully isolated from the rest of the system. Any functionality that needs process isolation (and not just threads) must be put in its own "service" that the app can then communicate with using RPC. This turned out to be a stupid limitation for a desktop OS with hundreds of thousands of preexisting apps, because it would require every Win32 app using multiple processes to be rewritten to fit this new model. Eventually Microsoft caved under app vendor pressure and added the functionality to invoke processes into UWP (with limitations though). At this point they had a chance to do a proper from-scratch redesign for process invocation with the full wealth of knowledge we have obtained since the original design was written around 1982 or so. So can you guess whether they:
  1. Created a proper process invocation function that takes an array of strings?
  2. Exposed CreateProcess unaltered to UWP apps?
You guessed correctly.

Bonus chapter: msvcrt's execve functions

Some of you might have thought waitaminute, the Visual Studio C runtime does ship with functions that take string arrays so this entire blog post is pointless whining. This is true, it does provide said functions. Here is a pseudo-Python implementation for one of them. It is left as an exercise to the reader to determine why it does not help with this particular problem:

def spawn(cmd_array):
    cmd_string = ' '.join(cmd_array)
    CreateProcess(..., cmd_string, ...)

[1] That is to say, everything from here on may be completely wrong. Caveat lector. Do not quote me on this.

November 24, 2019

2019-11-24 Sunday.

  • All Saints, Simon & Norma back for lunch; played with the babes, quartet bits, ferried kids around the place.

November 22, 2019

A Review of GNOME Shell & Mutter 3.34

The last GNOME release, named “Thessaloniki”, was busy for GNOME Shell and Mutter. Many changes, ranging from code cleanups to architectural changes to performance improvements to new features landed.

Let’s take a look at the major highlights for the GNOME 3.34 release.

GNOME Shell

JavaScript Updates

GNOME Shell uses GJS, the GNOME JavaScript engine, to run. With the latest updates to GJS, such as the JS60 migration, GNOME Shell saw important updates making the codebase use modern tools and code practices.

Implicit Animations

One of the most important improvements in GNOME Shell 3.34 was the transition to Clutter implicit animations. Previously, GNOME Shell used the Tweener framework, which is written completely in JavaScript and, as a consequence, required extra work to communicate the animations to Clutter.

Using the implicit animations framework provided by Clutter allows GNOME Shell to skip a lot of JavaScript → C jumps and, thus, use fewer resources to run animations.

CI Integration

Testing GNOME Shell with CI is slightly tricky due to it and Mutter always being in lockstep. It is common that GNOME Shell from the git master branch needs Mutter from git master as well. Finding a good way to handle that prevented CI from landing earlier, but during this cycle we crafted custom container images and wrote the necessary scripts to allow for that.

Now, when running CI, GNOME Shell is tested with Mutter in tandem. Furthermore, on Mutter side, we’ve added a CI step to build and test GNOME Shell as well, catching many errors that would otherwise take some time to be found.

New Extensions Tool

GNOME Shell 3.34 also ships a new gnome-extensions tool, making it easier to create and manage GNOME Shell extensions. Due to the automation this new tool provides, such as extension templates, bash completion, packing, and testing extensions, it should be a significant improvement for extension authors.

Accessibility

With the ongoing transition to Wayland, GNOME Shell is taking ownership of many features that previously required random applications to have full access to windows and their contents. One area where this bad practice is recurrent is the accessibility stack.

With GNOME Shell 3.34, a big coordinated effort was put into filling in the missing accessibility bits when running a Wayland session, such as Locate Pointer and Click Assist, among others.

Folder Management

Last but not least, in GNOME Shell 3.34, it is possible to create, rename, and delete folders using Drag n’ Drop actions.

GNOME Shell 3.34 in Numbers

GNOME Shell 3.34 had 309 files changed, with 50,022 lines added and 26,924 removed. 59 merge requests were merged, and 1 was closed. 71 issues were closed by commits.

Developers with the most changesets:

| 289  | Florian Müllner               |
| 84   | Marco Trevisan (Treviño)      |
| 65   | Jonas Dreßler                 |
| 32   | Georges Basile Stavracas Neto |
| 19   | Carlos Garnacho               |
| 11   | Ray Strode                    |
| 10   | Ryuta Fujii                   |
| 9    | Kukuh Syafaat                 |
| 7    | Cosimo Cecchi                 |
| 7    | Jordi Mas                     |

Mutter

Sysprof

Sysprof is GNOME’s profiling framework, built on top of the `perf` API. With the GNOME 3.34 release, Mutter now integrates with Sysprof and is able to profile a lot of its internals.

Profiling Mutter is an important requirement to detect potential bottlenecks in the algorithms and the codebase, and have a proper understanding of how, where, and why things are the way they are.

GPU Hotplug

Mutter is now able to detect and use GPUs, usually external ones, that are plugged in while the GNOME session is running.

KMS Transactions

In order for Mutter to support more complex hardware capabilities, it needs to know how to utilize the atomic KMS kernel API. A first step in this direction was taken this cycle by implementing an internal transactional KMS API. Right now this API is backed by a non-atomic implementation, to make sure we don't break any configuration that currently works, but in the future it will enable Mutter to utilize more advanced hardware capabilities such as overlay planes and framebuffer modifiers. It is also a big step towards being able to isolate KMS interaction into a dedicated thread, eventually enabling low-latency input device feedback once we move input processing into its own dedicated thread as well.

Improved Frame Scheduler

The algorithm that Mutter uses to schedule when to update the contents of the monitor was fine-tuned and is now able to deliver new frames more smoothly. The new algorithm gives applications more time to draw their contents and notify Mutter about it. As a consequence, Mutter is much more likely to update frames in time for the monitor to display.

NVIDIA on Xorg frame throttling changes

Further in the past, Mutter used glXSwapBuffers() to throttle frame drawing when using the proprietary NVIDIA driver. This caused issues, as it'd block the main thread for long periods of time if new frames were constantly being scheduled. To mitigate this, Mutter developers introduced functionality imitating the GLX_INTEL_swap_event extension used by the Intel driver, making it possible to swap buffers asynchronously by receiving a "completion event" (swap event) later on.

This was done using NVIDIA specific GLX extensions combined with running a separate thread, where the “swap event” was generated, in contrast to when using GLX_INTEL_swap_event, where the Intel driver itself did this.

In practice, this caused issues, as the relevant GLX extension implementation in the NVIDIA driver tended to be CPU heavy. However, due to the changes to our frame scheduling algorithm this cycle, the initial problem is now mitigated by scheduling frames in a way that avoids blocking on glXSwapBuffers() for as long, meaning we could remove the CPU-heavy "swap event" imitation code. This should mean less CPU usage when using the NVIDIA driver and displaying an application that continuously renders frames.

Graphene

Graphene is a library implementing complicated math functionality related to vertices, matrices, rectangles and other types usually related to 3D programming. In order to decrease maintenance burden on Mutter, which had its own set of complicated math functionality doing practically the same thing, we went and replaced most of that code, and switched to using Graphene directly. Not only did we decrease the amount of code we have to maintain, we also got all the fancy new improvements done to Graphene that we would otherwise be without!

For GNOME 3.34, all but matrices were converted to the corresponding Graphene types. That’s because Cogl and Graphene matrices have different semantics and, thus, will require more scrutiny.

XWayland on Demand

Mutter and GNOME Shell 3.34 earned the ability to run as a pure Wayland compositor, while still providing seamless compatibility with X11 by starting XWayland when required by applications. This is the culmination of much prior effort at refactoring internals and ironing out indirect X11 dependencies throughout Mutter and GNOME Shell specifically, and the GNOME session generally.

Being able to start Xwayland and other related X11 services on demand for legacy applications came out organically, and demonstrates we are no longer tied by our legacy support.

This feature is disabled by default, and can be enabled by adding ‘autostart-xwayland’ to `org.gnome.mutter experimental-features`.

Real-Time Scheduler

When running as a Wayland compositor and display server, it is important that Mutter keeps its responsiveness at all times. However, Mutter runs as a regular user application and is subject to starvation in various scenarios, such as intensive I/O.

Mutter 3.34 now requests real-time priority, which gives it more priority than regular applications. This avoids stalls and stuttering when running GNOME Shell as a Wayland compositor.

This feature is disabled by default, and can be enabled by adding ‘rt-scheduler’ to `org.gnome.mutter experimental-features`.

MetaShapedTexture as a ClutterContent

MetaShapedTexture represents the visual contents of an application’s surface. It takes care of making sure the content is cropped and scaled properly, and tries to optimize drawing depending on what is opaque, and what is not.

ClutterContent, on the other hand, is Clutter’s way of defining deferred rendering instructions, also known as render nodes. This enables more efficient batching of draw calls, compared to the implicit batching implemented by the Cogl journal. This cycle we took MetaShapedTexture, which was a ClutterActor subclass manually issuing Cogl drawing instructions, and turned it into a set of fully deferred rendering instructions as defined by ClutterContent.

This not only improved batching of draw calls but greatly decreased the size of the Clutter actor tree, giving us a significant performance boost.

Here’s a quick comparison:

Graph showing that MetaShapedTexture as ClutterContent reduces frame times when many windows are open.
Turning MetaShapedTexture into a ClutterContent implementation (green bars) reduces frame times when many windows are open, compared to the previous implementation (yellow bars). Courtesy of Jonas Dreßler.

Mutter 3.34 in Numbers

Mutter 3.34 had 766 files changed, with 34,249 lines added and 37,268 removed. 74 issues were closed by commits.

Developers with the most changesets:

| 122 | Jonas Ådahl                   |
| 109 | Carlos Garnacho               |
| 82  | Marco Trevisan (Treviño)      |
| 52  | Olivier Fourdan               |
| 48  | Georges Basile Stavracas Neto |
| 40  | Florian Müllner               |
| 30  | Adam Jackson                  |
| 27  | Daniel van Vugt               |
| 24  | Hans de Goede                 |
| 24  | Robert Mader                  |
| 19  | Jonas Dreßler                 |
| 16  | Niels De Graef                |
| 16  | Pekka Paalanen                |

November 21, 2019

Screencasting with OBS Studio on Wayland

For the past few months, I’ve been doing live coding sessions on YouTube showing how GNOME development goes. Usually it’s a pair of sessions per week, one in Brazilian Portuguese so that my beloved community can enjoy GNOME in their native language; and one in English, to give other people at least a chance to follow development as well.

We are quite lucky to have OBS Studio available for screencasting and streaming, as it makes our lives a lot easier. It’s really a fantastic application. I learned about it while browsing Flathub, and it’s what actually motivated me to start streaming in the first place. However, I have to switch to X11 in order to use it, since the GNOME screencast plugin never really worked for me.

This is annoying, since Mutter has supported screencasting for years now, and I really want to showcase the latest and greatest while streaming. We’re still not using the appropriate APIs and methods to screencast, which doesn’t set a good example for the community.

So I decided to get my hands dirty, bite the bullet, and fix this situation. And so was born the obs-xdg-portal plugin for OBS Studio! The plugin uses the standard ScreenCast portal, which means it should work inside and outside the Flatpak sandbox, in Wayland and X11, and on GNOME and KDE (and perhaps others?).

Selecting a monitor for screencast
Selecting a window for screencast
The screencast in action — working perfectly!

Do notice that OBS Studio itself is not yet compatible with Wayland, as this is a work in progress. In the pictures above, I’m running OBS Studio under XWayland, which really shows how powerful the platform is — it’s able to natively screencast on Wayland even on XWayland clients! If I have time, interest, motivation, and energy, perhaps the OBS Studio Wayland branch can be pushed forward.

This plugin is already available in the Flathub version of OBS Studio.

November 20, 2019

Some GNOME / LAS / Wikimedia love

For some time now I’ve been dedicating more time to Wikimedia-related activities. I love to share this time with the other open source communities I’m involved in. This post is just to write down a list of items/resources I’ve created related to events in this domain.

Wikidata

If you don’t know about Wikidata, you probably should take a look, because it is becoming the most important linked data corpus in the world. In the future we will use WD as the cornerstone of many applications. Remember you read this here first.

About GUADEC:

About GNOME.Asia:

About LAS 2019:

And about the previous LAS format:

Wikimedia Commons

Wikimedia Commons is my current favorite place to publish pictures with open licensing. To me it is the ideal place to publish reusable resources with explicit Creative Commons licensing. And you can contribute your own media without intermediaries.

About GUADEC:

About GNOME.Asia:

About LAS:

Epilogue

As you can see, the list is not complete, nor are all items fully described. I invite you to complete whatever information you want. For Wikidata there are many places to ask for help. And for Wikimedia Commons you can help by uploading your own pictures. If you have doubts, just use the current items as references or ask me directly.

Hanging the Red Hat

This is an extract of an email I just sent internally at Red Hat that I wanted to share with the wider GNOME and Freedesktop community.

After 6+ wonderful years at Red Hat, I’ve decided to hang up the fedora and go try new things. For a while I’ve been craving a new challenge, and I’ve felt the urge to try other things outside the scope of Red Hat, so with great hesitation I’ve finally made the jump.

I am extremely proud of the work done by the teams I have had the honour to run as engineering manager; I met wonderful people, worked with extremely talented engineers, and learned lots. I am particularly proud of the achievements of my latest team: from growing the bootloader team and improving our relationship with GRUB upstream, to our wins in teaching Lenovo how to do upstream hardware support, to improvements in Thunderbolt, Miracast, Fedora/RHEL VirtualBox guest compatibility… the list goes on, and credit goes mostly to my amazing team.

Thanks to this job I have been able to reach out to other upstreams beyond GNOME, like Fedora, LibreOffice, the Linux Kernel, Rust, GRUB… it has been an amazing ride and I’ve met wonderful people in each one of them.

I would also like to make a special mention of my manager, Christian Schaller, who has supported me all the way, both professionally and personally. There is this saying: “people do not leave companies, they leave managers”. Well, this is certainly not the case; in Christian I have found not only a great manager but a true friend.


As for my experience at Red Hat: I had never lasted more than 2 years in the same spot before, but I truly found my place there. Deep in my heart I know I will always be a Red Hatter, but there are some things I want to try and learn elsewhere. This job switch has been the hardest departure I have ever had, and in many ways it breaks my heart to leave. If you are considering joining Red Hat, do not hesitate: there is no better place to write and advocate for Free Software.

I will announce what I will be doing next once I start in December.

November 19, 2019

Linux Application Summit 2019

I was lucky enough to be sponsored by the GNOME Foundation to attend the 2019 Linux Application Summit, hosted in Barcelona between November 12th and 15th 2019. It was a great conference with a diverse crew of people who all care about making apps on Linux better. I particularly enjoyed Frank’s keynote on Linux apps from the perspective of Nextcloud, an Actual ISV. Also worth your time are Rob’s talk on how Flathub would like to help more developers earn money from their work; Adrien on GTK and scalable UIs for phones; Robin on tone of voice and copywriting; Emel on product management in the context of GNOME Recipes; and Paul Brown on direct language and better communication.

Refactoring the Length type

CSS length values have a number and a unit, e.g. 5cm or 6px. Sometimes the unit is a percentage, like 50%, and SVG says that lengths with percentage units should be resolved with respect to a certain rectangle. For example, consider this circle element:

<circle cx="50%" cy="75%" r="4px" fill="black"/>

This means, draw a solid black circle whose center is at 50% of the width and 75% of the height of the current viewport. The circle should have a 4-pixel radius.

The process of converting that kind of units into absolute pixels for the final drawing is called normalization. In SVG, percentage units sometimes need to be normalized with respect to the current viewport (a local coordinate system), or with respect to the size of another object (e.g. when a clipping path is used to cut the current shape in half).

One detail about normalization is that it can be with respect to the horizontal dimension of the current viewport, the vertical dimension, or both. Keep this in mind: at normalization time, we need to be able to distinguish between those three modes.
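
As a quick sketch of those three normalization modes (hypothetical standalone code, not librsvg's actual API): for the circle above in a 200×100 viewport, cx="50%" resolves against the width to 100 pixels, cy="75%" against the height to 75 pixels, and r="4px" needs no scaling.

```rust
// Hypothetical sketch of normalization; names and layout are assumed,
// not librsvg's real code. Percentages are stored as fractions (50% -> 0.5).
enum Unit { Px, Percent }

enum Dir { Horizontal, Vertical, Both }

struct CssLength { value: f64, unit: Unit }

fn normalize(l: &CssLength, dir: Dir, vw: f64, vh: f64) -> f64 {
    match l.unit {
        Unit::Px => l.value,
        Unit::Percent => match dir {
            Dir::Horizontal => l.value * vw,
            Dir::Vertical => l.value * vh,
            // "Both" scales against a mix of the two dimensions; SVG
            // defines the reference as sqrt((w^2 + h^2) / 2).
            Dir::Both => l.value * ((vw * vw + vh * vh) / 2.0).sqrt(),
        },
    }
}
```

For the circle above, `normalize(&CssLength { value: 0.5, unit: Unit::Percent }, Dir::Horizontal, 200.0, 100.0)` yields 100.0.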

The original C version

I have talked about the original C code for lengths before; the following is a small summary.

The original C code had this struct to represent lengths:

typedef struct {
    double length;
    char factor;
} RsvgLength;

The parsing code would set the factor field to a character depending on the length's unit: 'p' for percentages, 'i' for inches, etc., and '\0' for the default unit, which is pixels.

Along with that, the normalization code needed to know the direction (horizontal, vertical, both) to which the length in question refers. It did this by taking another character as an argument to the normalization function:

double
_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, char dir)
{
    if (in->factor == '\0')            /* pixels, no need to normalize */
        return in->length;
    else if (in->factor == 'p') {      /* percentages; need to consider direction */
        if (dir == 'h')                                     /* horizontal */
            return in->length * ctx->vb.rect.width;
        if (dir == 'v')                                     /* vertical */
            return in->length * ctx->vb.rect.height;
        if (dir == 'o')                                     /* both */
            return in->length * rsvg_viewport_percentage (ctx->vb.rect.width,
                                                          ctx->vb.rect.height);
    } else { ... }
}

The original post talks about how I found a couple of bugs with how the directions are identified at normalization time. The function above expects one of 'h'/'v'/'o' for horizontal/vertical/both, and one or two places in the code passed the wrong character.

Making the C version cleaner

Before converting that code to Rust, I removed the pesky characters and made the code use proper enums to identify a length's units.

+typedef enum {
+    LENGTH_UNIT_DEFAULT,
+    LENGTH_UNIT_PERCENT,
+    LENGTH_UNIT_FONT_EM,
+    LENGTH_UNIT_FONT_EX,
+    LENGTH_UNIT_INCH,
+    LENGTH_UNIT_RELATIVE_LARGER,
+    LENGTH_UNIT_RELATIVE_SMALLER
+} LengthUnit;
+
 typedef struct {
     double length;
-    char factor;
+    LengthUnit unit;
 } RsvgLength;

Then, do the same for the normalization function, so it will get the direction in which to normalize as an enum instead of a char.

+typedef enum {
+    LENGTH_DIR_HORIZONTAL,
+    LENGTH_DIR_VERTICAL,
+    LENGTH_DIR_BOTH
+} LengthDir;

 double
-_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, char dir)
+_rsvg_css_normalize_length (const RsvgLength * in, RsvgDrawingCtx * ctx, LengthDir dir)

Making the C version easier to get right

While doing the last change above, I found a place in the code that used the wrong direction by mistake, probably due to a cut&paste error. Part of the problem here is that the code was specifying the direction at normalization time.

I decided to change it so that each length value carried its own direction from initialization time, so that subsequent code wouldn't have to worry about it. Hopefully, initializing a width field should make it obvious that it needed LENGTH_DIR_HORIZONTAL.

 typedef struct {
     double length;
     LengthUnit unit;
+    LengthDir dir;
 } RsvgLength;

That is, so that instead of

  /* at initialization time */
  foo.width = _rsvg_css_parse_length (str);

  ...

  /* at rendering time */
  double final_width = _rsvg_css_normalize_length (&foo.width, ctx, LENGTH_DIR_HORIZONTAL);

we would instead do this:

  /* at initialization time */
  foo.width = _rsvg_css_parse_length (str, LENGTH_DIR_HORIZONTAL);

  ...

  /* at rendering time */
  double final_width = _rsvg_css_normalize_length (&foo.width, ctx);

This made the drawing code, which deals with a lot of coordinates at the same time, a lot less noisy.

Initial port to Rust

To recap, this was the state of the structs after the initial refactoring in C:

typedef enum {
    LENGTH_UNIT_DEFAULT,
    LENGTH_UNIT_PERCENT,
    LENGTH_UNIT_FONT_EM,
    LENGTH_UNIT_FONT_EX,
    LENGTH_UNIT_INCH,
    LENGTH_UNIT_RELATIVE_LARGER,
    LENGTH_UNIT_RELATIVE_SMALLER
} LengthUnit;

typedef enum {
    LENGTH_DIR_HORIZONTAL,
    LENGTH_DIR_VERTICAL,
    LENGTH_DIR_BOTH
} LengthDir;

typedef struct {
    double length;
    LengthUnit unit;
    LengthDir dir;
} RsvgLength;

This ported to Rust in a straightforward fashion:

pub enum LengthUnit {
    Default,
    Percent,
    FontEm,
    FontEx,
    Inch,
    RelativeLarger,
    RelativeSmaller
}

pub enum LengthDir {
    Horizontal,
    Vertical,
    Both
}

pub struct RsvgLength {
    length: f64,
    unit: LengthUnit,
    dir: LengthDir
}

It got a similar constructor that took the direction and produced an RsvgLength:

impl RsvgLength {
    pub fn parse (string: &str, dir: LengthDir) -> RsvgLength { ... }
}

(This was before using Result; remember that the original C code did very little error checking!)

The initial Parse trait

It was at that point that it seemed convenient to introduce a Parse trait, which all CSS value types would implement to parse themselves from a string.

However, parsing an RsvgLength also needed an extra piece of data, the LengthDir. My initial version of the Parse trait had an associated type called Data, through which one could pass an extra piece of data during parsing/initialization:

pub trait Parse: Sized {
    type Data;
    type Err;

    fn parse (s: &str, data: Self::Data) -> Result<Self, Self::Err>;
}

This was explicitly to be able to pass a LengthDir to the parser for RsvgLength:

impl Parse for RsvgLength {
    type Data = LengthDir;
    type Err = AttributeError;

    fn parse (string: &str, dir: LengthDir) -> Result <RsvgLength, AttributeError> { ... }
}

This was okay for lengths, but very noisy for everything else that didn't require an extra bit of data. In the rest of the code, the helper type was Data = () and there was a pair of extra parentheses () in every place that parse() was called.
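
To illustrate the noise with a hypothetical value type (Opacity is an invented example, not taken from librsvg): every type that needed no extra data still had to declare Data = (), and every caller still had to pass ().

```rust
// Hypothetical illustration of the `Data = ()` noise for types that
// need no extra parsing data; `Opacity` is an invented example.
trait Parse: Sized {
    type Data;
    type Err;

    fn parse(s: &str, data: Self::Data) -> Result<Self, Self::Err>;
}

#[derive(Debug, PartialEq)]
struct Opacity(f64);

impl Parse for Opacity {
    type Data = ();    // no extra data, but it must be spelled out anyway
    type Err = String;

    fn parse(s: &str, _data: ()) -> Result<Opacity, String> {
        s.parse().map(Opacity).map_err(|e| e.to_string())
    }
}
```

Every call site then reads `Opacity::parse("0.5", ())` — that trailing `()` is the stray pair of parentheses mentioned above.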

Removing the helper Data type

Introducing one type per direction

Over a year later, that () bit of data everywhere was driving me nuts. I started refactoring the Length module to remove it.

First, I introduced three newtypes to wrap Length, and indicate their direction at the same time:

pub struct LengthHorizontal(Length);
pub struct LengthVertical(Length);
pub struct LengthBoth(Length);

This was done with a macro because now each wrapper type needed to know the relevant LengthDir.
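
The macro might have looked roughly like this (a reconstruction for illustration only; the real librsvg macro also generated Parse impls and other helpers, and the stand-in parser below handles just "Npx" and "N%"):

```rust
// Reconstruction of the newtype-generating macro described above; names
// follow the post, details are assumed.
#[derive(Clone, Copy, PartialEq, Debug)]
enum LengthUnit { Default, Percent }

#[derive(Clone, Copy, PartialEq, Debug)]
enum LengthDir { Horizontal, Vertical, Both }

#[derive(Clone, Copy, PartialEq, Debug)]
struct Length { length: f64, unit: LengthUnit, dir: LengthDir }

macro_rules! define_length_type {
    ($name:ident, $dir:expr) => {
        #[derive(Clone, Copy, PartialEq, Debug)]
        struct $name(Length);

        impl $name {
            // Each wrapper bakes in its direction, so callers can't mix them up.
            fn parse(s: &str) -> $name {
                $name(parse_length(s, $dir))
            }
        }
    };
}

define_length_type!(LengthHorizontal, LengthDir::Horizontal);
define_length_type!(LengthVertical, LengthDir::Vertical);
define_length_type!(LengthBoth, LengthDir::Both);

// Stand-in parser: handles only "Npx" and "N%" for illustration.
fn parse_length(s: &str, dir: LengthDir) -> Length {
    if let Some(pct) = s.strip_suffix('%') {
        Length { length: pct.parse::<f64>().unwrap() / 100.0, unit: LengthUnit::Percent, dir }
    } else {
        let px = s.strip_suffix("px").unwrap_or(s);
        Length { length: px.parse().unwrap(), unit: LengthUnit::Default, dir }
    }
}
```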

Now, for example, the declaration for the <circle> element looked like this:

pub struct NodeCircle {
    cx: Cell<LengthHorizontal>,
    cy: Cell<LengthVertical>,
    r: Cell<LengthBoth>,
}

(Ignore the Cell everywhere; we got rid of that later.)

Removing the dir field

Since the information about the length's direction was now embodied in the LengthHorizontal/LengthVertical/LengthBoth types, it became possible to remove the dir field from the inner Length struct.

 pub struct RsvgLength {
     length: f64,
     unit: LengthUnit,
-    dir: LengthDir
 }

+pub struct LengthHorizontal(Length);
+pub struct LengthVertical(Length);
+pub struct LengthBoth(Length);
+
+define_length_type!(LengthHorizontal, LengthDir::Horizontal);
+define_length_type!(LengthVertical, LengthDir::Vertical);
+define_length_type!(LengthBoth, LengthDir::Both);

Note the use of a define_length_type! macro to generate code for those three newtypes.

Removing the Data associated type

And finally, this made it possible to remove the Data associated type from the Parse trait.

 pub trait Parse: Sized {
-    type Data;
     type Err;

-    fn parse(parser: &mut Parser<'_, '_>, data: Self::Data) -> Result<Self, Self::Err>;
+    fn parse(parser: &mut Parser<'_, '_>) -> Result<Self, Self::Err>;
 }

The resulting mega-commit removed a bunch of stray parentheses () from all calls to parse(), and the code ended up a lot easier to read.

Removing the newtypes

This was fine for a while. Recently, however, I figured out that it would be possible to embody the information for a length's direction in a different way.

But to get there, I first needed a temporary refactor.

Replacing the macro with a trait with a default implementation

Deep in the guts of length.rs, the key function that does something different based on LengthDir is its scaling_factor method:

enum LengthDir {
    Horizontal,
    Vertical,
    Both,
}

impl LengthDir {
    fn scaling_factor(self, x: f64, y: f64) -> f64 {
        match self {
            LengthDir::Horizontal => x,
            LengthDir::Vertical => y,
            LengthDir::Both => viewport_percentage(x, y),
        }
    }
}

That method gets passed, for example, the width/height of the current viewport for the x/y arguments. The method decides whether to use the width, height, or a combination of both.

And of course, the interesting part of the define_length_type! macro was to generate code for calling LengthDir::Horizontal::scaling_factor()/etc. as appropriate depending on the LengthDir in question.

First I made a trait called Orientation with a scaling_factor method, and three zero-sized types that implement that trait. Note how each of these three implementations corresponds to one of the match arms above:

pub trait Orientation {
    fn scaling_factor(x: f64, y: f64) -> f64;
}

pub struct Horizontal;
pub struct Vertical;
pub struct Both;

impl Orientation for Horizontal {
    fn scaling_factor(x: f64, _y: f64) -> f64 {
        x
    }
}

impl Orientation for Vertical {
    fn scaling_factor(_x: f64, y: f64) -> f64 {
        y
    }
}

impl Orientation for Both {
    fn scaling_factor(x: f64, y: f64) -> f64 {
        viewport_percentage(x, y)
    }
}

Now most of the contents of the define_length_type! macro can go in the default implementation of a new trait LengthTrait. Crucially, this trait has an Orientation associated type, which it uses to call into the Orientation trait:

pub trait LengthTrait: Sized {
    type Orientation: Orientation;

    ...

    fn normalize(&self, values: &ComputedValues, params: &ViewParams) -> f64 {
        match self.unit() {
            LengthUnit::Px => self.length(),

            LengthUnit::Percent => {
                self.length() *
                <Self::Orientation>::scaling_factor(params.view_box_width, params.view_box_height)
            }

            ...
    }
}

Note that the incantation is <Self::Orientation>::scaling_factor(...) to call that method on the associated type.

Now the define_length_type! macro is shrunk a lot, with the interesting part being just this:

macro_rules! define_length_type {
    {$name:ident, $orient:ty} => {
        pub struct $name(Length);

        impl LengthTrait for $name {
            type Orientation = $orient;
        }
    }
}

define_length_type! { LengthHorizontal, Horizontal }
define_length_type! { LengthVertical, Vertical }
define_length_type! { LengthBoth, Both }

We moved from having three newtypes of length-with-LengthDir to three newtypes with dir-as-associated-type.

Removing the newtypes and the macro

After that temporary refactoring, we had the Orientation trait and the three zero-sized types Horizontal, Vertical, Both.

I figured out that one can use PhantomData as a way to carry around the type that Length needs to normalize itself, instead of using an associated type in an extra LengthTrait. Behold!

pub struct Length<O: Orientation> {
    pub length: f64,
    pub unit: LengthUnit,
    orientation: PhantomData<O>,
}

impl<O: Orientation> Length<O> {
    pub fn normalize(&self, values: &ComputedValues, params: &ViewParams) -> f64 {
        match self.unit {
            LengthUnit::Px => self.length,

            LengthUnit::Percent => {
                self.length 
                    * <O as Orientation>::scaling_factor(params.view_box_width, params.view_box_height)
            }

            ...
        }
    }
}

Now the incantation is <O as Orientation>::scaling_factor() to call the method on the generic type; it is no longer an associated type in a trait.

With that, users of lengths look like this; here, our <circle> element from before:

pub struct Circle {
    cx: Length<Horizontal>,
    cy: Length<Vertical>,
    r: Length<Both>,
}

I'm very happy with the readability of all the code now. I used to think of PhantomData as a way to deal with wrapping pointers from C, but it turns out that it is also useful to keep a generic type around should one need it.

The final Length struct is this:

pub struct Length<O: Orientation> {
    pub length: f64,
    pub unit: LengthUnit,
    orientation: PhantomData<O>,
}

And it only takes up as much space as its length and unit fields; PhantomData is zero-sized after all.
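
A minimal sketch (with LengthUnit trimmed down to two variants for brevity) can check that claim: PhantomData occupies no bytes, so the struct is no bigger than one without the marker.

```rust
use std::marker::PhantomData;
use std::mem::size_of;

// Reduced sketch of the struct above, just to check sizes; the real
// LengthUnit has more variants.
enum LengthUnit { Px, Percent }

trait Orientation {}
struct Horizontal;
impl Orientation for Horizontal {}

struct Length<O: Orientation> {
    length: f64,
    unit: LengthUnit,
    orientation: PhantomData<O>,
}

// The same struct without the marker, for comparison.
struct PlainLength {
    length: f64,
    unit: LengthUnit,
}
```

`size_of::<PhantomData<Horizontal>>()` is guaranteed to be 0, and in practice `Length<Horizontal>` ends up the same size as `PlainLength`.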

(Later, we renamed Orientation to Normalize, but the code's structure remained the same.)

Summary

Over a couple of years, librsvg's type that represents CSS lengths went from a C representation along the lines of "all data in the world is an int", to a Rust representation that uses some interesting type trickery:

  • C struct with char for units.

  • C struct with a LengthUnits enum.

  • C struct without an embodied direction; each place that needs to normalize needs to get the orientation right.

  • C struct with a built-in direction as an extra field, done at initialization time.

  • Same struct but in Rust.

  • An ugly but workable Parse trait so that the direction can be set at parse/initialization time.

  • Three newtypes LengthHorizontal, LengthVertical, LengthBoth with a common core. A cleaned-up Parse trait. A macro to generate those newtypes.

  • Replace the LengthDir enum with an Orientation trait, and three zero-sized types Horizontal/Vertical/Both that implement the trait.

  • Replace most of the macro with a helper trait LengthTrait that has an Orientation associated type.

  • Replace the helper trait with a single Length<T: Orientation> type, which puts the orientation as a generic parameter. The macro disappears and there is a single implementation for everything.

Refactoring never ends!

Growing the fwupd ecosystem

Yesterday I wrote a blog about what hardware vendors need to provide so I can write them a fwupd plugin. A few people contacted me telling me that I should make it more generic, as I shouldn’t be the central point of failure in this whole ecosystem. The sensible thing, of course, is growing the “community” instead, and building up a set of (paid) consultants that can help the OEMs and ODMs, only getting me involved to review pull requests or for general advice. This would certainly reduce my current feeling of working at 100% and trying to avoid burnout.

As a first step, I’ve created an official page that will list any consulting companies that I feel are suitable to recommend for help with fwupd and the LVFS. The hardware vendors would love to throw money at this stuff, so they don’t have to care about upstream project release schedules or deal with a grumpy maintainer like me. I’ve pinged the usual awesome people like Igalia, and hopefully more companies will be added to this list during the next few days.

If you do want your open-source consultancy to be added, please email me a two paragraph corporate-friendly blurb I can include on that new page, also with a link I can use for the “more details” button. If you’re someone I’ve not worked with before, you should be in a position to explain the difference between a capsule update and a DFU update, and be able to tell me what a version format is. I don’t want to be listing companies that don’t understand what fwupd actually is :)

November 18, 2019

Extending proprietary PC embedded controller firmware

I'm still playing with my X210, a device that just keeps coming up with new ways to teach me things. I'm now running Coreboot full time, so the majority of the runtime platform firmware is free software. Unfortunately, the firmware that's running on the embedded controller (a separate chip that's awake even when the rest of the system is asleep and which handles stuff like fan control, battery charging, transitioning into different power states and so on) is proprietary and the manufacturer of the chip won't release data sheets for it. This was disappointing, because the stock EC firmware is kind of annoying (there's no hysteresis on the fan control, so it hits a threshold, speeds up, drops below the threshold, turns off, and repeats every few seconds - also, a bunch of the Thinkpad hotkeys don't do anything) and it would be nice to be able to improve it.

A few months ago someone posted a bunch of fixes, a Ghidra project and a kernel patch that lets you overwrite the EC's code at runtime for purposes of experimentation. This seemed promising. Some amount of playing later and I'd produced a patch that generated keyboard scancodes for all the missing hotkeys, and I could then use udev to map those scancodes to the keycodes that the thinkpad_acpi driver would generate. I finally had a hotkey to tell me how much battery I had left.

But something else included in that post was a list of the GPIO mappings on the EC. A whole bunch of hardware on the board is connected to the EC in ways that allow it to control them, including things like disabling the backlight or switching the wifi card to airplane mode. Unfortunately the ACPI spec doesn't cover how to control GPIO lines attached to the embedded controller - the only real way we have to communicate is via a set of registers that the EC firmware interprets and does stuff with.

One of those registers in the vendor firmware for the X210 looked promising, with individual bits that looked like radio control. Unfortunately writing to them does nothing - the EC firmware simply stashes that write in an address and returns it on read without parsing the bits in any way. Doing anything more with them was going to involve modifying the embedded controller code.

Thankfully the EC has 64K of firmware and is only using about 40K of that, so there's plenty of room to add new code. The problem was generating the code in the first place and then getting it called. The EC is based on the CR16C architecture, which binutils supported until 10 days ago. To be fair it didn't appear to actually work, and binutils still has support for the more generic version of the CR16 family, so I built a cross assembler, wrote some assembly and came up with something that Ghidra was willing to parse except for one thing.

As mentioned previously, the existing firmware code responded to writes to this register by saving it to its RAM. My plan was to stick my new code in unused space at the end of the firmware, including code that duplicated the firmware's existing functionality. I could then replace the existing code that stored the register value with code that branched to my code, did whatever I wanted and then branched back to the original code. I hacked together some assembly that did the right thing in the most brute force way possible, but while Ghidra was happy with most of the code it wasn't happy with the instruction that branched from the original code to the new code, or the instruction at the end that returned to the original code. The branch instruction differs from a jump instruction in that it gives a relative offset rather than an absolute address, which means that branching to nearby code can be encoded in fewer bytes than going further. I was specifying the longest jump encoding possible in my assembly (that's what the :l means), but the linker was rewriting that to a shorter one. Ghidra was interpreting the shorter branch as a negative offset, and it wasn't clear to me whether this was a binutils bug or a Ghidra bug. I ended up just hacking that code out of binutils so it generated code that Ghidra was happy with and got on with life.

Writing values directly to that EC register showed that it worked, which meant I could add an ACPI device that exposed the functionality to the OS. My goal here is to produce a standard Coreboot radio control device that other Coreboot platforms can implement, and then just write a single driver that exposes it. I wrote one for Linux that seems to work.

In summary: closed-source code is more annoying to improve, but that doesn't mean it's impossible. Also, strange Russians on forums make everything easier.


LAS 2019: A GNOME + KDE conference

Thanks to the sponsorship of GNOME, I was able to attend the Linux App Summit 2019 held in Barcelona. This conference was hosted by two free desktop communities, GNOME and KDE. The technologies usually used to create applications are GTK and Qt, and the objective of this conference was to present ongoing application projects that run on many Linux platforms and beyond, on both desktops and mobiles. The surrounding ecosystem, the commercial side and the project manager perspective were also presented over three core days. I had the chance to hear some talks as pictured: Adrien Plazas, Jordan and Tobias, and Florian are pictured first. The keynote, “The Economics of FOSS”, was given by Mirko Boehm; Valentin and Adam Jones from Freedesktop SDK also spoke, and Nick Richards pointed out the “write” strategy. You might see more details on Twitter.

Women’s presence was very noticeable at this conference. As shown in the following picture, the UX designer Robin presented a communication approach to understand what users want, the developer Heather Ellsworth explained her work at Ubuntu making GNOME Snap apps, and the enthusiastic Aniss from the OpenStreetMap community gave a lightning talk about her experiences making a FOSS community stronger. At the bottom of the picture we see the point of view of the database admin, Shola; the KDE developer, Hannah; and the closing ceremony presented by Muriel (local team organizer).

On Friday, some BoFs were held. The engagement BoF, led by Nuritzi, is pictured first, followed by the KDE team. The Snap Packaging Workshop happened in the meeting room.

Lightning talks were also part of this event at the end of every day. Nuritzi was given a prize for her effort in running the event. Thanks Julian & Tobias for joining me to see Park Güell. Social events were also arranged: we started a tour from the Casa Batlló and walked towards the Gothic quarter. The tours happened at night after the talks, and lasted 1.5 h. Food expenses were covered by GNOME in my case, as well as for other members. Thanks!

My participation basically consisted of a talk in the unconference part; I also organized the GNOME games with Jorge (a local organizer) and wrote some GTK code in C with Matthias. The games started with the “Glotones”, where we used flans to eat quickly; the “wise man”, where lots of Linux questions were asked; and the “Sing or die” game, where the participants changed the lyrics of catchy songs using the words GNOME, KDE and LinuxAppSummit. Some of the moments were pictured as follows: the imagination of the teams was fantastic, and they sang and created “geek” choreographies as we requested. One of the games that lasted until the very end was “Guessing the word”; the words depicted in the photo are LAS, root, and GPL, played by Nuritzi, Neil, and Jordan, respectively. It was lovely to see again long-time GNOME members such as Florian, who is always supporting my ideas for the GNOME games 🙂 the generous, intelligent and funny Javier Jardon, and the famous GNOME developer Zeeshan, who also loves Rust and airplanes.

It was also delightful to meet new people. I met GNOME people such as Ismael, and Daniel, who helped me debug my mini GTK code. I also met KDE people such as Albert and Muriel. In the last photo, we are pictured with the “wise man” and the “flan man”.

Special thanks to the local organizers Jorge and Aleix, to Ismael for supporting me through almost the whole conference with my flu, and to Nuritzi for the sweet chocolates she gave me. The group photo was a success, and in general I liked the LAS 2019 event in Barcelona.

Barcelona is a place with novel architectures and I enjoyed the walking time there…

Thanks again GNOME! I will finish the image-reconstruction GTK code I started at this event, and in the near future also make it run in parallel on HPC machines.

November 15, 2019

Creating a USB3 OTG cable for the Thinkpad 8

The Lenovo Thinkpad 8 and the Asus T100CHI both have a USB3 micro-B connector, but using a standard USB3 OTG cable (USB3 micro-B to USB3-A receptacle) results in only USB2 devices working; USB3 devices are not recognized.

Searching the internet shows that many people have this problem and that the claimed solution is to find a USB3 micro-A to USB3-A receptacle cable. This sounds like nonsense to me, as micro-B really is micro-AB and is supposed to use the ID pin to automatically switch between modes depending on the cable used; and this does work for the USB2 part of the micro-B connector on the Thinkpad. Yet people do claim success with such cables (with a more square micro-A connector instead of the trapezoid micro-B connector). The only problem is that such cables are not for sale anywhere.

So my guess was that what these cables really do is swap the RX and TX superspeed pairs on the USB3-only part of the micro-B connector, and I decided to cut open one of my USB3 micro-B to USB3-A receptacle cables and swap the superspeed pairs myself. Here is what the cable looks like when it is cut open:



If you are going to make such a cable yourself: to get to this point, I first removed the outer plastic insulation (part of it is still there on the right side of the picture). Then I folded away the shield wires until they were all on one side (the wires at the top of the picture). After this I removed the metal foil underneath the shield wires.

Having removed the first 2 layers of shielding reveals 4 wires in the standard USB2 colors (red, black, green and white) and 2 separately shielded cable pairs. In the picture above the separately shielded pairs have been cut, giving us 4 pairs, 2 on each end of the cable; the shielding has been removed from 3 of the 4 pairs, and you can still see it on the 4th pair.

A standard USB3 cable uses the following color codes:

  • Red: Vbus / 5 volt

  • White:  USB 2.0 Data -

  • Green: USB 2.0 Data +

  • Black: Ground

  • Purple: Superspeed RX -

  • Orange: Superspeed RX +

  • Blue: Superspeed TX -

  • Yellow: Superspeed TX +

So to swap RX and TX we need to connect purple to blue / blue to purple and orange to yellow / yellow to orange, resulting in:
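The joins described above can be summarized as a simple mapping. The sketch below (my own illustration, not part of the original mod) assumes the standard USB3 color codes listed earlier; the USB2 data wires and power are joined straight through, while only the superspeed pairs cross over:

```python
# Wire-join mapping for the RX/TX swap, assuming standard USB3 colors.
# Superspeed pairs cross over; USB2 data and power go straight through.
SWAP = {
    "purple": "blue",    # SS RX-  joins  SS TX-
    "orange": "yellow",  # SS RX+  joins  SS TX+
    "blue": "purple",    # SS TX-  joins  SS RX-
    "yellow": "orange",  # SS TX+  joins  SS RX+
    "red": "red",        # Vbus / 5 V, straight through
    "black": "black",    # Ground, straight through
    "green": "green",    # USB 2.0 Data +, straight through
    "white": "white",    # USB 2.0 Data -, straight through
}

def joined_to(color):
    """Return the wire color on the other cut end that a given wire joins to."""
    return SWAP[color]
```

Note that the mapping is its own inverse: following any join from the other end brings you back to the wire you started from.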



Note that the wires are just braided together here, not soldered yet. This is a good moment to carefully test the cable. Also note that the superspeed wire pairs must be length-matched, so you need to cut and strip all 8 wires to the same length! If everything works, you can put some solder on the braided-together wires, re-test after soldering, and then cover them with some heat-shrink tube:



And then cover the entire junction with a bigger heat-shrink-tube:



And you have a superspeed capable cable even though no one will sell you one.
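One way to confirm the cable actually negotiates SuperSpeed is to look at the link speed reported by `lsusb -t`: a USB3 SuperSpeed link shows up as 5000M. The small sketch below (my own addition, with made-up sample output for illustration) just filters `lsusb -t` lines for that speed:

```python
import re

def superspeed_ports(lsusb_t_output):
    """Return the lines of `lsusb -t` output that report a 5000M (USB3) link."""
    return [line for line in lsusb_t_output.splitlines()
            if re.search(r"\b5000M\b", line)]

# Hypothetical sample output; in practice, feed in the real output of
# `lsusb -t` with your device connected through the modified cable.
sample = """\
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 5000M
    |__ Port 1: Dev 2, Class=Mass Storage, Driver=usb-storage, 5000M
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
"""
# A device on the modified cable should appear at 5000M, not 480M.
print(len(superspeed_ports(sample)))
```

If the device only shows up at 480M, it fell back to USB2, which is exactly the symptom the unmodified cable gives.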

Note that the Thinkpad 8 supports ACA mode, so if you get an ACA-capable "Y" cable or an ACA charging hub, you can charge and use the Thinkpad 8 USB port at the same time. Typically ACA "Y" cables and hubs are USB2 only, so the superspeed mod from this blogpost will not help with those. The Asus T100CHI has a separate USB2 micro-B port just for charging, so you do not need anything special there to charge and connect a USB device at the same time.