March 25, 2019

Even more fun with SuperIO

My fun with SuperIO continues, and may now be at its logical end. I’ve now added the required code to the superio plugin to flash IT89xx embedded controllers. Most of the work was figuring out how to talk to the hardware on ports 0x62 and 0x66, although the flash “commands” are helpfully JEDEC compliant. The actual flashing process is the typical:

  • Enter into a bootloader mode (which disables your keyboard, fans and battery reporting)
  • Mark the internal EEPROM as writable
  • Erase blocks of data
  • Write blocks of data to the device
  • Read back the blocks of data to verify the write
  • Mark the internal EEPROM as read-only
  • Return to runtime mode
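
For the curious, “talking to the hardware on ports 0x62 and 0x66” boils down to the classic embedded-controller handshake: 0x66 is the command/status port, 0x62 is the data port, and you poll the IBF/OBF status bits between every byte. Here is a minimal, illustrative sketch in C. It is not the actual fwupd plugin code: the command byte and register are placeholders, and real code would add timeouts and error handling.

#include <stdint.h>
#include <sys/io.h>   /* inb(), outb(), ioperm(); Linux on x86, needs root */

#define EC_DATA    0x62  /* data port */
#define EC_CMD     0x66  /* command/status port */
#define EC_STS_OBF 0x01  /* output buffer full: a byte is ready to read */
#define EC_STS_IBF 0x02  /* input buffer full: EC still busy with the last write */

/* spin until the EC has consumed the last byte we wrote */
static void ec_wait_write (void) { while (inb (EC_CMD) & EC_STS_IBF); }
/* spin until the EC has a byte ready for us */
static void ec_wait_read (void)  { while (!(inb (EC_CMD) & EC_STS_OBF)); }

static void ec_write_cmd (uint8_t cmd)   { ec_wait_write (); outb (cmd, EC_CMD); }
static void ec_write_data (uint8_t data) { ec_wait_write (); outb (data, EC_DATA); }
static uint8_t ec_read_data (void)       { ec_wait_read (); return inb (EC_DATA); }

int
main (void)
{
  /* request access to the two EC ports */
  if (ioperm (EC_DATA, 1, 1) < 0 || ioperm (EC_CMD, 1, 1) < 0)
    return 1;

  /* placeholder transaction: 0x80 is the generic ACPI "read EC register"
   * command, reading register 0x00 purely as a demonstration */
  ec_write_cmd (0x80);
  ec_write_data (0x00);
  (void) ec_read_data ();
  return 0;
}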

There were a few slight hiccups, in that when you read the data back from the device just one byte is predictably wrong, but nothing that can’t be worked around in software. Working around the wrong byte means we can verify the attestation checksum correctly.

Now, don’t try flashing your EC with random binaries. The binaries look unsigned, don’t appear to have any kind of checksum, and flashing the wrong binary to the wrong hardware has the failure mode of “no I/O devices appear at boot”, so unless you have a hardware programmer handy it’s probably best to wait for an update from your OEM.

We also do the EC update from a special offline-update mode where nothing other than fwupd is running, much like we do the system updates in Fedora. All this work was supported by the people at Star Labs, and now basically everything in the LapTop Mk3 is updatable in Linux. EC updates for Star Labs hardware should appear on the LVFS soon.

March 22, 2019

Parental controls & metered data hackfest: days 3 & 4

no nice picture this time, sorry 😭

Following on from the first two days, in days 3 & 4 we moved on to talking about metered data. There’s some debate about whether this is the correct terminology to use, but that’s the title of the hackfest so I’ll stick with it for now.

This is a set of features to handle internet connections that have limited amounts of data in some way. For example, I’ve personally got a “MiFi” mobile hotspot that provides internet over the 4G mobile network. I bought a pay-as-you-go SIM card that provides 32 GB of data, and when that’s used up I’ll have to recharge it to be able to get online.

Philip provided a summary of the current implementation in Endless. You can also watch a YouTube video of a talk Philip gave on this topic at GUADEC in 2018. Briefly, this is an opt-in system. The system knows the details of your tariff, some of which you can provide in Settings, but which may also come with the system – for example if it’s sold with a mobile network connection already provisioned. Applications ask a new component for permission to begin a large data transfer, and this component tells the application when it can begin the transfer. This could be immediately or later on.

We reviewed prior art (other OS implementations), and had some discussion about types of metered connection that you might have. This turns out to be very complex! Some providers offer plans which come with “N hours” or “N bytes” restrictions. Others exempt certain websites from metering entirely.

Another topic is logging data usage across different applications. systemd has features to log the network traffic of units, so once we have proper support for using systemd to start desktop applications, we can begin to track this and then think about how to expose it in the UI.

I was personally arguing for somehow tracking the global amount of network traffic (tx/rx_bytes), so that the shell can tell you when you are approaching / over your cap. My feeling is that this would require some integration with NetworkManager, but we would need to work out the details of what to track – total bytes isn’t good enough; you would at least need bytes per day/week/… to implement support for some types of metered connection.

Finally, if you’re reading this through Planet GNOME then you’ve already seen it, but Philip created a survey that we would like to use to help guide future developments. Go fill it out please.

Parental Controls and Metered Data Hackfest

This week I participated in the Parental Controls and Metered Data Hackfest, which was held at Red Hat’s London office.

Parental controls and metered data already exist in Endless and/or elementary OS in some shape or form. The goal of the hackfest was to plan how to upstream the features to GNOME. It’s great to see this kind of activity from downstreams so I was very happy to contribute in my capacity as an upstream UX designer.

There have been a fair few blog posts about the event already, so I’m going to try and avoid repeating what’s already been written…

Parental controls

Parental controls sound like a niche feature, but they actually have wider applicability than limiting what the kids can do with your laptop. This is because the same features that are used by parental controls can be useful for other types of functionality, particularly around “digital well-being”. For example, a parent might want to limit how much time their child spends using the computer, but someone might want to self-impose this same limit on themselves, in order to try and lead a healthier lifestyle.

Furthermore, outside of parental controls, the same functionality can be pitched in different ways. A feature like limiting the use of particular apps to certain times of the day could either be presented as a “digital well-being” feature, where the goal is to be happier and healthier, or as a “productivity” feature, where the goal is to help someone get more out of their time in front of the screen.

There are some interesting user experience questions that need to be answered here, such as to what extent we should focus on particular use cases, as well as what those use cases should be.

We discussed these questions a bit during the hackfest, but more thought is going to be necessary. The other next step will be to figure out what the initial MVP should be for these features, since they could potentially be quite extensive.

Metered data

Metered network connections are those that either have usage limits attached to them, or those which have financial costs for usage. In both cases this requires that we limit automatic/background network usage, as well as potentially showing warnings if the user is doing something that could result in high data usage.

My main interest in this area is to ensure that GNOME behaves correctly when people use mobile broadband, either by tethering their phone or when using a dedicated mobile broadband connection. (There’s nothing more frustrating than your laptop silently chewing through your data plan.)

The first target for this work is to make sure that automatic software updates behave well, but there’s some other interesting work that could come out of it, particularly around controls for whether unfocused or backgrounded apps are allowed to use the network.

Philip Withnall has created a survey to find out about people’s experiences using metered data. Please fill it out if you haven’t already!

Credits

The hackfest was a great event, and I’d like to thank the following people and organisations for making it possible:

  • Philip Withnall for organising the event
  • The GNOME Foundation for sponsoring me to attend
  • Red Hat for providing the venue

March 21, 2019

Metered data hackfest

tl;dr: Please fill out this survey about metered data connections, regardless of whether you run GNOME or often use metered data connections.

We’re now into the second day of the metered data hackfest in London. Yesterday we looked at Endless’ existing metered data implementation, which is restricted to OS and application updates, and discussed how it could be reworked to fit in with the new control centre design, and which applications would benefit from scheduling their large downloads to avoid using metered data unnecessarily (and hence costing the user money).

The conclusion was that the first step is to draw up a design for the control centre integration, which determines when to allow downloads on metered connections, and which connections are actually metered. Then to upstream the integration of metered data with gnome-software, so that app and OS updates adhere to the policy. Integration with other applications which do large downloads (such as podcasts, file syncing, etc.) can then follow.

While looking at metered data, however, we realised we don’t have much information about what types of metered data connections people have. For example, do connections commonly limit people to a certain amount of downloads per month, or per day? Do they have a free period in the middle of the night? We’ve put together a survey for anyone to take (not just those who use GNOME, or who use a metered connection regularly) to try and gather more information. Please fill it out!

Today, the hackfest is winding down a bit, with people quietly working on issues related to parental controls or metered data, or on upstream development in general. Richard and Kalev are working on gnome-software issues. Georges and Florian are working on gnome-shell issues.

Using hexdump to print binary protocols

I had to work on an image yesterday where I couldn't install anything and the set of pre-installed tools was quite limited. And I needed to debug an input device, usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.

hexdump's format string is single-quote-enclosed string that contains the count, element size and double-quote-enclosed printf-like format string. So a simple example is this:


$ hexdump -v -e '1/2 "%d\n"'
-11643
23698
0
0
-5013
6
0
0

This prints 1 element ('iteration') of 2 bytes as an integer, followed by a linebreak. Or in other words: it takes two bytes, converts them to an int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.

$ hexdump -v -e '1/2 "%d "' -e '1/2 "%x\n"'
-11568 d2d0
23698 5c92
0 0
0 0
6355 18d3
1 1
0 0

This prints the same 2-byte input value, once as a signed decimal integer, once as lowercase hex. If we have multiple identical things to print, we can do this:

$ hexdump -v -e '2/2 "%6d "' -e '" hex:"' -e '4/1 " %x"' -e '"\n"'
-10922 23698 hex: 56 d5 92 5c
0 0 hex: 0 0 0 0
14879 1 hex: 1f 3a 1 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0

This prints two elements, each of size 2, as integers, then the same elements as four 1-byte hex values, followed by a linebreak. %6d is a standard printf conversion and is documented in the manual.

Let's go and print our protocol. The struct representing the protocol is this one:


struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64)) && !defined(__KERNEL__)
        struct timeval time;
#define input_event_sec time.tv_sec
#define input_event_usec time.tv_usec
#else
        __kernel_ulong_t __sec;
#if defined(__sparc__) && defined(__arch64__)
        unsigned int __usec;
#else
        __kernel_ulong_t __usec;
#endif
#define input_event_sec __sec
#define input_event_usec __usec
#endif
        __u16 type;
        __u16 code;
        __s32 value;
};

So we have two longs for sec and usec, two shorts for type and code and one signed 32-bit int. Let's print it:

$ hexdump -v -e '"E: " 1/8 "%u." 1/8 "%06u" 2/2 " %04x" 1/4 "%5d\n"' /dev/input/event22
E: 1553127085.097503 0002 0000 1
E: 1553127085.097503 0002 0001 -1
E: 1553127085.097503 0000 0000 0
E: 1553127085.097542 0002 0001 -2
E: 1553127085.097542 0000 0000 0
E: 1553127085.108741 0002 0001 -4
E: 1553127085.108741 0000 0000 0
E: 1553127085.118211 0002 0000 2
E: 1553127085.118211 0002 0001 -10
E: 1553127085.118211 0000 0000 0
E: 1553127085.128245 0002 0000 1

And voilà, we have our structs printed in the same format evemu-record prints out. So with nothing but hexdump, I can generate output I can then parse with my existing scripts on another box.

March 20, 2019

2019-03-20 Wednesday

  • Mail chew, admin; team all-hands over lunch midday. Built ESC agenda, poked financials again. Looked at SPADE with the babes: perhaps avoiding Art GCSE was indeed a smart move.

GNOME Bugzilla closed for new bug entry

As part of GNOME’s ongoing migration from Bugzilla to Gitlab, from today on there are no products left in GNOME Bugzilla which allow the creation of new tickets.
The ID of the last GNOME Bugzilla ticket is 797430 (note that there are gaps between 173191–200000 and 274555–299999 as the 2xxxxx ID range was used for tickets imported from Ximian Bugzilla).

Since the year 2000, the Bugzilla software had served as GNOME’s issue tracking system. As forges emerged which offer tight and convenient integration of issue tracking, code review of proposed patches, automated continuous integration testing, code repository browsing and hosting and further functionality, Bugzilla’s shortcomings became painful obstacles for modern software development practices.

Nearly all products which used GNOME Bugzilla have moved to GNOME Gitlab to manage issues. A few projects (Bluefish, Doxygen, GnuCash, GStreamer, java-gnome, LDTP, NetworkManager, Tomboy) have moved to other places (such as freedesktop.org Gitlab, self-hosted Bugzilla instances, or Github) to track their issues.

Reaching this milestone required, over the last months, finding and contacting the maintainers of mostly less active projects which still used GNOME Bugzilla for their issue tracking, and discussing the migration with them.
For convenience, there are redirects in place (for those websites out there which still directly link to Bugzilla’s ticket creation page) to guide users to the new issue tracking venues.

Note that closing only refers to creating new tickets: There are still 189 products with 21019 open tickets in GNOME Bugzilla. IMO these tickets should either get migrated to Gitlab or mass-closed on a per-product basis, depending on maintainers’ preferences. The long-term goal should be making GNOME Bugzilla completely read-only.

I also fixed the custom “Browse” product pages in GNOME Bugzilla so they get displayed again (the previous code expected products to be open for new bug entry). This should make it easier for maintainers to potentially triage and clean up their old open tickets in Bugzilla.

Thanks to Carlos and Andrea and everyone involved for all their help!

PS: Big Thanks to Lenka and everyone who signed the postcard for me at FOSDEM 2019. Missed you too! :)

Parental controls hackfest

Various of us have been meeting in the Red Hat offices in London this week (thanks Red Hat!) to discuss parental controls and digital wellbeing. The first two days were devoted to this; today and tomorrow will be dedicated to discussing metered data (which is unrelated to parental controls, but the hackfests are colocated because many of the same people are involved in both).

Parental controls discussions went well. We’ve worked out a rough scope of what features we are interested in integrating into GNOME, and how parental controls relates to digital wellbeing. In this context, we’re considering parental controls to be allowing parents to limit what their children can do on a computer, in terms of running different applications or games, or spending certain amounts of time on the computer.

Digital wellbeing is many of the same concepts – limiting time usage of the computer or applications, or access to certain websites – but applied in a way to give yourself ‘speed bumps’ to help your productivity by avoiding distractions at work.

Allan produced some initial designs for the control centre UI for parental controls and digital wellbeing, and we discussed various minor issues around them, and how to deal with the problem of allowing people to schedule times when apps, or whole groups of apps, are to be blocked, without making the UI too complex. There’s some more work to do there.

On Tuesday evening, we joined some of the local GNOME developers in London for beers, celebrating the 3.32 GNOME release.

We’re now looking at metered data, which is the idea that large downloads should be limited and scheduled according to the user’s network tariff, which might limit what can be downloaded during a certain time period, or provide certain periods of the night when downloads are unmetered. More to come on that later.

For other write ups of what we’ve been doing, see Iain’s detailed write up of the first two days, or the raw hackfest notes.

March 19, 2019

GNOME ED Update – February

Another update is now due from what we’ve been doing at the Foundation, and we’ve been busy!

As you may have seen, we’ve hired three excellent people over the past couple of months. Kristi Progri has joined us as Program Coordinator, Bartłomiej Piorski as a devops sysadmin, and Emmanuele Bassi as our GTK Core developer. I hope to announce another new hire soon, so watch this space…

There’s been quite a lot of discussion around the Google API access, and GNOME Online Accounts. The latest update is that I submitted the application to Google to get GOA verified, and we’ve got a couple of things we’re working through to get this sorted.

Events all round!

Although the new year’s conference season is just kicking off, it’s been a busy one for GNOME already. We were at FOSDEM in Brussels where we had a large booth, selling t-shirts, hoodies and of course, the famous GNOME socks. I held a meeting of the Advisory Board, and we had a great GNOME Beers event – kindly sponsored by Codethink.

We also had a very successful GTK Hackfest – moving us one step closer to GTK 4.0.

Coming up, we’ll have a GNOME booth at:

  • SCALEx17 – Pasadena, California (7th – 10th March)
  • LibrePlanet – Boston Massachusetts (23rd – 24th March)
  • FOSS North – Gothenburg, Sweden (8th – 9th April)
  • Linux Fest North West – Bellingham, Washington (26th – 28th April)

If you’re at any of these, please come along and say hi! We’re also planning out events for the rest of the year. If anyone has any particularly exciting conferences we may not have heard of, please let us know.

Discourse

It hasn’t yet been announced, but we’re trialling an instance of Discourse for the GTK and Engagement teams. We’re hopeful that this may replace mailman, but we’re being quite careful to make sure that email integration continues to work. Expect more information about this in the coming month. If you want to go have a look, the instance is available at discourse.gnome.org

2019-03-19 Tuesday

  • Mail chew, slideware, product management bits. Sync with Tracie, monthly mgmt call, debugging weird online corner-case.

Epiphany Technology Preview Upgrade Requires Manual Intervention

Jan-Michael has recently changed Epiphany Technology Preview to use a separate app ID. Instead of org.gnome.Epiphany, it will now be org.gnome.Epiphany.Devel, to avoid clashing with your system version of Epiphany. You can now have separate desktop icons for both system Epiphany and Epiphany Technology Preview at the same time.

Because flatpak doesn’t provide any way to rename an app ID, this means it’s the end of the road for previous installations of Epiphany Technology Preview. Manual intervention is required to upgrade. Fortunately, this is a one-time hurdle, and it is not hard:

$ flatpak uninstall org.gnome.Epiphany

Uninstall the old Epiphany…

$ flatpak install gnome-apps-nightly org.gnome.Epiphany.Devel org.gnome.Epiphany.Devel.Debug

…install the new one, assuming that your remote is named gnome-apps-nightly (the name used locally may differ), and that you also want to install debuginfo to make it possible to debug it…

$ mv ~/.var/app/org.gnome.Epiphany ~/.var/app/org.gnome.Epiphany.Devel

…and move your personal data from the old app to the new one.

Then don’t forget to make it your default web browser under System Settings -> Details -> Default Applications. Thanks for testing Epiphany Technology Preview!

Parental controls & metered data hackfest: days 1 & 2

I’m currently at the Parental Controls & Metered Data hackfest at Red Hat’s office in London. A bunch of GNOME people from various companies (Canonical, Endless, elementary, and Red Hat) have gathered to work out a plan to start implementing these two features in GNOME. The first two days have been dedicated to the parental control features. This is the ability for parents to control what children can do on the computer. For example, locking down access to certain applications or websites.

Day one began with presentations of the Endless OS implementation by Philip, followed by a demonstration of the Elementary version by Cassidy. Elementary were interested in potentially expanding this feature set to include something like Digital Wellbeing, so we explored the distinction between this and parental controls. It turns out that these features are relatively similar – the main differences are whether you are applying restrictions to yourself or to someone else, and whether you have the ability to lift/ignore the restrictions. We’ve started talking about the latter of these as “speed bumps”: you can always undo your own restrictions, so the interventions from the OS should be intended to nudge you towards the right behaviour.

After that we looked at some prior art (Android, iOS), and started to take the large list of potential features (in the image above) down to the ones we thought might be feasible to implement. Throughout all of this, one topic we kept coming back to was app lockdown. It’s reasonably simple to see how this could be applied to containerised 📦 apps (e.g. Snap or Flatpak), but system applications that come from a deb or an rpm are much more difficult. It would probably be possible – but still difficult – to use an LSM like AppArmor or SELinux to do this by denying execute access to the application’s binary. One obvious problem with that is that GNOME doesn’t require one of these and different distributions have made different choices here… Another tricky topic is how to implement website white/blacklisting in a robust way. We discussed using DNS (systemd-resolved?) and ip/nftables implementations, but it might turn out that the most feasible way is to use a browser extension for this.

Adam Bieńkowski joined us to discuss the technical details of Elementary’s implementation and some potential ideas for future improvements there. Thanks for that!

Today we’ve spent a fair bit of time discussing the technical details about how some of this might be implemented. Given that this is about locking down other users’ accounts, the data ought to be stored somewhere at the system level – both so the admin can query/set it, and so that the user can’t modify it. Endless’s current implementation stores this in AccountsService, which feels reasonable to us, but doesn’t extend well to storing the information required to implement activity tracking. Georges and Florian have been discussing writing a system daemon to do this, which the shell and (maybe) browser(s) would feed into.

More detailed notes taken by Philip are available here.

For the next two days we will move to talking about the second subject for this hackfest – data metering.

Introducing flat-manager

A long time ago I wrote a blog post about how to maintain a Flatpak repository.

It is still a nice, mostly up-to-date description of how Flatpak repositories work. However, it doesn’t really have a great answer to the issue the post calls “syncing updates”. In other words, it really is more about how to maintain a repository on one machine.

In practice, at least on a larger scale (e.g. Flathub) you don’t want to do all the work on a single machine like this. Instead you have an entire build-system where the repository is the last piece.

Enter flat-manager

To support this I’ve been working on a side project called flat-manager. It is a service written in Rust that manages Flatpak repositories. Recently we migrated Flathub to use it, and it seems to work quite well.

At its core, flat-manager serves and maintains a set of repos, and has an API that lets you push updates to it from your build-system. However, the way it is set up is a bit more complex, which allows some interesting features.

Core concept: a build

When updating an app, the first thing you do is create a new build, which just allocates an id that you use in later operations. Then you can do one or more uploads against this id.

This separation of the build creation and the upload is very powerful, because it allows you to upload the app in multiple operations, potentially from multiple sources. For example, in the Flathub build-system each architecture is built on a separate machine. Before flat-manager we had to collect all the separate builds on one machine before uploading to the repo. In the new system each build machine uploads directly to the repo with no middle-man.

Committing or purging

An important idea here is that the new build is not finished until it has been committed. The central build-system waits until all the builders report success before committing the build. If any of the builds fail, we purge the build instead, making it as if the build never happened. This means we never expose partially successful builds to users.

Once a build is committed, flat-manager creates a separate repository containing only the new build. This allows you to use Flatpak to test the build before making it available to users.

This makes builds useful even for builds that were never supposed to be generally available. Flathub uses this for test builds: if you make a pull request against an app it will automatically build it and add a comment in the pull request with the build results and a link to the repo where you can test it.

Publishing

Once you are satisfied with the new build you can trigger a publish operation, which will import the build into the main repository and do all the required operations, like:

  • Sign builds with GPG
  • Generate static deltas for efficient updates
  • Update the appstream data and screenshots for the repo
  • Generate flatpakref files for easy installation of apps
  • Update the summary file
  • Call out to scripts that let you do local customization

The publish operation is actually split into two steps: first it imports the build result into the repo, and then it queues a separate job to do all the updates needed for the repo. This way, if multiple builds are published at the same time the update can be shared. This saves time on the server, but it also means fewer updates to the metadata, which means less churn for users.

You can use whatever policy you want for how and when to publish builds. Flathub lets individual maintainers choose, but by default successful builds are published after 3 hours.

Delta generation

The traditional way to generate static deltas is to run flatpak build-update-repo --generate-static-deltas. However, this is a very computationally expensive operation that you might not want to do on your main repository server. It’s also not very flexible in which deltas it generates.

To minimize the server load flat-manager allows external workers that generate the deltas on different machines. You can run as many of these as you want and the deltas will be automatically distributed to them. This is optional, and if no workers connect the deltas will be generated locally.

flat-manager also has configuration options for which deltas should be generated. This allows you to avoid generating unnecessary deltas and to add extra levels of deltas where needed. For example, Flathub no longer generates deltas for sources and debug refs, but we have instead added multiple levels of deltas for runtimes, allowing you to go efficiently to the current version from either one or two versions ago.

Subsetting tokens

flat-manager uses JSON Web Tokens to authenticate API clients. This means you can assign different permissions to different clients. Flathub uses this to give minimal permissions to the build machines. The tokens they get only allow uploads to the specific build they are currently handling.

This also allows you to hand out access to parts of the repository namespace. For instance, the GNOME project has a custom token that allows them to upload anything in the org.gnome.Platform namespace in Flathub. This way GNOME can control the build of their runtime and upload a new version whenever they want, but they can’t (accidentally or deliberately) modify any other apps.

Rust

I need to mention Rust here too. This is my first real experience with using Rust, and I’m very impressed by it. In particular, I like the sense of trust I have in the code once I’ve got it past the compiler. The compiler caught a lot of issues, and once things built I saw very few bugs at runtime.

It can sometimes be a lot of work to express the code in a way that Rust accepts, which makes it not an ideal language for sketching out ideas. But for production code it really excels, and I can heartily recommend it!

Future work

Most of the initial list of features for flat-manager is now there, so I don’t expect it to see a lot of work in the near future.

However, there is one more feature that I want to see: the ability to (automatically) create subset versions of the repository. In particular, we want to produce a version of Flathub containing only free software.

I have the initial plans for how this will work, but it is currently blocking on some work inside OSTree itself. I hope this will happen soon though.

March 16, 2019

Maps and GNOME 3.32

So, a couple of days ago the GNOME 3.32 release came out and I thought I should share something about the news on the Maps side of things, although I think most of this has been covered in previous posts.

First up we have gotten a new application icon as part of the major overhaul of the icon style.


Furthermore, the application menu has been moved into a “hamburger menu” inside the main window, in line with the other applications in the desktop. This goes hand-in-hand with the gnome-shell top bar application menu no longer showing this application-specified menu, since it was not considered very intuitive and few third-party apps made use of it. But I'm pleased to see that the icon of the currently focused app is still shown in the top bar, as I think this is a good visual cue there.






And the other notable UI fix is showing live-updated thumbnails in the layer selection menu on the buttons used to switch between map and aerial view (contributed by James Westman).


These screenshots also show some glimpses of the new GTK theme, which I think is pretty sleek, so well done to the designers!

There have also been some under-the-hood fixes silencing some compiler warnings (for the C glue library), contributed by Debarshi Ray.

Looking forward, I started work on an issue that has been lying around in the bug tracker since I registered it around two years ago (tagged with the "newcomers" tag in the hope someone would take it on :) ). It is about the way we use a GtkOffscreenWindow to render the output when generating printouts of a routing search. This was done by instantiating the same widgets used to render the route instructions in the routing sidebar and then attaching them to an offscreen window to render them to bitmaps. But as this method will not work with GTK 4 (due to a different rendering architecture), this has to eventually be rewritten. So I started rewriting this code to use Cairo and Pango directly to render the icons and text strings for the printed instructions. There are some gotchas with layout and right-to-left locales, but so far I think it's working out right for the turn-based routes, as shown by these screenshots.




The latter screenshot shows a rendition using a Farsi locale (which is RTL and uses the Arabic script).
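
Maps itself is written in JavaScript, but for illustration this is roughly what drawing a text string straight onto a Cairo context with Pango looks like in C (Pango takes care of the shaping, bidi and line breaking that make right-to-left scripts work). This is only a sketch: the surface size, font and instruction text are made up.

#include <cairo.h>
#include <pango/pangocairo.h>

int
main (void)
{
  /* draw into a small image surface just for demonstration */
  cairo_surface_t *surface =
      cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 400, 50);
  cairo_t *cr = cairo_create (surface);

  /* a PangoLayout handles shaping, bidi and line breaking for us */
  PangoLayout *layout = pango_cairo_create_layout (cr);
  PangoFontDescription *desc =
      pango_font_description_from_string ("Cantarell 12");
  pango_layout_set_font_description (layout, desc);
  pango_font_description_free (desc);
  pango_layout_set_text (layout, "Turn right onto Main Street", -1);

  cairo_move_to (cr, 10, 10);
  pango_cairo_show_layout (cr, layout);

  g_object_unref (layout);
  cairo_surface_write_to_png (surface, "instruction.png");
  cairo_destroy (cr);
  cairo_surface_destroy (surface);
  return 0;
}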

That's it for now!

March 15, 2019

GNOME 3.32 and other ramblings

GNOME 3.32 was released this week. For all intents and purposes, it is a fantastic release, and I am already using the packages provided by Arch Linux’s gnome-unstable repository. Congratulations to everyone involved (including the Arch team for the super quick packaging!)

I have a few highlights, comments and thoughts I would like to share about this release, and since I own this blog, well, let me do it publicly! 🙂

Fast, Furiously Fast

The most promoted improvement in this release is the improved performance. Having worked on or reviewed some of these improvements myself, I found it a bit weird that some people were reporting enormous changes in performance. Of course, you should notice that GNOME Shell is smoother, and applications as well (when the compositor reliably sends frame ticks to applications, they also draw on time, and feel smoother as well.)

But people were telling me that these changes were game changing.

There is a grey area between the actual improvements and people just being happy and overly excited about them. And I thought the latter was the case.

But then I installed the non-debug packages from the Arch repositories and this is actually a game changer release. I probably got used to using Mutter and GNOME Shell manually compiled with all the debug and development junk, and didn’t really notice how much better they had become.

Better GNOME Music

Sweet, sweet GNOME Music

One of the applications that I enjoy the most in the GNOME core apps ecosystem is GNOME Music. In the past, I have worked on landing various performance improvements on it. Unfortunately, my contributions ceased last year, but I have been following the development of this pretty little app closely.

A lot of effort was put into modernizing GNOME Music, and it is absolutely paying off. It is more stable, better, and I believe it has reached the point where adding new features won’t drive contributors insane.

GNOME Web – a gem

In the past, I have tried making Web my main browser. Unfortunately, that did not work out very well, for two big reasons:

  • WordPress would not work, and as such, I couldn’t write blog posts using Web;
  • Google Drive (and a few other Google websites) would be subtly broken.

Both issues seem to be fixed now! In fact, as you can see from the previous screenshot, I am writing this post from Web. Which makes me super happy.

Even though I cannot use it 100% of the time (mainly due to online banking and Google Meets), I will experiment with making it my main browser for a few weeks and see how it goes.

GNOME Terminal + Headerbars = 💖

Do I even need to say something?

Hackfests

As I write this, I am getting ready for next week’s Parental Controls & Metered Data Hackfest in London. We will discuss and try to land in GNOME some downstream features available at Endless OS.

I’m also mentally preparing for the Content Apps Hackfest. And GUADEC. It is somewhat hard once you realize you have travel anxiety, and every week before traveling is a psychological war.

Other Thoughts

This was a peculiar release to me.

This is actually the first release where I spent serious time on Mutter and GNOME Shell. As I said in the past, it’s a new passion of mine. Both are complex projects that encompass many aspects of the user experience, and cleaning up the code and improving it has been fantastic so far. As such, it was and still is a challenge to split my time in such a fragmented way (it’s not like I don’t maintain GNOME Settings, GNOME Calendar, and GNOME To Do already.)

Besides that, I am close to finishing moving to a new home! This is an ongoing process, slow and steady, but the new place is becoming something I am growing to love and that feels like home.

Entries in GTK 4

One of the larger refactorings that recently landed in GTK master is re-doing the entry hierarchy. This post is summarizing what has changed, and why we think things are better this way.

Entries in GTK 3

Let’s start by looking at how things are in GTK 3.

GtkEntry is the basic class here. It implements the GtkEditable interface. GtkSpinButton is a subclass of GtkEntry. Over the years, more things were added. GtkEntry gained support for entry completion, and for embedding icons, and for displaying progress. And we added another subclass, GtkSearchEntry.

Some problems with this approach are immediately apparent. gtkentry.c is more than 11100 lines of code. It is not only very hard to add more features to this big codebase, it is also hard to subclass it – and that is the only way to create your own entries, since all the single-line text editing functionality is inside GtkEntry.

The GtkEditable interface is really old – it has been around since before GTK 2. Unfortunately, it has not really been successful as an interface – GtkEntry is the only implementation, and it uses the interface functions internally in a confusing way.

Entries in GTK 4

Now let’s look at how things are looking in GTK master.

The first thing we’ve done is to move the core text editing functionality of GtkEntry into a new widget called GtkText. This is basically an entry minus all the extras, like icons, completion and progress.

We’ve made the GtkEditable interface more useful, by adding some more common functionality (like width-chars and max-width-chars) to it, and made GtkText implement it. We also added helper APIs to make it easy to delegate a GtkEditable implementation to another object.

The ‘complex’ entry widgets (GtkEntry, GtkSpinButton, GtkSearchEntry) are now all composite widgets, which contain a GtkText child, and delegate their GtkEditable implementation to this child.

Finally, we added a new GtkPasswordEntry widget, which takes over the corresponding functionality that GtkEntry used to have, such as showing a Caps Lock warning or letting the user peek at the content.

Why is this better?

One of the main goals of this refactoring was to make it easier to create custom entry widgets outside GTK.

In the past, this required subclassing GtkEntry, and navigating a complex maze of vfuncs to override. Now, you can just add a GtkText widget, delegate your GtkEditable implementation to it, and have a functional entry widget with very little effort.
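
As a rough sketch of what that looks like, using the delegate helpers as they exist in the current GTK 4 API (names may have differed slightly on master at the time); property forwarding, sizing and CSS are left out, and the widget itself is made up:

#include <gtk/gtk.h>

#define MY_TYPE_TAG_ENTRY (my_tag_entry_get_type ())
G_DECLARE_FINAL_TYPE (MyTagEntry, my_tag_entry, MY, TAG_ENTRY, GtkWidget)

struct _MyTagEntry
{
  GtkWidget parent_instance;
  GtkWidget *text;   /* the GtkText child that does the actual editing */
};

static GtkEditable *
my_tag_entry_get_delegate (GtkEditable *editable)
{
  return GTK_EDITABLE (MY_TAG_ENTRY (editable)->text);
}

static void
my_tag_entry_editable_init (GtkEditableInterface *iface)
{
  iface->get_delegate = my_tag_entry_get_delegate;
}

G_DEFINE_TYPE_WITH_CODE (MyTagEntry, my_tag_entry, GTK_TYPE_WIDGET,
                         G_IMPLEMENT_INTERFACE (GTK_TYPE_EDITABLE,
                                                my_tag_entry_editable_init))

static void
my_tag_entry_init (MyTagEntry *self)
{
  self->text = gtk_text_new ();
  gtk_widget_set_parent (self->text, GTK_WIDGET (self));
  gtk_editable_init_delegate (GTK_EDITABLE (self));
}

static void
my_tag_entry_dispose (GObject *object)
{
  MyTagEntry *self = MY_TAG_ENTRY (object);

  if (self->text)
    gtk_editable_finish_delegate (GTK_EDITABLE (self));
  g_clear_pointer (&self->text, gtk_widget_unparent);

  G_OBJECT_CLASS (my_tag_entry_parent_class)->dispose (object);
}

static void
my_tag_entry_class_init (MyTagEntryClass *klass)
{
  G_OBJECT_CLASS (klass)->dispose = my_tag_entry_dispose;
  /* gtk_editable_install_properties () would go here to forward the
   * GtkEditable properties; a layout manager would handle sizing */
}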

And you have a lot of flexibility in adding fancy things around the GtkText component. As an example, we’ve added a tagged entry to gtk4-demo that can now be implemented easily outside GTK itself.

Will this affect you when porting from GTK 3?

There are a few possible gotchas to keep in mind while porting code to this new style of doing entries.

GtkSearchEntry and GtkSpinButton are no longer derived from GtkEntry. If you see runtime warnings about casting from one of these classes to GtkEntry, you most likely need to switch to using GtkEditable APIs.

GtkEntry and other complex entry widgets are no longer focusable – the focus goes to the contained GtkText instead. But gtk_widget_grab_focus() will still work, and move the focus to the right place. It is unlikely that you are affected by this.

The Caps Lock warning functionality has been removed from GtkEntry. If you were using a GtkEntry with visibility==FALSE for passwords, you should just switch to GtkPasswordEntry.

If you are using a GtkEntry for basic editing functionality and don’t need any of the extra entry functionality, you should consider using a GtkText instead.

A Rust API for librsvg

After the librsvg team finished the rustification of librsvg's main library, I wanted to start porting the high-level test suite to Rust. This is mainly to be able to run tests in parallel, which cargo test does automatically in order to reduce test times. However, this meant that librsvg needed a Rust API that would exercise the same code paths as the C entry points.

At the same time, I wanted the Rust API to make it impossible to misuse the library. From the viewpoint of the C API, an RsvgHandle has different stages:

  • Just initialized
  • Loading
  • Loaded, or in an error state after a failed load
  • Ready to render

To ensure consistency, the public API checks that you cannot render an RsvgHandle that is not completely loaded yet, or one that resulted in a loading error. But wouldn't it be nice if it were impossible to call the API functions in the wrong order?

This is exactly what the Rust API does. There is a Loader, to which you give a filename or a stream, and it will return a fully-loaded SvgHandle or an error. Then, you can only create a CairoRenderer if you have an SvgHandle.

For historical reasons, the C API in librsvg is not perfectly consistent. For example, some functions which return an error will actually return a proper GError, but some others will just return a gboolean with no further explanation of what went wrong. In contrast, all the Rust API functions that can fail will actually return a Result, and the error case will have a meaningful error value. In the Rust API, there is no "wrong order" in which the various API functions and methods can be called; it follows the whole "make invalid states unrepresentable" idea.

To implement the Rust API, I had to do some refactoring of the internals that hook to the public entry points. This made me realize that librsvg could be a lot easier to use. The C API has always forced you to call it in this fashion:

  1. Ask the SVG for its dimensions, or how big it is.
  2. Based on that, scale your Cairo context to the size you actually want.
  3. Render the SVG to that context's current transformation matrix.
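
In code, that traditional dance looks roughly like this (a sketch with minimal error handling, assuming you already have a cairo_t and a target width and height):

#include <librsvg/rsvg.h>

/* render an SVG scaled to fit a width x height area on an existing cairo_t */
static gboolean
render_scaled (RsvgHandle *handle, cairo_t *cr, double width, double height)
{
  RsvgDimensionData dim;
  gboolean ok;

  /* (1) ask the SVG how big it thinks it is; whole pixels only */
  rsvg_handle_get_dimensions (handle, &dim);
  if (dim.width <= 0 || dim.height <= 0)
    return FALSE;

  /* (2) scale the Cairo context to the size we actually want */
  cairo_save (cr);
  cairo_scale (cr, width / dim.width, height / dim.height);

  /* (3) render to the context's current transformation matrix */
  ok = rsvg_handle_render_cairo (handle, cr);

  cairo_restore (cr);
  return ok;
}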

But first, (1) gives you inadequate information because rsvg_handle_get_dimensions() returns a structure with int fields for the width and height. The API is similar to gdk-pixbuf's in that it always wants to think in whole pixels. However, an SVG is not necessarily integer-sized.

Then, (2) forces you to calculate some geometry in almost all cases, as most apps want to render SVG content scaled proportionally to a certain size. This is not hard to do, but it's an inconvenience.

SVG dimensions

Let's look at (1) again. The question, "how big is the SVG" is a bit meaningless when we consider that SVGs can be scaled to any size; that's the whole point of them!

When you ask RsvgHandle how big it is, in reality it should look at you and whisper in your ear, "how big do you want it to be?".

And that's the thing. The HTML/CSS/SVG model is that one embeds content into viewports of a given size. The software is responsible for scaling the content to fit into that viewport.

In the end, what we want is a rendering function that takes a Cairo context and a Rectangle for a viewport, and that's it. The function should take care of fitting the SVG's contents within that viewport.

There is now an open bug about exactly this sort of API. In the end, programs should just have to load their SVG handle, and directly ask it to render at whatever size they need, instead of doing the size computations by hand.

When will this be available?

I'm in the middle of a rather large refactor to make this viewport concept really work. So far this involves:

  • Defining APIs that take a viewport.

  • Refactoring all the geometry computation to support the semantics of the C API, plus the new with_viewport semantics.

  • Fixing the code that kept track of an internal offset for all temporary images.

  • Refactoring all the code that mucks around with the Cairo context's affine transformation matrix, which is a big mutable mess.

  • Tests, examples, documentation.

I want to make the Rust API available for the 2.46 release, which is hopefully not too far off. It should be ready for the next GNOME release. In the meantime, you can check out the open bugs for the 2.46.0 milestone. Help is appreciated; the deadline for the first 3.33 tarballs is approximately one month from now!

libinput's internal building blocks

Ho ho ho, let's write libinput. No, of course I'm not serious, because no-one in their right mind would utter "ho ho ho" without a sufficient backdrop of reindeers to keep them sane. So what this post is instead is me writing a nonworking fake libinput in Python, for the sole purpose of explaining roughly how libinput's architecture looks like. It'll be to the libinput what a Duplo car is to a Maserati. Four wheels and something to entertain the kids with but the queue outside the nightclub won't be impressed.

The target audience are those that need to hack on libinput and where the balance of understanding vs total confusion is still shifted towards the latter. So in order to make it easier to associate various bits, here's a description of the main building blocks.

libinput uses something resembling OOP except that in C you can't have nice things unless what you want is a buffer overflow\n\80xb1001af81a2b1101. Instead, we use opaque structs, each with accessor methods and an unhealthy amount of verbosity. Because Python does have classes, those structs are represented as classes below. This all won't be actual working Python code, I'm just using the syntax.

Let's get started. First of all, let's create our library interface.


class Libinput:
    @classmethod
    def path_create_context(cls):
        return _LibinputPathContext()

    @classmethod
    def udev_create_context(cls):
        return _LibinputUdevContext()

    # dispatch() means: read from all our internal fds and
    # call the dispatch method on anything that has changed
    def dispatch(self):
        for fd in self.epoll_fd.get_changed_fds():
            self.handlers[fd].dispatch()

    # return whatever the next event is
    def get_event(self):
        return self._events.pop(0)

    # the various _notify functions are internal API
    # to pass things up to the context
    def _notify_device_added(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.append(device)

    def _notify_device_removed(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.remove(device)

    def _notify_pointer_motion(self, x, y):
        self._events.append(LibinputEventPointer(x, y))


class _LibinputPathContext(Libinput):
    def add_device(self, device_node):
        device = LibinputDevice(device_node)
        self._notify_device_added(device)

    def remove_device(self, device_node):
        self._notify_device_removed(device)


class _LibinputUdevContext(Libinput):
    def __init__(self):
        self.udev = udev.context()

    def udev_assign_seat(self, seat_id):
        self.seat_id = seat_id

        for udev_device in self.udev.devices():
            device = LibinputDevice(udev_device.device_node)
            self._notify_device_added(device)

We have two different modes of initialisation, udev and path. The udev interface is used by Wayland compositors and adds all devices on the given udev seat. The path interface is used by the X.Org driver and adds only one specific device at a time. Both interfaces have the dispatch() and get_events() methods which is how every caller gets events out of libinput.

In both cases we create a libinput device from the data and create an event about the new device that bubbles up into the event queue.

But what really are events? Are they real or just a fidget spinner of our imagination? Well, they're just another object in libinput.


class LibinputEvent:
    @property
    def type(self):
        return self._type

    @property
    def context(self):
        return self._libinput

    @property
    def device(self):
        return self._device

    def get_pointer_event(self):
        if isinstance(self, LibinputEventPointer):
            return self  # This makes more sense in C where it's a typecast
        return None

    def get_keyboard_event(self):
        if isinstance(self, LibinputEventKeyboard):
            return self  # This makes more sense in C where it's a typecast
        return None


class LibinputEventPointer(LibinputEvent):
    @property
    def time(self):
        return self._time / 1000

    @property
    def time_usec(self):
        return self._time

    @property
    def dx(self):
        return self._dx

    @property
    def absolute_x(self):
        return self._x * self._x_units_per_mm

    @property
    def absolute_x_transformed(self, width):
        return self._x * width / self._x_max_value

You get the gist. Each event is actually an event of a subtype with a few common shared fields and a bunch of type-specific ones. The events often contain some internal value that is calculated on request. For example, the API for the absolute x/y values returns mm, but we store the value in device units instead and convert to mm on request.

So, what's a device then? Well, just another I-cant-believe-this-is-not-a-class with relatively few surprises:


class LibinputDevice:
    class Capability(Enum):
        CAP_KEYBOARD = 0
        CAP_POINTER = 1
        CAP_TOUCH = 2
        ...

    def __init__(self, device_node):
        pass  # no-one instantiates this directly

    @property
    def name(self):
        return self._name

    @property
    def context(self):
        return self._libinput_context

    @property
    def udev_device(self):
        return self._udev_device

    @property
    def has_capability(self, cap):
        return cap in self._capabilities

    ...

Now we have most of the frontend API in place and you start to see a pattern. This is how all of libinput's API works, you get some opaque read-only objects with a few getters and accessor functions.

Now let's figure out how to work on the backend. For that, we need something that handles events:


class EvdevDevice(LibinputDevice):
    def __init__(self, device_node):
        self.fd = open(device_node)
        super().context.add_fd_to_epoll(self.fd, self.dispatch)
        self.initialize_quirks()

    def has_quirk(self, quirk):
        return quirk in self.quirks

    def dispatch(self):
        while True:
            data = self.fd.read(input_event_byte_count)
            if not data:
                break

            self.interface.dispatch_one_event(data)

    def _configure(self):
        # some devices are adjusted for quirks before we
        # do anything with them
        if self.has_quirk(SOME_QUIRK_NAME):
            self.libevdev.disable(libevdev.EV_KEY.BTN_TOUCH)

        if 'ID_INPUT_TOUCHPAD' in self.udev_device.properties:
            self.interface = EvdevTouchpad()
        elif 'ID_INPUT_SWITCH' in self.udev_device.properties:
            self.interface = EvdevSwitch()
        ...
        else:
            self.interface = EvdevFallback()


class EvdevInterface:
    def dispatch_one_event(self, event):
        pass


class EvdevTouchpad(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevTablet(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevSwitch(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevFallback(EvdevInterface):
    def dispatch_one_event(self, event):
        ...

Our evdev device is actually a subclass (well, C, *handwave*) of the public device and its main function is "read things off the device node". And it passes that on to a magical interface. Other than that, it's a collection of generic functions that apply to all devices. The interfaces are where most of the real work is done.

The interface is decided on by the udev type and is where the device-specifics happen. The touchpad interface deals with touchpads, the tablet and switch interface with those devices and the fallback interface is that for mice, keyboards and touch devices (i.e. the simple devices).

Each interface has very device-specific event processing and can be compared to the Xorg synaptics vs wacom vs evdev drivers. If you are fixing a touchpad bug, chances are you only need to care about the touchpad interface.

The device quirks used above are another simple block:


class Quirks:
    def __init__(self):
        self.read_all_ini_files_from_directory('$PREFIX/share/libinput')

    def has_quirk(self, device, quirk):
        for file in self.quirks:
            if (quirk.has_match(device.name) or
                quirk.has_match(device.usbid) or
                quirk.has_match(device.dmi)):
                return True
        return False

    def get_quirk_value(self, device, quirk):
        if not self.has_quirk(device, quirk):
            return None

        quirk = self.lookup_quirk(device, quirk)
        if quirk.type == "boolean":
            return bool(quirk.value)
        if quirk.type == "string":
            return str(quirk.value)
        ...

A system that reads a bunch of .ini files, caches them and returns their value on demand. Those quirks are then used to adjust device behaviour at runtime.

The next building block is the "filter" code, which is the word we use for pointer acceleration. Here too we have a two-layer abstraction with an interface.


class Filter:
    def dispatch(self, x, y):
        # converts device-unit x/y into normalized units
        return self.interface.dispatch(x, y)

    # the 'accel speed' configuration value
    def set_speed(self, speed):
        return self.interface.set_speed(speed)

    # the 'accel speed' configuration value
    def get_speed(self):
        return self.speed

    ...


class FilterInterface:
    def dispatch(self, x, y):
        pass


class FilterInterfaceTouchpad(FilterInterface):
    def dispatch(self, x, y):
        ...


class FilterInterfaceTrackpoint(FilterInterface):
    def dispatch(self, x, y):
        ...


class FilterInterfaceMouse(FilterInterface):
    def dispatch(self, x, y):
        self.history.push((x, y))
        v = self.calculate_velocity()
        f = self.calculate_factor(v)
        return (x * f, y * f)

    def calculate_velocity(self):
        total = 0
        for delta in self.history:
            total += delta
        velocity = total / timestamp  # as illustration only
        return velocity

    def calculate_factor(self, v):
        # this is where the interesting bit happens,
        # let's assume we have some magic function
        f = v * 1234 / 5678
        return f

So libinput calls filter_dispatch on whatever filter is configured and passes the result on to the caller. The setup of those filters is handled in the respective evdev interface, similar to this:

class EvdevFallback:
    ...

    def init_accel(self):
        if self.udev_type == 'ID_INPUT_TRACKPOINT':
            self.filter = FilterInterfaceTrackpoint()
        elif self.udev_type == 'ID_INPUT_TOUCHPAD':
            self.filter = FilterInterfaceTouchpad()
        ...

The advantage of this system is twofold. First, the main libinput code only needs one place where we really care about which acceleration method we have. And second, the acceleration code can be compiled separately for analysis and to generate pretty graphs. See the pointer acceleration docs. Oh, and it also allows us to easily have per-device pointer acceleration methods.

Finally, we have one more building block - configuration options. They're a bit different in that they're all similar-ish but only to make switching from one to the next a bit easier.


class DeviceConfigTap:
    def set_enabled(self, enabled):
        self._enabled = enabled

    def get_enabled(self):
        return self._enabled

    def get_default(self):
        return False


class DeviceConfigCalibration:
    def set_matrix(self, matrix):
        self._matrix = matrix

    def get_matrix(self):
        return self._matrix

    def get_default(self):
        return [1, 0, 0, 0, 1, 0, 0, 0, 1]

And then the devices that need one of those slot them into the right pointer in their structs:

class EvdevFallback:
    ...

    def init_calibration(self):
        self.config_calibration = DeviceConfigCalibration()
    ...

    def handle_touch(self, x, y):
        if self.config_calibration is not None:
            matrix = self.config_calibration.get_matrix()
            x, y = matrix.multiply(x, y)

        self.context._notify_pointer_abs(x, y)

And that's basically it, those are the building blocks libinput has. The rest is detail. Lots of it, but if you understand the architecture outline above, you're most of the way there in diving into the details.

March 14, 2019

A little testing

Years ago I started writing Graphene as a small library of 3D transformation-related math types to be used by GTK (and possibly Clutter, even if that didn’t pan out until Georges started working on the Clutter fork inside Mutter).

Graphene’s only requirement is a C99 compiler and a decent toolchain capable of either taking SSE builtins or supporting vectorization on appropriately aligned types. This means that, unless you decide to enable the GObject types for each Graphene type, Graphene doesn’t really need GLib types or API—except that’s a bit of a lie.

As I wanted to test what I was doing, Graphene has an optional build time dependency on GLib for its test suite; the library itself may not use anything from GLib, but if you want to build and run the test suite then you need to have GLib installed.

This build time dependency makes testing Graphene on Windows a lot more complicated than it ought to be. For instance, I need to install a ton of packages when using the MSYS2 toolchain on the CI instance on AppVeyor, which takes roughly 6 minutes each for the 32bit and the 64bit builds; and I can’t build the test suite at all when using MSVC, because then I’d have to download and build GLib as well—and just to access the GTest API, which I don’t even like.


What’s wrong with GTest

GTest is kind of problematic—outside of Google hijacking the name of the API for their own testing framework, which makes looking for it a pain. GTest is a lot more complicated than a small unit testing API needs to be, for starters; it was originally written to be used with a specific harness, gtester, in order to generate a very brief HTML report using gtester-report, including some timing information on each unit—except that gtester is now deprecated because the build system gunk to make it work was terrible to deal with. So, we pretty much told everyone to stop bothering, add a --tap argument when calling every test binary, and use the TAP harness in Autotools.

Of course, this means that the testing framework now has a completely useless output format, and with it, a bunch of default behaviours driven by said useless output format, and we’re still deciding if we should break backward compatibility to ensure that the supported output format has a sane default behaviour.

On top of that, GTest piggybacks on GLib’s own assertion mechanism, which has two major downsides:

  • it can be disabled at compile time by defining G_DISABLE_ASSERT before including glib.h, which, surprise, people tend to use when releasing; thus, you can’t run tests on builds that would most benefit from a test suite
  • it literally abort()s the test unit, which breaks any test harness in existence that does not expect things to SIGABRT midway through a test suite—which includes GLib’s own deprecated gtester harness

To solve the first problem we added a lot of wrappers around g_assert(), like g_assert_true() and g_assert_no_error(), that won’t be disabled depending on your build options and thus won’t break your test suite—and if your test suite is still using g_assert(), you’re strongly encouraged to port to the newer API. The second issue is still standing, and makes running GTest-based test suites under any harness a pain, but especially under a TAP harness, which requires listing the number of tests you’ve run, or that you’re planning to run.

The remaining issues of GTest are the convoluted way to add tests using a unique path; the bizarre pattern matching API for warnings and errors; the whole sub-process API that relaunches the test binary and calls a single test unit in order to allow it to assert safely and capture its output. It’s very much the GLib test suite, except when it tries to use non-GLib API internally, like the command line option parser, or its own logging primitives; it’s also sorely lacking in the GObject/GIO side of things, so you can’t use standard API to create a mock GObject type, or a mock GFile.

If you want to contribute to GLib, then working on improving the GTest API would be a good investment of your time; since my project does not depend on GLib, though, I had the chance of starting with a clean slate.


A clean slate

For the last couple of years I’ve been playing off and on with a small test framework API, mostly inspired by BDD frameworks like Mocha and Jasmine. Behaviour Driven Development is kind of a buzzword, like test driven development, but I particularly like the idea of describing a test suite in terms of specifications and expectations: you specify what a piece of code does, and you match results to your expectations.

The API for describing the test suites is modelled on natural language (assuming your language is English, sadly):

  describe("your data type", function() {
    it("does something", () => {
      expect(doSomething()).toBe(true);
    });
    it("can greet you", () => {
      let greeting = getHelloWorld();
      expect(greeting).not.toBe("Goodbye World");
    });
  });

Of course, C is more verbose than JavaScript, but we can adopt a similar mechanism:

static void
something (void)
{
  expect ("doSomething",
    bool_value (do_something ()),
    to_be, true,
    NULL);
}

static void
greet (void)
{
  const char *greeting = get_hello_world ();

  expect ("getHelloWorld",
    string_value (greeting),
    not, to_be, "Goodbye World",
    NULL);
}

static void
type_suite (void)
{
  it ("does something", do_something);
  it ("can greet you", greet);
}


  describe ("your data type", type_suite);

If only C11 had gotten blocks from Clang, this would look a lot less clunky.

The value wrappers are also necessary, because C is only type safe as long as every type you have is an integer.

Since we’re good C citizens, we should namespace the API, which requires naming this library—let’s call it µTest, in a fit of unoriginality.

One of the nice bits of Mocha and Jasmine is the output of running a test suite:

$ ./tests/general 

  General
    contains at least a spec with an expectation
      ✓ a is true
      ✓ a is not false

      2 passing (219.00 µs)

    can contain multiple specs
      ✓ str contains 'hello'
      ✓ str contains 'world'
      ✓ contains all fragments

      3 passing (145.00 µs)

    should be skipped
      - skip this test

      0 passing (31.00 µs)
      1 skipped


Total
5 passing (810.00 µs)
1 skipped

Or, with colors:

Using colors means immediately taking this more seriously

The colours automatically go away if you redirect the output to something that is not a TTY, so your logs won’t be messed up by escape sequences.

If you have a test harness, then you can use the MUTEST_OUTPUT environment variable to control the output; for instance, if you’re using TAP you’ll get:

$ MUTEST_OUTPUT=tap ./tests/general
# General
# contains at least a spec with an expectation
ok 1 a is true
ok 2 a is not false
# can contain multiple specs
ok 3 str contains 'hello'
ok 4 str contains 'world'
ok 5 contains all fragments
# should be skipped
ok 6 # skip: skip this test
1..6

Which can be passed through to prove to get:

$ MUTEST_OUTPUT=tap prove ./tests/general
./tests/general .. ok
All tests successful.
Files=1, Tests=6,  0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
Result: PASS

I’m planning to add some additional output formatters, like JSON and XML.


Using µTest

Ideally, µTest should be used as a sub-module or a Meson sub-project of your own; if you’re using it as a sub-project, you can tell Meson to build a static library that won’t get installed on your system, e.g.:

mutest_dep = dependency('mutest-1',
  fallback: [ 'mutest', 'mutest_dep' ],
  default_options: ['static=true'],
  required: false,
  disabler: true,
)

# Or, if you're using Meson < 0.49.0
mutest_dep = dependency('mutest-1', required: false)
if not mutest_dep.found()
  mutest = subproject('mutest',
    default_options: [ 'static=true', ],
    required: false,
  )

  if mutest.found()
    mutest_dep = mutest.get_variable('mutest_dep')
  else
    mutest_dep = disabler()
  endif
endif

Then you can make the tests conditional on mutest_dep.found().

µTest is kind of experimental, and I’m still breaking its API in places, as a result of documenting it and trying it out, by porting the Graphene test suite to it. There’s still a bunch of API that I’d like to land, like custom matchers/formatters for complex data types, and a decent way to skip a specification or a whole suite; plus, as I said above, some additional output formatters.

If you have feedback, feel free to open an issue—or a pull request wink wink nudge nudge.

March 13, 2019

Understanding LF's New “Community Bridge”

[ This blog post was co-written by me and Karen M. Sandler, with input from Deb Nicholson, for our Conservancy blog, and that is its canonical location. I'm reposting it here just for the convenience of those who are subscribed to my RSS feed but don't get Conservancy's feed. ]

Yesterday, the Linux Foundation (LF) launched a new service, called “Community Bridge” — an ambitious platform that promises a self-service system to handle finances, address security issues, manage CLAs and license compliance, and also bring mentorship to projects. These tasks are difficult work that typically require human intervention, so we understand the allure of automating them; we and our peer organizations have long welcomed newcomers to this field and have together sought collaborative assistance for these issues. Indeed, Community Bridge's offerings bear some similarity to the work of organizations like Apache Software Foundation, the Free Software Foundation (FSF), the GNOME Foundation (GF), Open Source Initiative (OSI), Software in the Public Interest (SPI) and Conservancy. People have already begun to ask us to compare this initiative to our work and the work of our peer organizations. This blog post hopefully answers those questions and anticipates similar ones.

The first huge difference (and the biggest disappointment for the entire FOSS community) is that LF's Community Bridge is a proprietary software system. §4.2 of their Platform Use Agreement requires those who sign up for this platform to agree to a proprietary software license, and LF has remained silent about the proprietary nature of the platform in its explanatory materials. The LF, as an organization dedicated to Open Source, should release the source for Community Bridge. At Conservancy, we've worked since 2012 on a Non-Profit Accounting Software system, including creating a tagging system for transparently documenting ledger transactions, and various support software around that. We and SPI both now use these methods daily. We also funded the creation of a system to manage mentorship programs, which now runs the Outreachy mentorship program. We believe fundamentally that the infrastructure we provide for FOSS fiscal sponsorship (including accounting, mentorship and license compliance) must itself be FOSS, and developed in public as a FOSS project. LF's own research already shows that transparency is impossible for systems that are not FOSS. More importantly, LF's new software could directly benefit so many organizations in our community, including not only Conservancy but also the many others (listed above) who do some form of fiscal sponsorship. LF shouldn't behave like a proprietary software company like Patreon or Kickstarter, but instead support FOSS development. Generally speaking, all Conservancy's peer organizations (listed above) have been fully dedicated to the idea that any infrastructure developed for fiscal sponsorship should itself be FOSS. LF has deviated here from this community norm by unnecessarily requiring FOSS developers to use proprietary software to receive these services, and also failing to collaborate over a FOSS codebase with the existing community of organizations. LF Executive Director Jim Zemlin has said that he “wants more participation in open source … to advance its sustainability and … wants organizations to share their code for the benefit of their fellow [hu]mankind”; we ask him to apply these principles to his own organization now.

The second difference is that LF is not a charity, but a trade association — designed to serve the common business interest of its paid members, who control its Board of Directors. This means that donations made to projects through their system will not be tax-deductible in the USA, and that the money can be used in ways that do not necessarily benefit the public good. For some projects, this may well be an advantage: not all FOSS projects operate in the public good. We believe charitable commitment remains a huge benefit of joining a fiscal sponsor like Conservancy, FSF, GF, or SPI. While charitable affiliation means there are more constraints on how projects can spend their funds, as the projects must show that their spending serves the public benefit, we believe that such constraints are most valuable. Legal requirements that assure behavior of the organization always benefits the general public are a good thing. However, some projects may indeed prefer to serve the common business interest of LF's member companies rather than the public good, but projects should note such benefit to the common business interest is mandatory on this platform — it's explicitly unauthorized to use LF's platform to engage in activities in conflict with LF’s trade association status. Furthermore, (per the FAQ) only one maintainer can administer a project's account, so the platform currently only supports the “BDFL” FOSS governance model, which has already been widely discredited. No governance check exists to ensure that the project's interests align with spending, or to verify that the maintainer acts with consent of a larger group to implement group decisions. Even worse, (per §2.3 of the Usage Agreement) terminating the relationship means ceasing use of the account; no provision allows transfer of the money somewhere else when projects' needs change.

Finally, the LF offers services that are mainly orthogonal and/or a subset of the services provided by a typical fiscal sponsor. Conservancy, for example, does work to negotiate contracts, assist in active fundraising, deal with legal and licensing issues, and various other hands-on work. LF's system is similar to Patreon and other platforms in that it is a hands-off system that takes a cut of the money and provides minimal financial services. Participants will still need to worry about forming their own organization if they want to sign contracts, have an entity that can engage with lawyers and receive legal advice for the project, work through governance issues, or the many other things that projects often want from a fiscal sponsor.

Historically, fiscal sponsors in FOSS have not treated each other as competitors. Conservancy collaborates often with SPI, FSF, and GF in particular. We refer applicant projects to other entities, including explaining to applicants that a trade association may be a better fit for their project. In some cases, we have even referred such trade-association-appropriate applicants to the LF itself, and the LF then helped them form their own sub-organizations and/or became LF Collaborative Projects. The launch of this platform, as proprietary software, without coordination with the rest of the FOSS organization community, is unnecessarily uncollaborative with our community and we therefore encourage some skepticism here. That said, this new LF system is probably just right for FOSS projects that (a) prefer to use single-point-of-failure, proprietary software rather than FOSS for their infrastructure, (b) do not want to operate in a way that is dedicated to the public good, and (c) have very minimal fiscal sponsorship needs, such as occasional reimbursements of project expenses.

Too many cores

Arming yourself

ARM is important for us. It’s important for IOT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 Softiron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest system in provisional production, 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per-core than the AMD chips – a good 25-50%. Which begged the question “why are our Raspbian builds so much slower?”

Blowing a raspberry

Raspbian is the de-facto main OS for Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on Raspberry Pi.

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.
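
For the curious, setting up such a chroot is essentially a one-liner. A rough sketch (the keyring path, target directory and suite name are illustrative; Raspbian 9 is stretch):

$ sudo debootstrap --arch=armhf \
    --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg \
    stretch /srv/chroot/raspbian-stretch http://raspbian.raspberrypi.org/raspbian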

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset

Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
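
For reference, turning the emulation on looks roughly like this (per the kernel's arm64 legacy instructions documentation, 0 means SIGILL, 1 means emulate, 2 means hardware execution where supported):

# enable kernel emulation of the ARMv8-removed instructions
$ sudo sysctl abi.cp15_barrier=1 abi.setend=1 abi.swp=1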

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones, because I had 64 cores not 8 – and that I could improve performance by running a VM, with only one core in it (so CP15 calls inside that environment would only affect the entire VM, not the rest of the computer).

Escape from the Pie Folk

I already had libvirtd running on all my ARM machines, from a previous fit of “hey one day this might be useful” – and as it happened, it was. I had to grab a qemu-efi-aarch64 package, containing a firmware, but otherwise I was easily able to connect to the system via virt-manager on my desktop, and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), but I was easily able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via emulated GPU.

Because I’m an idiot, I then wasted my time making a Raspbian stock image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our “real” environment as possible, meaning using pbuilder, this simply wasn’t a needed step. The VM’s host OS didn’t need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis more than proven.

Next steps

The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven’t worked out yet. Raspbian workloads are probably at the pivot point between “I should find some amazing way to automate this” and “automation is a waste of time, it’s quicker to set it up by hand”.

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!

NetworkManager 1.16 released, adding WPA3-Personal and WireGuard support

NetworkManager needs no introduction. In the fifteen years since its initial release, it has become the standard network configuration daemon of choice for all major Linux distributions. What, on the other hand, may need some introduction, are the features of its 28th major release.

Ladies and gentlemen, please welcome: NetworkManager-1.16.

Guarding the Wire

Unless you’ve been living under a rock for the last year, there’s a good chance you’ve heard of WireGuard. It is a brand new secure protocol for creating IPv4 and IPv6 Virtual Private Networks. It aims to be much simpler than IPsec, a traditional protocol for the job, hoping to accelerate the adoption and maintainability of the code base.

Unlike other VPN solutions NetworkManager supports, WireGuard tunnelling will be entirely handled by the Linux kernel. This has advantages in terms of performance, and also removes the need for a VPN plugin. We’ve started work on supporting WireGuard tunnels as first-class citizens and once the kernel bits settle, we’ll be ready.

More detail in Thomas’ article.

Wi-Fi goodies

Good Wi-Fi support is probably why many users choose NetworkManager on their laptops, and as always there are improvements in this area too. When wpa_supplicant is new enough, we’re now able to use SAE authentication, as specified by the recent WPA3-Personal standard. This results in better security for password-protected home networks.
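
If you want to try it from the command line, a hedged sketch with nmcli looks something like this (SSID and passphrase made up; it assumes a wpa_supplicant built with SAE support):

$ nmcli connection add type wifi con-name home-wpa3 ifname wlan0 ssid MyHomeAP \
    wifi-sec.key-mgmt sae wifi-sec.psk "correct horse battery staple"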

New NetworkManager adds support for pairing with Wi-Fi Direct (also known as Wi-Fi P2P) capable devices. Read more in an article by Benjamin Berg, author of GNOME Screencast, who also contributed the functionality to NetworkManager.

As usual, there are also improvements to the IWD backend, an alternative to the venerable wpa_supplicant. With NetworkManager 1.16, users of IWD will be able to create Wi-Fi hot spots or take part in Ad-Hoc networks.

Network booting

Starting with the new version, it is possible to run NetworkManager early in boot, prior to mounting the root filesystem. A dracut module will be able to convert network configuration provided on the kernel command line into keyfiles ready to be used by NetworkManager. Once NetworkManager succeeds in bringing up the network, it will terminate, leaving a state file for the real NetworkManager instance to pick up once the system is booted up.
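
To give a flavour of it, the configuration in question is the usual dracut ip= syntax on the kernel command line, which the module turns into a keyfile. For example (addresses and interface name are made up):

rd.neednet=1 ip=192.0.2.10::192.0.2.1:255.255.255.0:buildhost:ens3:none nameserver=192.0.2.53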

This removes some redundancy and makes the network boot both more capable and robust.

Connectivity checks

Finally, new NetworkManager is able to be more precise in assessing connectivity status. Under the right conditions (that basically means systemd-resolved being available, not necessarily as the default), we’re now able to assess connectivity status on a per-device basis and check IPv4 and IPv6 separately.

This will make it possible to prioritize default routes on internet-connected interfaces.

What’s next?

NetworkManager 1.18 is likely to see support for new Wi-Fi features; perhaps DPP and meshing. We’re also removing libnm-glib, since we no longer love it and nobody uses it anymore. Such is life.

What else? You decide! As always, even though patch submissions are what make us the happiest, we also gladly take suggestions. Our issue tracker is open.

Acknowledgements

NetworkManager wouldn’t be what it is without contributions of hundreds of developers and translators worldwide. Here are the brave ones who contributed to NetworkManager since the last stable release: Aleksander Morgado, Andrei Dziahel, Andrew Zaborowski, AsciiWolf, Beniamino Galvani, Benjamin Berg, Corentin Noël, Damien Cassou, Dennis Brakhane, Evgeny Vereshchagin, Francesco Giudici, Frederic Danis, Frédéric Danis, garywill, Iñigo Martínez, Jan Alexander Steffens, Jason A. Donenfeld, Jonathan Kang, Kristjan SCHMIDT, Kyle Walker, Lennart Poettering, Li Song, Lubomir Rintel, luz.paz, Marco Trevisan, Michael Biebl, Patrick Talbert, Piotr Drąg, Rafael Fontenelle, scootergrisen, Sebastien Fabre, Soapux, Sven Schwermer, Taegil Bae, Thomas Haller, Yuri Chornoivan and Yu Watanabe.

Thank you!

March 12, 2019

Reporting problems in Flatpak applications

(Repost from https://abrt.github.io/, pardon any terrible formatting)

 

If you’ve ever experienced a crash in a Flatpak application, you might have noticed that there is no notification coming from ABRT for it, and maybe you even noticed some strange messages in the system journal:

abrt-server[…]: Unsupported container technology

The above appears when ABRT attempts to collect information about the container the binary was detected to have run in (currently only Docker and LXC are supported). For Flatpak applications, we probably get enough information already, so we can just special-case them and do nothing instead.

Unfortunately, getting things like stack traces gets a bit more complicated than that.

For core dumps, I’ve experimented with a quick hack that reads mountinfo from the dump directory[0], creates a new user namespace, mounts the app and runtime directories, and, finally, generates a trace, which is workable enough to be used in a report, even without debug extensions installed. However, this is all until the user decides to update, which would likely invalidate the paths in mountinfo. For that, I should probably explore utilizing OSTree to check out the known commits first.

As for Python exceptions, things don’t really work at all. But, to put things in perspective, this is how it looks right now:

  •  A path configuration file is installed; it contains only an import line for the actual handler
  • Python opens it during initialization, executes all lines that start with import, which, conveniently, causes our code to run and override sys.excepthook

It’s not even close to being elegant, and people are already arguing about removing support (or, rather, coming up with a better alternative) for such things: https://bugs.python.org/issue33944. To support such an implementation, we would likely need to provide a runtime extension, or ask anyone shipping Python to include our handler.

To make things worse, the handler communicates with ABRT using a UNIX socket, which would at the very least require all apps to have host access. A better way would probably be utilizing a D-Bus API, and that would require having a portal, so that, again, we don’t have to punch holes in the sandbox.

Finally, there are still a couple unanswered questions:

What about other kinds of problems? At the moment, I can only think of Java programs as being supported and possibly coming with their own set of challenges.

How do we handle this from the server side? Everything is built around traditional packaging, where components belong to a specific release of a specific operating system (distro). In the end, it could require some rearchitecting in FAF (now ABRT Analytics) and ABRT itself.



[0] – From that we can only infer the OSTree ref and commit. There does exist Flatpak instance data in XDG_RUNTIME_DIR/.flatpak, but we cannot rely on it, due to its garbage-collected nature.

A fwupd client side certificate

In the soon-to-be-released fwupd 1.2.6 there’s a new feature that I wanted to talk about here, if nothing else to be the documentation when people find these files and wonder what they are. The fwupd daemon now creates a PKCS-7 client self-signed certificate at startup (if GnuTLS is enabled and new enough) – which creates the root-readable /var/lib/fwupd/pki/secret.key and world-readable /var/lib/fwupd/pki/client.pem files.

These certificates are used to sign text data sent to a remote server. At the moment, this is only useful for vendors who also have accounts on the LVFS, so that when someone in their QA team tests the firmware update on real hardware, they can upload the firmware report with the extra --sign argument to sign the JSON blob with the certificate. This allows the LVFS to be sure the report upload comes from the vendor themselves, and will in future allow the trusted so-called attestation DeviceChecksums a.k.a. the PCR0 to be set automatically from this report. Of course, the LVFS user needs to upload the certificate to the LVFS to make this work, although I’ve written this functionality and am just waiting for someone to review it.
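
As a rough illustration (the exact sub-commands may vary between fwupd versions), the generated certificate can be inspected with GnuTLS' certtool, and the signed upload is just the usual report command plus the new argument:

$ certtool --certificate-info --infile /var/lib/fwupd/pki/client.pem
$ fwupdmgr report-history --sign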

It’ll take some time for the new fwupd to get included in all the major distributions, but when practical I’ll add instructions for companies using the LVFS to use this feature. I’m hoping that by making it easier to securely set the PCR0, more devices will have the attestation metadata needed to verify that the machine is indeed running the correct firmware and is secure.

Of course, fwupd doesn’t care if the certificate is self-signed or is issued from a corporate certificate signing request. The files in /var/lib/fwupd/pki/ can be set to whatever policy is in place. We can also use this self-signed certificate for any future agent check-in which we might need for the enterprise use cases. It allows us to send data from the client to a remote server and prove who the client is. Comments welcome.

March 11, 2019

Scale 17x – Slides

Talk went well today, you can grab a copy of the slides here.

March 10, 2019

Inspire me, Nautilus!

When I have some free time I like to be creative but sometimes I need a push of inspiration to take me in the right direction.

Interior designers and people who are about to get married like to create inspiration boards by gluing magazine cutouts to the wall.


‘Mood board for a Tuscan Style Interior’ by Design Folly on Flickr

I find a lot of inspiration online, so I want a digital equivalent. I looked for one, and I found various apps for iOS and Mac which act like digital inspiration boards, but I didn’t find anything I can use with GNOME. So I began planning an elaborate new GTK+ app, but then I remembered that I get tired of such projects before they actually become useful. In fact, there’s already a program that lets you manage a collection of images and text! It’s known as Files (Nautilus), and for me it only lacks the ability to store web links amongst the other content.

Then, I discovered that you can create .desktop files that point to web locations, the equivalent of .url files on Microsoft Windows. Would a folder full of URL links serve my needs? I think so!

Nautilus had some crufty code paths to deal with these shortcut files, which were removed in 2018. Firefox understands them directly, so if you set Firefox as the default application for the application/x-desktop file type then they work nicely: click on a shortcut and it opens in Firefox.
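
Setting that default handler is a one-liner; something like this should do it (the desktop-file ID may differ per distribution, or if you use the Flatpak):

$ xdg-mime default firefox.desktop application/x-desktop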

There is no convenient way to create these .desktop files: dragging and dropping a tab from Epiphany will create a text file containing the URL, which is tantalisingly close to what I want, but the resulting file can’t be easily opened in a browser. So, I ended up writing a simple extension that adds a ‘Create web link…’ dialog to Nautilus, accessed from the right-click menu.
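
The files themselves are nothing magic. A minimal hand-written example (folder, name and URL purely illustrative) looks like this:

$ cat > ~/Inspiration/gnome.desktop << 'EOF'
[Desktop Entry]
Type=Link
Name=GNOME
URL=https://www.gnome.org/
EOF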

Now I can use Nautilus to easily manage collections of links and I can mix in (or link to) any local content easily too. Here’s me beginning my ‘inspiration board’ for recipes …



March 08, 2019

Videos and Books in GNOME 3.32

GNOME 3.32 will very soon be released, so I thought I'd go back over a few of the things that happened with some of our content applications.

Videos
First, many thanks to Marta Bogdanowicz, Baptiste Mille-Mathias, Ekaterina Gerasimova and Andre Klapper who toiled away at updating Videos' user documentation since 2012, when it was still called “Totem”, and then again in 2014 when “Videos” appeared.

The other major change is that Videos is available, fully featured, from Flathub. It should play your Windows Movie Maker films, your circular wafers of polycarbonate plastic and aluminium, and your Devolver indie films. No more hunting codecs or libraries!

In the process, we also fixed a large number of outstanding issues, such as accommodating the app menu's planned disappearance, moving the audio/video properties tab to nautilus proper, making the thumbnailer available as an independent module, making the MPRIS plugin work better, and loads, loads more.


Download on Flathub

Books

As Documents was removed from the core release, we felt it was time for Books to become independent. And rather than creating a new package inside a distribution, the Flathub version was updated. We also fixed a bunch of bugs, so that's cool :)
Download on Flathub

Weather

I didn't work directly on Weather, but I made some changes to libgweather which means it should be easier to contribute to its location database.

Adding new cities doesn't require adding a weather station by hand any more; it will just pick the closest one. Weather stations also don't need to be attached to cities either; they were usually attached to villages, sometimes hamlets!

The automatic tests are also more stringent, and test for more things, which should hopefully mean fewer bugs.

And even more Flatpaks

On Flathub, you'll also find some applications I packaged up in the last 6 months. First is Teo, a Thomson emulator; GBE+, a Game Boy emulator focused on accessory emulation; and a way to run your old Flash games offline.

Bootstrapping RHEL 8 support on mono-project.com

Preamble

On mono-project.com, we ship packages for Debian 8, Debian 9, Raspbian 8, Raspbian 9, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, RHEL/CentOS 6, and RHEL/CentOS 7. Because this is Linux packaging we’re talking about, making one or two repositories to serve every need just isn’t feasible – incompatible versions of libgif, libjpeg, libtiff, OpenSSL, GNUTLS, etc, mean we really do need to build once per target distribution.

For the most part, this level of “LTS-only” coverage has served us reasonably well – the Ubuntu 18.04 packages work in 18.10, the RHEL 7 packages work in Fedora 28, and so on.

However, when Fedora 29 shipped, users found themselves running into installation problems.

I was not at all keen on adding non-LTS Fedora 29 to our build matrix, due to the time and effort required to bootstrap a new distribution into our package release system. And, as if in answer to my pain, the beta release of Red Hat Enterprise Linux 8 landed.

Cramming a square RPM into a round Ubuntu

Our packaging infrastructure relies upon a homogeneous pool of Ubuntu 16.04 machines (x64 on Azure, ARM64 and PPC64el on-site at Microsoft), using pbuilder to target Debian-like distributions (building i386 on the x64 VMs, and various ARM flavours on the ARM64 servers); and mock to target RPM-like distributions. So in theory, all I needed to do was drop a new RHEL 8 beta mock config file into place, and get on with building packages.
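
In other words, once the config file exists, a build is kicked off with something like the following (the config and package names here are just illustrative):

$ mock -r rhel-8-beta-x86_64 --rebuild mono-5.18.0-1.src.rpm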

Just one problem – between RHEL 7 (based on Fedora 19) and RHEL 8 (based on Fedora 28), the Red Hat folks had changed package manager, dropping Yum in favour of DNF. And mock works by using the host distribution’s package manager to perform operations inside the build root – i.e. yum.deb from Ubuntu.

It’s not possible to install RHEL 8 beta with Yum. It just doesn’t work. It’s also not possible to update mock to $latest and use a bootstrap chroot, because reasons. The only options: either set up Fedora VMs to do our RHEL 8 builds (since they have DNF), or package DNF for Ubuntu 16.04.

For my sins, I opted for the latter. It turns out DNF has a lot of dependencies, only some of which are backportable from post-16.04 Ubuntu. The dependency tree looked something like:

  •  Update mock and put it in a PPA
    •  Backport RPM 4.14+ and put it in a PPA
    •  Backport python3-distro and put it in a PPA
    •  Package dnf and put it in a PPA
      •  Package libdnf and put it in a PPA
        •  Backport util-linux 2.29+ and put it in a PPA
        •  Update libsolv and put it in a PPA
        •  Package librepo and put it in a PPA
          •  Backport python3-xattr and put it in a PPA
          •  Backport gpgme1.0 and put it in a PPA
            •  Backport libgpg-error and put it in a PPA
        •  Package modulemd and put it in a PPA
          •  Backport gobject-introspection 1.54+ and put it in a PPA
          •  Backport meson 0.47.0+ and put it in a PPA
            •  Backport googletest and put it in a PPA
        •  Package libcomps and put it in a PPA
    •  Package dnf-plugins-core and put it in a PPA
  •  Hit all the above with sticks until it actually works
  •  Communicate to community stakeholders about all this, in case they want it

This ended up in two PPAs – the end-user usable one here, and the “you need these to build the other PPA, but probably don’t want them overwriting your system packages” one here. Once I convinced everything to build, it didn’t actually work – a problem I eventually tracked down and proposed a fix for here.

All told it took a bit less than two weeks to do all the above. The end result is, on our Ubuntu 16.04 infrastructure, we now install a version of mock capable of bootstrapping DNF-requiring RPM distributions, like RHEL 8.

RHEL isn’t CentOS

We make various assumptions about package availability, which are true for CentOS, but not RHEL (8). The (lack of) availability of the EPEL repository for RHEL 8 was a major hurdle – in the end I just grabbed the relevant packages from EPEL 7, shoved them in a web server, and got away with it. The second is structural – for a bunch of the libraries we build against, the packages are available in the public RHEL 8 repo, but the corresponding -devel packages are in a (paid, subscription required) repository called “CodeReady Linux Builder” – and using this repo isn’t mock-friendly. In the end, I just grabbed the three packages I needed via curl, and transferred them to the same place as the EPEL 7 packages I grabbed.
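
The "web server" part really is as unsophisticated as it sounds; roughly speaking (paths illustrative):

$ mkdir -p /var/www/html/rhel8-deps
$ cp epel7-and-crb-pkgs/*.rpm /var/www/html/rhel8-deps/
$ createrepo_c /var/www/html/rhel8-deps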

Finally, I was able to begin the bootstrapping process.

RHEL isn’t Fedora

After re-bootstrapping all the packages from the CentOS 7 repo into our """CentOS 8""" repo (we make lots of naming assumptions in our control flow, so the world would break if we didn't call it CentOS), I tried installing on Fedora 29, and… Nope. Dependency errors. Turns out there are important differences between the two distributions. The main one is that any package with a Python dependency is incompatible, as the two handle Python paths very differently. Thankfully, the diff here was pretty small.

The final, final end result: we now do every RPM build on CentOS 6, CentOS 7, and RHEL 8. And the RHEL 8 repo works on Fedora 29.

MonoDevelop 7.7 on Fedora 29.

The only errata: MonoDevelop’s version control addin is built without support for ssh+git:// repositories, because RHEL 8 does not offer a libssh2-devel. Other than that, hooray!

March 07, 2019

Purism's PureOS is convergent

Three years ago on April 2, 2016, I wrote an article about A brief history of user interfaces. In that article, I followed up on another essay of mine about visual brand and user experience, where I introduced the concept of breaking down a user interface into component parts, as a way to identify the distinctive features that create a "visual brand."

At the end of my article, I made a comment that desktop and mobile operating systems were converging:
Today, computers are more than a box with a monitor, keyboard, and mouse. We use smartphones and tablets alongside our desktop and laptop computers. In many cases, "mobile" (phones and tablets) displace the traditional computer for many tasks. I think it's clear that the mobile and desktop interfaces are merging. Before too long, we will use the same interface for both desktop and mobile.

The key to making this work is a user interface that truly unifies the platforms and their unique use cases. We aren't quite there yet, but GNOME 3 and Windows 10 seem well positioned to do so. I think MacOS X and iOS (Apple's mobile platform) feature similar interfaces without uniting the two. Perhaps Apple's is a better strategy, to provide a slightly different user interface based on platform. I think it will be interesting to see this area develop and improve.
This is a similar sentiment to a comment I made on another blog in 2013 about the future of technology. In that article, I theorized how technology might change over time, proposing that our phones might soon substitute for a desktop; just plug in your phone to a keyboard and display, and you can continue your work:
What about five years from now? How will technology inherit the future? What devices will we use at that time? The convergence of mobile devices and laptops seems likely. Some vendors have experimented in this space, with mixed success. It seems a matter of time until someone strikes the right balance, and this new device becomes the next "must-have" technology that displaces even the iPad. …

While the market seems unwilling to adopt this device today, we may in five years consider it obvious that our computer fits in our pocket, as a phone, ready to be docked to a keyboard and monitor for more traditional "desktop" computing.
Well, the future is now. The convergence of mobile devices and laptops is happening. Jeremiah Foster, Director PureOS at Purism, writes in Many Devices, One OS that "Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release."

PureOS (which is really GNOME apps on top of the Linux kernel) now sports an Adaptive Design. That means the application can rearrange the user interface to suit the display device it is run on. Take a web browser, for example. On a desktop, you might place UI controls at the top - typical for most desktop UI designs. But on a mobile device, such as a phone, it may be better to move the UI controls elsewhere, such as to the bottom.

Think of Adaptive Design as the application equivalent to Responsive Web Design. A website that uses Responsive Web Design might collapse a list of navigation links to a menu when viewed on a narrow display device, such as a phone. Or the website might rearrange or resize some content (such as images) to better suit a smaller display. Almost every modern website leverages Responsive Web Design.

Congratulations to the folks at Purism for their work in Adaptive Design. Foster comments that Adaptive Design means Purism can re-use PureOS currently used on their 13" and 15" laptops and leverage it to run their 5" phones, coming soon. That's good news for Purism, but it's also a great step forward for technology innovation.

Flicker Free Boot FAQ

There have been questions about the Fedora 30 Flicker Free Boot Change in various places, here is a FAQ which hopefully answers most questions:

1) I get a black screen for a couple of seconds during boot?

1a) If you have an AMD or Nvidia GPU driving your screen, then this is normal. The graphics drivers for AMD and Nvidia GPUs reset the hardware when loading, which will cause the display to temporarily go black. There is nothing which can be done about this.

1b) If you have a somewhat older Intel GPU (your CPU is pre Skylake), then the i915 driver's support for skipping the mode-reset is disabled by default (for now). To fix this, add "i915.fastboot=1" to your kernel commandline. For more info on modifying the kernel cmdline, see question 6.

1c) Do "ls /sys/firmware/efi/efivars" if you get a "No such file or directory" error then your system is booting in classic BIOS mode instead of UEFI mode, to fix this you need to re-install and boot the livecd/installer in UEFI mode when installing. Alternatively you can try to convert your existing install, note this is quite tricky, make backups first!

1d) Your system may be using the classic VGA BIOS during boot despite running in UEFI mode. Often you can select BIOS compatibility mode in your BIOS settings, aka the CSM setting. If you can select this on a per-component level, set the VIDEO/VGA option to "UEFI only" or "UEFI first"; alternatively you can try completely disabling the CSM mode.

2) I get a grey-background instead of the firmware splash while Fedora is booting?

Do "ls /sys/firmware/acpi/bgrt" if you get a "No such file or directory" error then try answers 1c and 1d . If you do have a /sys/firmware/acpi/bgrt directory, but you are still getting the Fedora logo + spinner on a grey background instead of on top of the firmware-splash, please file a bug about this and drop me a mail with a link to the bug.

3) Getting rid of the vendor-logo/firmware-splash being shown while Fedora is booting?

If you don't want the firmware-splash to be used as background during boot, you can switch plymouth to the spinner theme, which is identical to the new bgrt theme except that it does not use the firmware-splash as background. To do this, execute the following command from a terminal:
"sudo plymouth-set-default-theme -R spinner"

4) Keeping the firmware-splash as background while unlocking the disk?

If you prefer this, it is possible to keep the firmware-splash as background while the diskcrypt password is shown. To do this do the following:

  1. "sudo mkdir /usr/share/plymouth/themes/mybgrt"

  2. "sudo cp /usr/share/plymouth/themes/bgrt/bgrt.plymouth /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth"

  3. edit /usr/share/plymouth/themes/mybgrt/mybgrt.plymouth, change DialogClearsFirmwareBackground=true to DialogClearsFirmwareBackground=false, change DialogVerticalAlignment=.382 to DialogVerticalAlignment=.6

  4. "sudo plymouth-set-default-theme -R mybgrt"

Note that if you do this, the disk-passphrase entry dialog may be partially drawn over the vendor-logo part of the firmware-splash; if this happens, try increasing DialogVerticalAlignment to e.g. 0.7.

5) Get detailed boot progress instead of the boot-splash?

To get detailed boot progress info press ESC during boot.

6) Always get detailed boot progress instead of the boot-splash?

To always get detailed boot progress instead of the boot-splash, remove "rhgb" from your kernel commandline:

Edit /etc/default/grub and remove rhgb from GRUB_CMDLINE_LINUX and then if you are booting using UEFI (see 1c) run:
"grub2-mkconfig -o /etc/grub2-efi.cfg"
else (if you are booting using classic BIOS boot) run:
"grub2-mkconfig -o /etc/grub2.cfg".

March 06, 2019

Nvidia drivers in Fedora Silverblue

I really like how Fedora Silverblue combines the best of atomic, image-based updates and local tweaking with its package layering idea.

However, one major issue many people have had with it is support for the NVIDIA drivers. Given they are not free software they can’t be shipped with the image, so one imagines package layering would be a good way to install them. In theory this works, but unfortunately it often runs into issues, because frequent kernel updates mean there is often no pre-built nvidia module for your particular kernel/driver version.

In a normal Fedora installation this is handled by something called akmods. This is a system where the kernel modules ship as sources which get automatically rebuilt on the target system itself when a new kernel is installed.

Unfortunately this doesn’t quite work on Silverblue, because the system image is immutable. So, I’ve been working recently on making akmods work in Silverblue. The approach I’ve taken is to have the modules built during the rpm-ostree update command (in the %post script) and the output of that integrated into the newly constructed image.

Last week the final work landed in the akmods and kmodtools packages (currently available in updates-testing), which means that anyone can easily experiment with akmods, including the nvidia drivers.

Preparing the system

First we need the latest of everything:

$ sudo rpm-ostree update

The required akmods packages are in updates-testing at the moment, so we’ll enable that for now:

$ sudo vi /etc/yum.repos.d/fedora-updates-testing.repo
... Change enabled to 1 ..

Then we add the rpmfusion repository:

$ sudo rpm-ostree install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-29.noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-29.noarch.rpm

At this point you need to reboot into the new ostree image to enable installation from the new repositories.

$ systemctl reboot

Installing the driver

The akmod-nvidia package in the current rpm-fusion is not built against the new kmodtools, so until it is rebuilt it will not work. This is a temporary issue, but I built a new version we can use until it is fixed.

To install it, and the driver itself we do:

$ sudo rpm-ostree install http://people.redhat.com/alexl/akmod-nvidia-418.43-1.1rebuild.fc29.x86_64.rpm xorg-x11-drv-nvidia

Once the driver in rpm-fusion is rebuilt the custom rpm should not be necessary.

We also need to blacklist the built-in nouveau driver to avoid driver conflicts:

$ sudo rpm-ostree kargs --append=rd.driver.blacklist=nouveau --append=modprobe.blacklist=nouveau --append=nvidia-drm.modeset=1

Now you’re ready to boot into your fancy new silverblue nvidia experience:

$ systemctl reboot

What about Fedora 30/Rawhide?

All the changes necessary for this to work have landed, but there is no Fedora 30 Silverblue image yet (only a rawhide one), and the rawhide kernel is built with mutex debugging which is not compatible with the nvidia driver.

However, the second we have a Fedora 30 Silverblue image with a non-debug kernel the above should work there too.

Scale 17x

A reminder that I’ll be speaking at Scale 17x in Pasadena on Sunday, March 10th about all the cool stuff we’ve been doing in Builder and how that plays into the role of modernizing our development stack.

Also, Sri is speaking. Matthias too.

March 05, 2019

Outreachy GNOME usability testing wrap-up

The December-March cycle of Outreachy has finished, and I wanted to do a quick recap of the work from our intern, Clarissa.

As I mentioned when we started week 1 GNOME usability testing, most of our work in the internship was testing designs that haven't gone "live" yet (this is called "prototype usability testing"). Allan and Jakub created mock-ups of new designs, and Clarissa did usability testing on them.

That means a lot of the applications were still being worked on, and were often delivered in Flatpak format since these "in development" versions were not part of a systemwide release (which would have likely been included natively in a Linux distribution somewhere).

As a result, we ran this cycle of usability testing in a more "loose" fashion than we have in previous cycles. We didn't have a long run-up to usability testing, where we could take our time learning about usability testing and carefully constructing new scenario tasks. Rather, Allan provided an overview of what he was hoping to get out of each usability test (how do users respond to this new feature, do they interact with the user interface differently, etc?) and Clarissa had to quickly assemble scenario tasks based on that. I helped with focus and wording on the scenario tasks.

We didn't expect anyone in the internship to come with previous experience, but we needed someone who could learn quickly. We expected that the intern would "learn as you go" and Clarissa did a great job with that!

Clarissa ran three usability tests, which she describes in her Final internship report on her blog. The three tests were:
  1. A new GNOME Sound Settings design (see results)
  2. New designs for GNOME Files and GNOME Notes (see results)
  3. Updated design for GNOME gedit (see results)
In Clarissa's Final internship report, she also shares some thoughts and lessons learned from the internship. Please read Clarissa's blog for her takeaways, but I wanted to highlight this one:
A bigger number of testers does not always give you more precise results. When I applied to the internship, I tried to look for the bigger amount of volunteers as I could because I thought it would bring me better results and I would have a better contribution, and, consequently, I would be chosen for the internship (hehe :P). On the first week of the internship I studied with the help of some articles that Jim sent me and I discovered that the time you spend running tests with more testers than you actually need can be spent with writing results faster and giving the design team more time to work on new designs. I discovered that it was true on the first round, when I ran tests with 7 volunteers, when I needed only 5. I wrote about how many testers do we need here.
I've highlighted the important bit.

You don't need very many testers to get useful results. This is especially important if you are doing iterative testing: create a design, test it, tweak the design based on results, test it again, tweak the design again, test it again, final tweaks based on results. You can learn enough about the design by using only five testers. If each tester exercises about 31% of usability problems, after five testers you've uncovered about 85% of the issues (1 − (1 − 0.31)^5 ≈ 0.84). That's enough to make changes to the design.

Doing more tests with more testers doesn't really get you much further down the road. Do each round of usability testing with about five testers and you'll be fine. And that's exactly what Clarissa found.

I'm very proud of Clarissa for her work in this cycle of Outreachy. She did great work and provided many useful results that I'm sure the Design team will be able to use to tweak future designs. That's why we do usability testing.

On a personal note, it was great to stay involved in GNOME and be part of usability testing. I believe being a mentor is a valuable experience. If you haven't mentored an intern as part of your work in open source software, I encourage you to do so. Outreachy is a wonderful opportunity because it provides paid internships for women and other underrepresented groups to work in open source software.

But there are other ways to mentor someone. Find an outlet that works for you, and bring someone under your wing. By helping others get involved in open source software, we make the entire open source community stronger.

Testing Discourse for GTK

For the past 20 years or so, GTK used IRC and mailing lists for discussions related to the project. Over the years, use of email for communication has declined, and the overhead of maintaining the infrastructure has increased; sending email to hundreds or thousands of people has become increasingly indistinguishable from spam, in the eyes of service providers, and GNOME had to try and ask for exceptions—which are not easy to get, and are quite easily revoked. On top of that, the infrastructure in use for managing mailing lists is quite old and crumbly, and it’s unnecessarily split into various sub-categories that make following discussions harder than necessary.

After discussions among the GTK team, with the GNOME infrastructure maintainers, and with the GTK community at large, we decided to start a trial run of Discourse as a replacement for mailing lists, first and foremost, and as a way to provide an official location for the GTK community to discuss the development of, and with, GTK—as well as the rest of the core GNOME platform: GLib, Pango, GdkPixbuf, etc.

You can find the Discourse instance on discourse.gnome.org. On it, you can use the Platform and Core categories for discussions about the core GNOME platform; you can use the appropriate tags for your topics, and subscribe to the ones you’re interested in.

We’re planning to move some of the pages on the wiki to Discourse as well, especially the ones where we expect feedback from the community.

We’re still working on how to migrate users of the various mailing lists related to GTK, in order to close the lists and have a single venue instead of splitting the community; in the meantime, if you’re subscribed to one or more of these lists:

  • gtk-devel-list
  • gtk-app-devel-list
  • gtk-list
  • gtk-i18n-list

then you may want to have a look at Discourse, and join the discussions there.

User account fallback images in GNOME 3.32

Your face might resemble this one on the left (avatar-default), but then so could that of pretty much everyone else using the same computer as you. With this in mind, we introduced a small feature in GNOME 3.32 that intends to make it easier for users to identify themselves in a list of system users, such as in the login screen or in Settings.

From now on, GNOME won’t set the “avatar-default” icon for users created in the Initial Setup or in Settings. It will create a colourful image with the user’s initials on it.

The colour palette is the same one used in the new icon guidelines (if you haven’t heard yet, we are now living through a Big App Icon Revolution in GNOME!). User names (full names) are mapped to colours in the palette, and are therefore consistent everywhere you enter the exact full user name. So get used to your colour!

Nothing else about the user image setup is going to change. You still can:

  1. Select a picture with a file chooser.
  2. Take a picture with your webcam.
  3. Select one of the GNOME stock avatars.

Another detail that came with these changes is that now user images will be rounded everywhere in GNOME. These efforts are part of the “Consistent user images across GNOME” initiative.

User Accounts panel in Settings
GNOME Initial Setup
Login screen

GNOME 3.32.0 is coming out next week! o/

March 04, 2019

Fedora in Fediverse

fedora+fediverse

I really like how Fediverse is shaping up and its federation is starting to make sense to me. It’s not federation for the sake of federation and running different instances of the same service, but about a variety of different services that focus on different things, cater to different users, and yet are able to talk to each other.

There is Mastodon for microblogging, Friendica for a Facebook-style social network, PeerTube for videos, PixelFed for pictures, Nextcloud Social for making a social network out of your private cloud etc.

The number of users is also growing, already into the millions, so it’s becoming an interesting platform for promotion. There are quite a few open source projects already present: GNOME, KDE, openSUSE, Ubuntu, Nextcloud, Debian, F-Droid… And I’ve seen quite a few Fedora contributors scattered across different instances.

So I thought: why not have a Fedora instance there? It would be a place where Fedora contributors and enthusiasts can have their fediverse homes and create a community within a community, and where the Project can have official accounts for posting, just like on Twitter or Facebook. The Fedora Code of Conduct would be enforced, so that people can feel safe.

I think the most suitable service for that would be Mastodon: microblogging is generally the most popular activity in the Fediverse, and Mastodon has very active development and seems to be the most mature option. There is also a “Mastodon as a Service” hosting provider called masto.host. The domain could be e.g. fedora.social.

After proposing it on Fedora Discourse and talking to several people, I think the best approach would be to start it as a personal initiative and, if it gets traction, hand it over to the Project. That means I’d have to cover the hosting costs and rely on donations for the time being, because that’s the price of a social network where you’re not the product being sold.

But it would all go in vain if there was no demand for it. So I wonder:

  • Would it make you start using Fediverse/Mastodon?
  • Would it make you switch from another instance? (Mastodon allows you to transfer your data)
  • Would you be willing to take a more active role (admin, moderator…)?
  • Would you make a small contribution to help cover costs of running the service?

If you answer yes to any of the questions, let me know. I’d love to know if there is some demand for this or even people willing to help with it.

Resource Scale for Fractional Scaling support in GNOME Shell 3.32

The fractional scaling era for GNOME Shell has finally arrived!

The news spread quite quickly once Jonas pressed the button last Friday, triggering the last-second merges of the proposals we had prepared for Mutter and GNOME Shell in order to get this into GNOME 3.32.

As some might recall, we started this work some years ago (ouch!), which led to a hackfest in Taipei, but other work and priorities delayed it a bit, even though the first iteration had been ready for quite some time. At every review we improved things, fixing bugs (like missing scaled widgets) and optimizing some code paths, so hopefully the extra time helped us deliver better quality :).

We still have quite a lot of work to do (see the open issues for Mutter and GNOME Shell) and some fixes already in the queue, but the main task is there. Starting from now, the shell will paint all its elements properly and in good visual quality at any fractional scale value, and independently for every monitor.

Multi-monitor fractional scaling

Monitors with different scaling values, and a window drawn in between the two

As you might have noticed in the screenshot above, X11 apps are still not scaled with full quality. While that’s not possible for all of them (like xterm there), we need to work on a solution that covers the legacy applications which do support scaling, and at the same time those which don’t want to be scaled at all (games!).

As said above, this feature is still considered experimental, so you need to enable it via:

gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

Doing this will allow you to choose from a wider set of scaling values in the Control Center’s Displays panel.
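If you later want to check what is currently enabled, or go back to the default behaviour, the standard gsettings subcommands should do the job (nothing here is specific to this feature):

gsettings get org.gnome.mutter experimental-features
gsettings reset org.gnome.mutter experimental-features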

As for extensions, most of them should work with no changes, but not those using St.TextureCache directly, as we changed the method signatures by adding a resource_scale parameter.
We discussed whether to add new methods instead (as GIR doesn’t support default values), but since 3.32 will require a rewrite of most extensions anyway, and since it’s better for them to behave properly with resource scale from the beginning (instead of showing blurred contents), we decided not to.
So, please sync with this change (and sorry :)).

Putting on my Ubuntu hat now: this won’t change much for the default Ubuntu experience, since it’s still using X11 (although I have something in the works for that too), but people who want to take advantage of this can simply log in using the Ubuntu on Wayland session, enable the experimental setting and profit.

As a final word, thanks to the people who helped get this in by reviewing and testing the code.

BuildStream news and 2.0 planning

Hi all,

It has been a very long time since my last BuildStream-related post, and there has been a huge amount of movement since then, including the initial inception of our website, the 1.2 release, the beginnings of BuildGrid, and several hackfests, including one in Manchester hosted by Codethink, and another one in London followed by the Build Meetup, both hosted by Bloomberg at their London office.

BuildStream Gathering (BeaverCon) January 2019 group photo

As a new year’s resolution, I should commit to blogging about upcoming hackfests as soon as we decide on dates; sorry I have been failing at this !

Build Meetup

While the other BuildStream-related hackfests were very eventful (each of them with at least 20 attendees and jam-packed with interesting technical and planning sessions), the Build Meetup in London is worth a special mention, as it marks the beginning of a collaborative community of projects in the build space.

Developers of Bazel, BuildStream & BuildGrid and Buildbarn (and more!) united in London to discuss our wide variety of requirements for the common remote execution API specification, had various discussions on how we can improve the performance and interoperability of our tooling in distributed build scenarios and, most importantly, had an opportunity to meet in person and drink beer together !

BuildStream 2.0 planning

As some may have noticed, we did not release BuildStream 1.4 in the expected timeframe, and the big news is that we won’t be releasing it at all.

After much deliberation and hair pulling, we have decided to focus on a new BuildStream 2.0 API.

Why ?

The reasons for this are multifold, but mainly:

  • The project has received much more interest and funding than anticipated; as such we currently have a staggering number of contributors working full time on BuildStream & BuildGrid combined. Considering the magnitude of features and changes we want to make, committing to a 6 month stable release cycle for this development is causing too much friction.
  • A majority of contributors would prefer to refine and perfect the command line interface rather than commit to the interface we initially had, and indeed the interface will be more comprehensive and intuitive after the new functionality is added. These changes will not, however, be backwards compatible.
  • With a long-term plan to make it easier for external plugins to be developed, along with a plan to split plugins out into more sensible separate repositories, we also anticipate some unavoidable API breakages to the format related to plugin loading.

To continue with the 1.x API while making the changes we want to make would be dishonest and just introduce unexpected breakages.

What are we planning ?

Some highlights of what we are working on for future BuildStream 2.x include:

Remote execution support

A huge amount of the ongoing work is towards supporting distributed build scenarios and interoperability with other tooling on server clusters which implement the remote execution API.

This means:

  • Element builds which occur on distributed worker machines
  • Ability to implement a Bazel element which operates on the same build servers and share the same caching mechanics
  • Ability to use optimizations such as RECC which interact with the same servers and caching mechanics

Optimizing for large-scale projects

While performance of BuildStream 1.x is fairly acceptable when working with smaller projects with 500-1000 elements, it falls flat on its face when processing larger projects with ~50K elements or more.

New commands for interacting with artifacts

For practical purposes, we have been getting along well enough without the ability to list the artifacts in your cache or examine details about specific artifacts, but the overall experience will be greatly enriched with the ability to specify and manipulate artifacts (rather than elements) directly on the command line.

Stability of artifact cache keys

This is a part of BuildStream which has never been stable (yet), meaning that any minor point upgrade (e.g. from 1.0 -> 1.2) results in a necessary rebuild of all of a given project’s artifacts.

While this is not exactly on the roadmap yet, there is a growing interest in all contributing parties to make these keys stable, so I have an expectation that this will stabilize before any 2.0 release.

Compatibility concerns

BuildStream 2.x will be incompatible with 1.x; as such we will need to guarantee safety to users of either 1.x or 2.x, probably in the form of ensuring parallel installability of BuildStream itself, ensuring that the correct plugins are always loaded for the correct version, and ensuring that 1.x and 2.x clients do not cross-contaminate each other in any way.

While there are no clear commitments about just how much the 2.x API will resemble the 1.x API, we are committed to providing a clear migration guide for projects which need to migrate, and we have a good idea what is going to change.

  • The command line interface will change a lot. We consider this to be the least painful for users to adapt to, and as there are a lot of things which we want to enhance, we expect to take a lot of liberties in changing the command line interface.
  • The plugin-facing Python API will change; we cannot be sure by how much. One important change is that plugins will not interact directly with the filesystem but instead must use an abstract API. There are not that many plugins out in the wild, so we expect that this migration will not be very painful for end users.
  • The YAML format will change the least. We recognize that this is the most painful API surface to change and as such we will try our best to not change it too much. Not only will we ensure that a migration process is well documented, but we will make efforts to ensure that such migrations require minimal effort.

We have also made some commitment to testing BuildStream 2.x on a continuous basis and ensuring that we can build the freedesktop-sdk project with the new BuildStream at all times.

Schedule

We have not committed to any date for the 2.0 release as of yet, and we explicitly do not have any intention to commit to a date until such a time that we are comfortable with the new API and that our distributed build story has sufficiently stabilized. I can say with confidence that this will certainly not be in 2019.

We will be releasing development snapshots regularly while working towards 2.0, and these will be released in the 1.90.x range.

In the meantime, as far as I know there are no features which are desperately needed by users right now, and as long as bugs are identified and fixed in the 1.2.x line, we will continue to issue bugfix releases.

We expect that this will provide the safety required for users while also granting us the freedom we need to develop towards the new 2.0.

FOSSASIA 2019

Often, a conference is an opportunity to visit exotic and distant locations like Europe, and maybe we forget to check out the exciting conferences closer to home.

In this spirit, I will be giving another BuildStream talk at FOSSASIA this year. This conference has been running strong since 2009, takes place in Singapore and lasts four days including talks, workshops, an exhibition and a hackathon.

I’m pretty excited to finally see this for myself and would like to encourage people to join me there !

February 28, 2019

Memories of #LinuxInEdinburgh

1. Prior to the event

1.1. Organization

It took about a month of preparation to run the event. Thanks to the support of classmates from the HPC master’s programme who agreed to share their knowledge of using Linux with supercomputers and the scientific experience they have acquired over the years.

1.2. Helpers

Efforts were made on some decoration work, as well as on the technical side of the event (e.g. copying the ISO images, VirtualBox for Macs / Windows, code snippets).

1.3. Publicity

Official authorization was needed to post the advertising related to the event. Thanks to the Edinburgh University Students’ Association for the help with this.

2. During the event

2.1. Lunch 

Speakers and volunteers gathered together at noon to share lunch at Mosque Kitchen 😀
2.2. Talks

As planned, the event happened on February 27th in Appleton Tower Lecture Theatre 4. I started by presenting two free software projects: the Fedora OS project and the Linux UI, GNOME. I also explained how Linux was born, the beginnings of Fedora and GNOME, what GUADEC is, and the communities around these projects. Then we installed Fedora 29 in a virtual machine, and later Jim Walker installed some applications that students usually use, such as Atom, Spotify, and LaTeX. Python and C were the programming languages chosen for this Linux event. We had a last-minute speaker from Red Hat Czech Republic: at only 19 years old, Marcel Plch took a plane, came to give a great talk about Python on Fedora, and brought a lot of gifts 🙂 Andreas Hadjigeorgiou from Cyprus was in charge of the C programming talk for around an hour. He started by explaining basic concepts like libraries and loops, and moved on to more complex topics such as arrays, pointers, structs and functions.

2.3. Fedora and GNOME party

We spent an hour sharing cake and coke. This was sponsored by Fedora and GNOME!

2.4. Picture frame 

These are some of the photos we took with the frame. I am glad that I was able to introduce both projects, Fedora and GNOME, which were not well known to some people.

3. After the event

3.1. Thoughts 

Attendees were able to express their feelings about this event, and we are going to take them into consideration to improve upcoming events. It was great to meet amazing new people. My motivation for doing this event was to give back for what Linux has done for me in my personal and professional life, and to acknowledge the work of Richard Stallman, with his ideology of sharing code for the benefit of everyone, as well as Linus Torvalds’ thought: “Linux succeeded thanks to selfishness and trust”. Special thanks to Fedora and GNOME for sponsoring this part.

3.2. Networking

I would like to thank Marcel again for travelling to Edinburgh; the effort and time he put in paid off in inspiring others with his talk! Also thanks to the speakers and volunteers, Andreas, Holly, Jim, Alexey, Javiera, Ana, Ruwaida and Jasmin, for their effort and time. I would also like to thank Alberto Fanjul; I hope we can manage our time and efforts better for running another event in Edinburgh. Thanks to Leyla Marcelo for the design of the poster, and to Alexey Riepenhausen for the videos and professional photos! Thanks to the University of Edinburgh, Fedora, and GNOME for supporting this event.


Testing Flicker Free Boot on Fedora 29

For those of you who want to give the new Flicker Free Boot enhancements for Fedora 30 a try on Fedora 29, this is now possible, since the latest F29 bugfix update for plymouth also includes the new theme used in Fedora 30.

If you want to give this a try, add "plymouth.splash_delay=0 i915.fastboot=1" to your kernel commandline:

  1. Edit /etc/default/grub, add "plymouth.splash_delay=0 i915.fastboot=1" to GRUB_CMDLINE_LINUX (see the example after this list)

  2. Run "sudo grub2-mkconfig -o /etc/grub2-efi.cfg"

Note that i915.fastboot=1 causes the backlight to not work on Haswell CPUs (e.g. i5-42xx CPUs); this is fixed in the 5.0 kernels which are currently available in rawhide/F30.

Run the following commands to get the updated plymouth and the new theme and to select the new theme:

  1. "sudo dnf update plymouth*"

  2. "sudo dnf install plymouth-theme-spinner"

  3. "sudo cp /usr/share/pixmaps/fedora-gdm-logo.png /usr/share/plymouth/themes/spinner/watermark.png"

  4. "sudo plymouth-set-default-theme -R bgrt"

Now, on the next boot or when installing offline updates, you should get the new theme.
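If you want to double-check which theme ended up being selected, plymouth-set-default-theme can tell you; as far as I know, running it without arguments prints the current default, and --list shows everything installed:

plymouth-set-default-theme
plymouth-set-default-theme --list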