June 23, 2019

Something to show for

Since my last blog post, I have kept working on my GSoC project with guidance from my mentor, and we finally have some UI to show: a very primitive version of the Savestates Menu.

Primitive Savestates Menu

The current capabilities of this menu are as follows:

  • List the available savestates (they are currently named according to their creation date)
  • Load any of the savestates on click
  • Manually create a savestate of the current game using the “Save” button

Unfortunately, along with the progress that was made, we also encountered a bug with the NintendoDS core that causes Games to crash if we attempt to load a savestate. We are not yet 100% sure whether the bug is caused by my changes or by the NintendoDS core itself.

I hope we are able to fix it by the end of the summer, although I am not even sure where to start, since savestates work perfectly fine with other cores. Another confusing matter is that the Restart/Resume dialog works fine with the NintendoDS core, and it also uses savestates. This led me to believe that perhaps cores can be used to load savestates only once, but this can’t be the problem since we re-instantiate the core every time we load a savestate.

In the worst case we might just have to make a special case for the NintendoDS core and not use savestates with it, except for the Resume/Restart dialog. This would sadden me deeply since there are plenty of NintendoDS games which could benefit from this feature.

On the flip side, I have successfully been able to use the new menu to get a better high score at Asteroids by loading a previous savestate every time I lost a life 🙂 This was one of the first use cases I thought about when I decided to work on this project, and it makes me very happy to see that our work has made it possible.

Labor of love

While this may not be the way the game was initially meant to be played, and some might argue it’s “cheating”, it was still a very fun way to spend an evening after a long day of work.

In the end that’s why we play games: to have fun 🙂

June 21, 2019

Rebasing downstream translations

At Endless, we maintain downstream translations for a number of GNOME projects, such as gnome-software, gnome-control-center and gnome-initial-setup. These are projects where our (large) downstream modifications introduce new user-facing strings. Sometimes, our translation for a string differs from the upstream translation for the same string. This may be due to:

  • a deliberate downstream style choice – such as tú vs. usted in Spanish
  • our fork of the project changing the UI so that the upstream translation does not fit in the space available in our UI – “Suspend” was previously translated into German as „In Bereitschaft versetzen“, which we changed to „Bereitschaft“ for this reason
  • the upstream translation being incorrect
  • the whim of a translator

Whenever we update to a new version of GNOME, we have to reconcile our downstream translations with the changes from upstream. We want to preserve our intentional downstream changes, and keep our translations for strings that don’t exist upstream; but we also want to pull in translations for new upstream strings, as well as improved translations for existing strings. Earlier this year, the translation-rebase baton was passed to me. My predecessor would manually reapply our downstream changes for a set of officially-supported languages, but unlike him, I can pretty much only speak English, so I needed something a bit more mechanical.

I spoke to various people from other distros about this problem.[1] A common piece of advice was to not maintain downstream translation changes: appealing, but not really an option at the moment. I also heard that Ubuntu follows a straightforward rule: once the translation for a string has been changed downstream, all future upstream changes to the translation for that string are ignored. The assumption is that all downstream changes to a translation must have been made for a reason, and should be preserved. This is essentially a superset of what we’ve done manually in the past.

I wrote a little tool to implement this logic: translate-o-tron 3000 (or “t3k” for short).[2] Its “rebase” mode takes the last common upstream ancestor, the last downstream commit, and a working copy with the newest downstream code. For each locale, for each string in the translation in the working copy, it compares the old upstream and downstream translations – if they differ, it merges the latter into the working copy. For example, Endless OS 3.5.x was based on GNOME 3.26; Endless OS 3.6.x is based on GNOME 3.32. I might rebase the translations for a module with:

$ cd src/endlessm/gnome-control-center

# The eos3.6 branch is based on the upstream 3.32.1 tag
$ git checkout eos3.6

# Update the .pot file
$ ninja -C build meson-gnome-control-center-2.0-pot

# Update source strings in .po files
$ ninja -C build meson-gnome-control-center-2.0-update-po

# The eos3.5 branch was based on the upstream 3.26.1 tag;
# merge downstream changes between those two into the working copy
$ t3k rebase `pwd` 3.26.1 eos3.5

# Optional: Python's polib formats .po files slightly differently to gettext;
# reformat them back. This has no semantic effect.
$ ninja -C build meson-gnome-control-center-2.0-update-po

$ git commit -am 'Rebase downstream translations'

It also has a simpler “copy” mode which copies translations from one .po file to another, either when the string is untranslated in the target (the default) or for all strings. In some cases, we’ve purchased translations for strings which have not been translated upstream; I’ve used this to submit some of those upstream, such as Arabic translations of the myriad OARS categories, and hope to do more of that in the future now I can make a computer do the hard work.

$ t3k copy \
    ~/src/endlessm/gnome-software/po/ar.po \
    ~/src/gnome/gnome-software/po/ar.po
  1. I’d love to credit the individuals I spoke to but my memory is awful. Please let me know if you remember being on the other side of these conversations and I’ll update this post!
  2. Thanks to Alexandre Franke for pointing out the existence of at least one existing tool called “pomerge”. In my defence, I originally wrote this script on a Eurostar with no internet connection, so couldn’t search for conflicts at the time.

June 20, 2019

Preparing the bzip2-1.0.7 release

ATTENTION ALL DISTRIBUTIONS: this is for you. THE SONAME MAY CHANGE!

I am preparing a bzip2-1.0.7 release. You can see the release notes, which should be of interest:

  • Many historical patches from various distributions are integrated now.

  • We have a new fix for the just-published CVE-2019-12900, courtesy of Albert Astals Cid.

  • Bzip2 has moved to Meson for its preferred build system, courtesy of Dylan Baker. For special situations, a CMake build system is also provided, courtesy of Micah Snyder.

What's with the soname?

From bzip2-1.0.1 (from the year 2000) until bzip2-1.0.6 (from 2010), release tarballs came with a special Makefile-libbz2_so to generate a shared library instead of a static one.

This never used libtool or anything; it specified linker flags by hand. Various distributions either patched this special makefile, or replaced it by another one, or outright replaced the complete build system with a different one.

Some things to note:

  • This hand-written Makefile-libbz2_so used a link line like $(CC) -shared -Wl,-soname -Wl,libbz2.so.1.0 -o libbz2.so.1.0.6. This means: make the DT_SONAME field inside the ELF file be libbz2.so.1.0 (note the two digits in 1.0), and make the filename of the shared library be libbz2.so.1.0.6.

  • Fedora patched the soname in a patch called "saneso" to just be libbz2.so.1.

  • Stanislav Brabec, from openSUSE, replaced the hand-written makefiles with autotools, which meant using libtool. It has this interesting note:

Incompatible changes:

soname change. Libtool has no support for two parts soname suffix (e. g. libbz2.so.1.0). It must be a single number (e. g. libbz2.so.1). That is why soname must change. But I see not a big problem with it. Several distributions already use the new number instead of the non-standard number from Makefile-libbz2_so.

(In fact, if I do objdump -x /usr/lib64/*.so | grep SONAME, I see that most libraries have single-digit sonames.)

In my experience, both Fedora and openSUSE are very strict, and correct, about obscure things like library sonames.

With the switch to Meson, bzip2 no longer uses libtool. It will have a single-digit soname — this is not in the meson.build yet, but expect it to be there within the next couple of days.

I don't know what distros that decided to preserve the 1.0 soname will need to do; maybe they will have to patch meson.build on their own.

Fortunately, the API/ABI are still exactly the same. You can keep the old soname your distro was using, and linking against libbz2 will probably keep working as usual.

(This is a C-only release as usual. The Rust branch is still experimental.)

June 19, 2019

Midsomer Maps

Since it's been kind of a tradition for me to do some blogging around midsummer, I thought we might as well keep with that tradition this year as well… And there's been some nice news in the latest beta release of Maps, 3.33.3.

First, James Westman has implemented a new, improved “Send to” dialog, as the old one had some problems. The way it interacted with the Clocks and Weather apps was a bit strange, adding the exact place (let's say a shop or a pub) as an entry in e.g. Weather, which is most likely not what the user intended. So the new dialog will offer to add the nearest city (or rather METAR weather station):

  


It also includes a summary with the name and address of the place, its raw coordinates, and a link to the corresponding raw object in the OpenStreetMap database, as well as buttons to copy this information to the clipboard so it can be pasted elsewhere, and a button to initiate an e-mail message using the default e-mail client, with this information in the body and the title of the place as the subject. Furthermore, along with the entries to add the nearby location to the Weather and Clocks apps, any other apps you have that are capable of opening geo: URIs will appear at the end. In the case above I have JOSM (an OpenStreetMap editor written in Java, allowing nitty-gritty editing of OpenStreetMap data), so selecting that would open an area centered around this location in that app (however, I also discovered a couple of bugs in JOSM's geo: handling, so you'll need their latest snapshot release for it to work).

Along with this, I started tinkering with allowing OpenStreetMap URLs to be entered as search terms in Maps, so that you could open such URLs directly, without resorting to a browser. Ideally I would have liked it to be somehow possible to register as some kind of “partial” URL handler for http(s), restricted to certain patterns, but this is not currently possible with the MIME support we have. So that seems like a distant dream for now… Oh, and a somewhat crazy idea might be to attempt grokking some subset of Google Maps URLs.

The other big thing is that I completely rewrote the search engine, so now it uses either GraphHopper's API or the search API of the Photon project. GraphHopper also uses Photon, but through their legacy API layer. The reason I implemented support for both is that GraphHopper was fine with us using their service. The good thing is that quite a lot of the JSON parsing could be handled by a common module. I also made it so that the search provider is auto-configured through the service file, so when/if GraphHopper switches to standard Photon we can switch, and existing Maps clients will automagically use the new endpoint. Or if we want to change provider, that could also be done seamlessly this way.

What's more is that this finally gives us “search-as-you-type”, and this is something that calls for a video:



Not only those things, I also got around to fixing an old, actually crashing, bug triggered when a user has more than some 250 contacts with addresses associated in their Evolution address book (such as when coupling with an enterprise Exchange server).

That's it for now, I guess… :-)

Gthree update, It moves!

Recently I have been backporting some missing three.js features and fixing some bugs. In particular, gthree now supports:

  • An animation system based on keyframes and interpolation.
  • Skinning, where a model can have a skeleton and modifying a bone affects the whole model.
  • Support in the glTF loader for the above.

This is pretty cool as it enables us to easily load and animate character models. Check out this video:

GNOME ED Update – April/May

It’s time for another update on what the GNOME Foundation has been up to, this time in April and May.

Events

We’ve been to a number of events in the last couple of months. April saw myself, Kristi, Bastian, Anisa and Stefano at FOSS-North in Sweden. Zeeshan Ali presented a talk on Open Source Geolocation.

At the end of April, Molly de Blanc and Sri Ramkrishna were at Linux Fest North West. Additionally, Molly delivered a talk related to community guideline enforcement, which was featured on the LFNW web page.

We also had a couple of hackfests in May – the Rust+GNOME Hackfest #5 in Berlin at the start of the month, and the GStreamer Spring Hackfest 2019 in Oslo at the end of May.

Coming up in July, we’ll be attending OSCON and holding the West Coast Hackfest – a combined 3-in-1 hackfest bringing together the GTK, Documentation and Engagement teams!

GUADEC and GNOME.Asia

GUADEC and GNOME.Asia planning is now very much underway, and both have now announced their venues and dates – GUADEC will be in Thessaloniki, Greece at the end of August, and GNOME.Asia will be in Gresik, Indonesia in mid-October! As always, we’re after sponsors for both of these, so if you know of any organisations who can help, please pass along our sponsorship brochure.

Staffing

For those that didn’t see my announcement, Molly de Blanc joined the Foundation as our Strategic Initiatives Manager! Molly comes from the Free Software Foundation where she was the Campaigns Manager, working on community organising around digital rights issues.
She’s also the President of the Board of Directors of the Open Source Initiative, and on the Debian Outreach and Anti-harassment teams. Regularly speaking at conferences around the world, she has represented multiple projects in community and corporate contexts.

Discourse

We’ve also been trying something new – we moved the gtk lists away from Mailman and over to discourse.gnome.org. The uptake has been rather impressive – we’re now seeing more topics on Discourse then all gtk-* lists grouped together, and more and more people engaged. We also moved over builder, and are looking at other lists, with a possible goal of eventually retiring mailman all together for general purpose discussions. If you’re interested, let me know!

Google Summer of Code

The Google Summer of Code internships are now underway, and we have a total of 10 students working for GNOME:

  • Sajeer Ahamed Riyaf – Converting GStreamer plugins to Rust
  • Srestha Srivastava – Improve GNOME Boxes express-installations by adding support to tree-based installations
  • AJ Jordan – Implement a Migration Assistant
  • Andrei Lişiţă – Add a saved states manager for GNOME Games
  • Xiang Fan – Update of gtk-rs to support GTK4
  • Stefanos Dimos – Adding the ability to preview links in Polari
  • Mayank Sharma – Improve Google-Drive support for Gvfs
  • Ravgeet Dhillon – Rework the GTK website
  • Sumaid – Full stack MusicBrainz integration for GNOME Music
  • Gaurav Agrawal – Implement side by side diffs in Gitg

This is a fantastic set of projects, and I’m sure all students will be welcomed warmly!

F30 release parties in Prague and Brno

As it’s our tradition since Fedora 15 we’ve organized Fedora 30 release parties in the Czech Republic. Normally the Brno one is earlier, but this time the Prague took place one week before the Brno one – on June 4.

The Prague party was again held in the offices of Etnetera, a Fedora-friendly software company. I was worried about the attendance because at the same time the biggest demonstration since 1989 (100k+ people) was taking place in Prague, and a lot of our friends went there. A lot of old faces didn’t show up, but they were replaced by quite a few new faces (which I think is partly due to posting an invitation to the biggest Czech Linux group on Facebook), and in the end the attendance was the same or a bit higher than last time – around 30 people.

F30 release party in Prague

We’ve prepared 5 talks for visitors. I started with news in Fedora Workstation and also added a pack of news in Fedora Silverblue. We try to make the release parties as informal as possible, so the talks should not be lectures where one is talking and the rest is listening in silence. My talk was again mixed with a lot of discussion and instead of 30-40 min, it took 1h20m.

Then Petr Hráček introduced Packit, the project he’s working on. As someone who maintains packages in Fedora, I find the idea interesting: in package maintenance there is a lot of work that can be automated, and if there is a tool that can help you with that, great! The only thing that limits my enthusiasm about Packit is that it relies on having YAML files in the upstream repo. And you know how dismissive some upstream projects are about hosting any downstream-specific files…

The next two talks were delivered by Fedora QA guys – František Zatloukal and Lukáš Růžička. František talked about how they test Fedora, what tools they use and how you can help them. Lukáš talked about how to report bugs in a useful way.

The fifth talk, which was supposed to be on GNOME Builder, was cancelled because we were considerably over time, but its author – Ondřej Kolín – promised that he’d turn it into an article on mojefedora.cz.

To continue with the bad luck, the release party in Brno a week later had a time conflict with another demonstration against our prime minister. And this time it had an impact on the attendance: around 40 people showed up, while we normally get twice as many. I hope he will go away, so that there are no longer any demonstrations against him that lower the attendance of our release parties 🙂

The party took place in the offices of Red Hat and the program of talks was exactly the same as in Prague.

F30 release party in Brno
F30 release party in Brno

At both parties we also had cool swag for the participants, especially the brand-new Fedora handbook that arrived from the printing shop just before the Prague party.

New Fedora handbooks

libinput and tablet proximity handling

This is merely an update on the current status quo; if you read this post in a year's time, some of the details may have changed.

libinput provides an API to handle graphics tablets, i.e. the tablets that are used by artists. The interface is based around tools, each of which can be in proximity at any time. "Proximity" simply means "in detectable range". libinput promises that any interaction is framed by a proximity in and proximity out event pair, but getting to this turned out to be complicated. libinput has seen a few changes recently here, so let's dig into those. Remember that proverb about seeing what goes into a sausage? Yeah, that.
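
Before we get to the sausage-making, here is a minimal sketch of what that promise looks like from the caller's side (event loop, device setup and error handling omitted; the enum and function names are from the public libinput tablet-tool API):

#include <libinput.h>

/* Sketch only: axis events are guaranteed to arrive between a
 * proximity-in/proximity-out pair for the same tool. */
static void
handle_event(struct libinput_event *ev)
{
        struct libinput_event_tablet_tool *tev;

        switch (libinput_event_get_type(ev)) {
        case LIBINPUT_EVENT_TABLET_TOOL_PROXIMITY:
                tev = libinput_event_get_tablet_tool_event(ev);
                if (libinput_event_tablet_tool_get_proximity_state(tev) ==
                    LIBINPUT_TABLET_TOOL_PROXIMITY_STATE_IN) {
                        /* tool is now in detectable range */
                } else {
                        /* tool left range, nothing more for this tool
                         * until the next proximity in */
                }
                break;
        case LIBINPUT_EVENT_TABLET_TOOL_AXIS:
                /* position/pressure/tilt updates, only ever in proximity */
                break;
        default:
                break;
        }
}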

In the kernel API, the proximity events for pens are the BTN_TOOL_PEN bit. If it's 1, we're in proximity; if it's 0, we're out of proximity. That's the theory.
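
In raw evdev terms, that theory looks roughly like this (a sketch, reading from an already-opened evdev file descriptor):

#include <linux/input.h>
#include <unistd.h>

/* Sketch: watch the BTN_TOOL_PEN bit on a raw evdev device. */
static void
watch_pen_proximity(int fd)
{
        struct input_event ev;

        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
                if (ev.type == EV_KEY && ev.code == BTN_TOOL_PEN) {
                        if (ev.value)
                                ; /* pen came into proximity */
                        else
                                ; /* pen left proximity - in theory */
                }
        }
}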

Wacom tablets (or rather the kernel driver) always reset all axes on proximity out. So libinput needs to take care not to send a 0 value to the caller, unless you want a jump to the top left corner every time you move the pen away from the tablet. Some Wacom pens have serial numbers and we use those to uniquely identify a tool. But some devices start sending proximity and axis events before we get the serial numbers, which means we can't identify the tool until several ms later. In that case we simply discard the serial. This means we cannot uniquely identify those pens, but so far no-one has complained.

A bunch of tablets (HUION) don't have proximity at all. For those, we start getting events and then stop getting events, without any other information. So libinput has a timer - if we don't get events for a given time, we force a proximity out. Of course, this means we also need to force a proximity in when the next event comes in. These tablets are common enough that recently we just enabled the proximity timeout for all tablets. Easier than playing whack-a-mole, doubly so because HUION re-uses USB ids so you can't easily identify them anyway.

Some tablets (HP Spectre 13) have proximity but never send it. So they advertise the capability, just don't generate events for it. Same handling as the ones that don't have proximity at all.

Some tablets (HUION) have proximity, but only send it once per plug-in, after that it's always in proximity. Since libinput may start after the first pen interaction, this means we have to a) query the initial state of the device and b) force proximity in/out based on the timer, just like above.

Some tablets (Lenovo Flex 5) sometimes send proximity out events, but sometimes do not. So for those we have a timer and forced proximity events, but only when our last interaction didn't trigger a proximity event.

The Dell Active Pen always sends a proximity out event, but with a delay of ~200ms. That timeout is longer than the libinput timeout so we'll get a proximity out event, but only after we've already forced proximity out. We can just discard that event.

The Dell Canvas pen (identifies as "Wacom HID 4831 Pen") can have random delays of up to ~800ms in its event reporting, which would trigger forced proximity out events in libinput. Luckily it always sends proximity out events, so we can add a quirk for it that specifically disables the timer.

The HP Envy x360 sends a proximity in for the pen, followed by a proximity in from the eraser in the next event. This is still an unresolved issue at the time of writing.

That's the current state of things; I'm sure it'll change again in a few months' time as more devices decide to be creative. They are artists' tools, after all.

The lesson to take away here: all of the above are special cases that need to be implemented, but this can only be done on demand. There's no way any one person can test every single device out there, and testing by vendors is often nonexistent. So if you want your device to work, don't complain on some random forum; file a bug and help with debugging and testing instead.

June 18, 2019

libinput and the Dell Canvas Totem

We're on the road to he^libinput 1.14 and last week I merged the Dell Canvas Totem support. "Wait, what?" I hear you ask, and "What is that?". Good question - but do pay attention to random press releases more. The Totem (Dell.com) is a round knob that can be placed on the Dell Canvas. Which itself is a pen and touch device, not unlike the Wacom Cintiq range if you're familiar with those (if not, there's always lmgtfy).

The totem's intended use is as a secondary device - you place it on the screen while you're using the pen and up pops a radial menu. You can rotate the totem to select items, click it to select something and bang, you're smiling like a stock photo model eating lettuce. The radial menu is just an example UI; there are plenty of others. I remember reading papers about bimanual interaction with similar interfaces that dated back to the 80s, so there's a plethora to choose from. I'm sure someone at Dell has written Totem-Pong and if they have not, I really question their job priorities. The technical side is quite simple: the totem triggers a set of touches in a specific configuration, and when the firmware detects that arrangement it knows this isn't a finger but the totem.

Pen and touch we already handle well, but the totem required kernel changes and a few new interfaces in libinput. And that was the easy part, the actual UI bits will be nasty.

The kernel changes went into 4.19 and as usual you can throw noises of gratitude at Benjamin Tissoires. The new kernel API basically boils down to the ABS_MT_TOOL_TYPE axis sending MT_TOOL_DIAL whenever the totem is detected. That axis is (like others of the ABS_MT range) an odd one out. It doesn't work as an axis but rather as an enum that specifies the tool within the current slot. We already had finger, pen and palm; adding another enum value means, well, now we have a "dial". And that's largely it in terms of API - handle the MT_TOOL_DIAL and you're good to go.

libinput's API is only slightly more complicated. The tablet interface has a new tool type called LIBINPUT_TABLET_TOOL_TYPE_TOTEM and a new pair of axes for the tool: the size of the touch ellipse. With that you can get the position of the totem and its size (so you know how big the radial menu needs to be). And that's basically it in regards to the API. The actual implementation was a bit more involved, especially because we needed to implement location-based touch arbitration first.
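
To make that concrete, here is a rough sketch of what a caller might do. The tool type is the new API addition; the touch-ellipse getter names below are my assumption from memory, so double-check them against the 1.14 documentation:

#include <stdio.h>
#include <libinput.h>

/* Sketch: called for tablet-tool events once the totem is in proximity. */
static void
handle_totem(struct libinput_event_tablet_tool *tev)
{
        struct libinput_tablet_tool *tool =
                libinput_event_tablet_tool_get_tool(tev);

        if (libinput_tablet_tool_get_type(tool) !=
            LIBINPUT_TABLET_TOOL_TYPE_TOTEM)
                return;

        /* where the totem sits... */
        double x = libinput_event_tablet_tool_get_x(tev);
        double y = libinput_event_tablet_tool_get_y(tev);

        /* ...and the size of the touch ellipse, i.e. how big the radial
         * menu around it should be (getter names assumed, see above) */
        double major = libinput_event_tablet_tool_get_size_major(tev);
        double minor = libinput_event_tablet_tool_get_size_minor(tev);

        printf("totem at %.1f/%.1f, size %.1fx%.1f\n", x, y, major, minor);
}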

I haven't started on the Wayland protocol additions yet but I suspect they'll look the same as the libinput API (the Wayland tablet protocol is itself virtually identical to the libinput API). The really big changes will of course be in the toolkits and the applications themselves. The totem is not a device that slots into existing UI paradigms, it requires dedicated support. Whether this will be available in your favourite application is likely going to be up to you. Anyway, christmas in July [1] is coming up so now you know what to put on your wishlist.

[1] yes, that's a thing. Apparently christmas with summery temperature, nice weather, sandy beaches is so unbearable that you have to re-create it in the misery of winter. Explains everything you need to know about humans, really.

June 16, 2019

How to debug react-native app on Android Avd using clojurescript

To get started with react-native and ClojureScript, please refer to my previous blog posts (https://anish-patil.blogspot.com/2019/02/how-to-create-react-native-project-with.html).


For debugging react-native, I am using https://github.com/jhen0409/react-native-debugger on a Fedora (Linux) desktop. React Native Debugger is available in binary format and can be found at https://github.com/jhen0409/react-native-debugger/releases.


However, for RN-debugger one needs to turn on the debugger options in the Android emulator. To do that, run the following command in a terminal:

adb shell input keyevent 82

This will bring up the debugging options menu on the Android emulator, as follows:


Choose "Debug JS Remotely" for RN-debugger. 

Note: if you are not using the Android emulator but a phone directly, then you need to shake your phone to get the above options!

Happy Lisping/Clojuring!

June 15, 2019

Calendar management dialog, archiving task lists, Every Detail Matters on Settings (Sprint 2)

The Sprint series comes out every 3 weeks or so. The focus will be on the apps I maintain (Calendar, To Do, and Settings), but it may also include other applications that I contribute to.

The second sprint was great for GNOME To Do and GNOME Settings! GNOME Calendar is going through a big rewrite, and there is nothing demo-worthy this week.

GNOME To Do: Archiving task lists

Let’s start with GNOME To Do, since it received an exciting new feature.

The focus during this sprint was on archiving task lists on GNOME To Do.

This was a long-time request, and something that I myself was missing when using To Do. Since it fits well with the product vision of the app, there was nothing preventing it from being implemented.

Selecting this feature to be implemented during the week was a great choice – the task was self-contained, had a clear end, and was difficult just enough to be challenging, but not more than that.

However, I found a few issues with the implementation, and want to use the next round to polish the feature. Using the entire week to polish the feature might be too much, but it will give me some time to really make it great.

GNOME Settings: Every Detail Matters

Thanks to the help of Allan, Tobias and Sam, we ran a successful session of Every Detail Matters, focused on GNOME Settings. The tasks ranged from small to big ones, but we tried to scope it to fit a single week of work. Here are some quick highlights of the resulting work:

I’m happy to say, it worked very well! In fact, way better than what I was expecting. We had two new contributors helping out, two long-term contributors leading the development efforts, and three designers involved.

It was not perfect, though; after doing a retrospective, we identified various points where we could have done things differently in order to increase engagement, reduce stress and improve how design is done before development starts.

GNOME Calendar: Improving the calendar management dialog

This time, Calendar is going through another massive rewrite. This is not a single-week task, and I believe I correctly assessed that it would take 2 weeks to complete it.

What is being rewritten is the calendar management dialog. This is some code I wrote back in 2015, and my older self is deeply dissatisfied with the decisions my younger self made. Like, not breaking up the code in sweet small digestible chunks. Really, if you plan to maintain an app, pay attention to how your code is organized.

This is still ongoing work and I will leave the demo for when it is ready.

Still Open Questions…

Running the Every Detail Matters session was a fantastic experience, but it also made me wonder about ways of dealing with contributors claiming tasks in free and open source communities.

Suppose you are the maintainer of an app, and a first time contributor claims a task; you know you would finish that task in less than an hour, but you still want to give the contributor the chance to, well, contribute. Turns out, the contributor takes a long time to finish the task, blocking development.

What’s the best way to handle this scenario?

The only alternative that comes to my mind is: do not allow first-time contributors to have tasks assigned. Instead, ask the new contributor to simply fix the issue they want to fix and send the contribution for review, without any task pre-assigned.

I like this alternative because it matches my experience as a maintainer: the vast majority of new contributors do not follow up on their contributions, leaving maintainers and other contributors hanging. Thus, I think that not allowing new contributors to block development is positive.

Do you, dear reader, have any other suggestions or ideas about it? If you do, I would very much like to know about it.

June 14, 2019

An OpenJPEG Surprise

My previous blog post seems to have resolved most concerns about my requests for Ubuntu stable release updates, but I again received rather a lot of criticism for the choice to make WebKit depend on OpenJPEG, even though my previous post explained clearly why there are not any good alternatives.

I was surprised to receive a pointer to ffmpeg, which has its own JPEG 2000 decoder that I did not know about. However, we can immediately dismiss this option due to legal problems with depending on ffmpeg. I also received a pointer to a resurrected libjasper, which is interesting, but since libjasper was removed from Ubuntu, its status is not currently better than OpenJPEG.

But there is some good news! I have looked through Ubuntu’s security review of the OpenJPEG code and found some surprising results. Half the reported issues affect the library’s companion tools, not the library itself. And the other half of the issues affect the libmj2 library, a component of OpenJPEG that is not built by Ubuntu and not used by WebKit. So while these are real security issues that raise concerns about the quality of the OpenJPEG codebase, none of them actually affect OpenJPEG as used by WebKit. Yay!

The remaining concern is that huge input sizes might cause problems within the library that we don’t yet know about. We don’t know because OpenJPEG’s fuzzer discards huge images instead of testing them. Ubuntu’s security team thinks there’s a good chance that fixing the fuzzer could uncover currently-unknown multiplication overflow issues, for instance, a class of vulnerability that OpenJPEG has clearly had trouble with in the past. It would be good to see improvement on this front. I don’t think this qualifies as a security vulnerability, but it is certainly a security problem that would facilitate discovering currently-unknown vulnerabilities if fixed.

Still, on the whole, the situation is not anywhere near as bad as I’d thought. Let’s hope OpenJPEG can be included in Ubuntu main sooner rather than later!

June 13, 2019

2019-06-13 Thursday.

  • B&B Hotel - lots of traffic noise through the closed window, ambulances to dream of; up lateish, train, Eurostar, train and so on. Reasonable connectivity - built ESC agenda a tad late while on the train encouraging using Collabora Online rather effectively; good.

libhandy 0.0.10

libhandy 0.0.10 just got released, and it comes with a few new adaptive widgets for your GTK app.

You can get this new version here.

The View Switcher

GNOME applications typically use a GtkStackSwitcher to switch between their views. This design works fine on a desktop, but not so well on really narrow devices like mobile phones, so Tobias Bernard designed a more modern and adaptive replacement — now available in libhandy as the HdyViewSwitcher.

In many ways, the HdyViewSwitcher functions very similarly to a GtkStackSwitcher: you assign it a GtkStack containing your application's pages, and it will display a row of side-by-side, homogeneously-sized buttons, each representing a page. It differs in that it can display both the title and the icon of your pages, and that the layout of the buttons automatically adapts to a narrower version, depending on the available width.

We have also added a view switcher bar, designed to be used at the bottom of the window: HdyViewSwitcherBar.

Thanks a lot to Zander Brown for their prototypes!

The view switchers in action in a modified GNOME Clocks.

The Squeezer

To complete the view switcher design, we needed a way to automatically switch between having a view switcher in the header bar and a view switcher bar at the bottom of the window.

For that we added HdySqueezer: give it widgets, and it will show the first one that fits the available space. A common way to use it would be:


<object class="GtkHeaderBar">
  <property name="title">Application</property>
  <child type="title">
    <object class="HdySqueezer">
      <property name="transition-type">crossfade</property>
      <signal name="notify::visible-child" handler="on_child_changed"/>
      <child>
        <object class="HdyViewSwitcher" id="view_switcher">
          <property name="stack">pages</property>
        </object>
      </child>
      <child>
        <object class="GtkLabel" id="title_label">
          <property name="label">Application</property>
          <style>
            <class name="title"/>
          </style>
        </object>
      </child>
    </object>
  </child>
</object>

In the example above, if there is enough space the view switcher will be visible in the header bar; if not, a widget mimicking the window's title will be displayed instead. Additionally, you can reveal or conceal a HdyViewSwitcherBar at the bottom of your window depending on which widget is presented by the squeezer, so that only a single view switcher is shown at a time.
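
For completeness, here is a sketch of what the on_child_changed handler referenced above could do, assuming you also have a HdyViewSwitcherBar in the window. The function names follow the libhandy 0.0.10 C API, but treat the snippet as illustrative rather than canonical:

#include <handy.h>

/* Looked up from the same GtkBuilder UI definition as the example above. */
static GtkWidget *title_label;
static HdyViewSwitcherBar *switcher_bar;

static void
on_child_changed (GObject *squeezer, GParamSpec *pspec, gpointer user_data)
{
  GtkWidget *visible =
    hdy_squeezer_get_visible_child (HDY_SQUEEZER (squeezer));

  /* If the squeezer fell back to the plain title label, the header bar
   * switcher did not fit; reveal the switcher bar at the bottom instead. */
  hdy_view_switcher_bar_set_reveal (switcher_bar, visible == title_label);
}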

Another Header Bar?

To make the view switcher work as intended, we need to make sure it is always strictly centered; we also need to make sure the view switcher fills the whole height of the header bar. Unfortunately, neither is possible with GtkHeaderBar in GTK 3, so I forked it as HdyHeaderBar: first, to not force its title widget to be vertically centered, allowing it to fill all the available height; and second, to allow choosing between strictly centering its title widget or loosely centering it (as GtkHeaderBar does).

The Preferences Window

To simplify writing modern, adaptive and featureful applications, I wrote a generic preferences window you can use to implement your application's preferences window: HdyPreferencesWindow.

It is organized this way (see the C sketch after the list):

  • the window contains pages implemented via HdyPreferencesPage;
  • pages have a title, and contain preferences groups implemented via HdyPreferencesGroup;
  • groups can have a title, description, and preferences implemented via rows (HdyPreferencesRow) or any other widget;
  • preferences implemented via HdyPreferencesRow have a name, and can be searched via their page title, group title or name;
  • HdyActionRow is a derivative of HdyPreferencesRow, so you can use it (and its derivatives) to easily implement your preferences.
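
Putting the pieces from the list above together in C might look roughly like this. This is a minimal sketch assuming the 0.0.10 API; the titles and the container-based nesting are illustrative rather than copied from a real application:

#include <handy.h>

static GtkWidget *
build_preferences_window (void)
{
  /* Pages, groups and rows nest as described in the list above. */
  GtkWidget *window = GTK_WIDGET (hdy_preferences_window_new ());
  GtkWidget *page = GTK_WIDGET (hdy_preferences_page_new ());
  GtkWidget *group = GTK_WIDGET (hdy_preferences_group_new ());
  GtkWidget *row = GTK_WIDGET (hdy_action_row_new ());

  hdy_preferences_page_set_title (HDY_PREFERENCES_PAGE (page), "General");
  hdy_preferences_group_set_title (HDY_PREFERENCES_GROUP (group), "Appearance");
  hdy_action_row_set_title (HDY_ACTION_ROW (row), "Dark theme");

  gtk_container_add (GTK_CONTAINER (group), row);
  gtk_container_add (GTK_CONTAINER (page), group);
  gtk_container_add (GTK_CONTAINER (window), page);

  return window;
}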

GNOME Web's preferences window re-implemented as a HdyPreferencesWindow.

\ˈfjuːtʃə\

The next expected version of libhandy is libhandy 1.0. It will come with quite a few API fixes, which is why a major version number bump is required. libhandy's API has been stable for many versions, and we will guarantee that stability starting from version 1.0.

June 12, 2019

2019-06-12 Wednesday.

  • Up early, a couple of dismal, late trains to Kings Cross, and the Eurostar to OW2Con / Paris. Dodgy taxi ride, arrived somewhat late for the session. Gave a talk on LibreOffice Online and then C'bras role in it briefly.
  • Caught up with Sigmund, Philippe from Arawa, Simon, and enjoyed a fine evening together with friends old & new.

June 11, 2019

Bzip2 in Rust: porting the randomization table

Here is a straightforward port of some easy code.

randtable.c has a lookup table with seemingly-random numbers. This table is used by the following macros in bzlib_private.h:

extern Int32 BZ2_rNums[512];

#define BZ_RAND_DECLS                          \
   Int32 rNToGo;                               \
   Int32 rTPos                                 \

#define BZ_RAND_INIT_MASK                      \
   s->rNToGo = 0;                              \
   s->rTPos  = 0                               \

#define BZ_RAND_MASK ((s->rNToGo == 1) ? 1 : 0)

#define BZ_RAND_UPD_MASK                       \
   if (s->rNToGo == 0) {                       \
      s->rNToGo = BZ2_rNums[s->rTPos];         \
      s->rTPos++;                              \
      if (s->rTPos == 512) s->rTPos = 0;       \
   }                                           \
   s->rNToGo--;

Here, BZ_RAND_DECLS is used to declare two fields, rNToGo and rTPos, in two structs (1, 2). Both are similar to this:

typedef struct {
   ...
   Bool     blockRandomised;
   BZ_RAND_DECLS
   ...
} DState;

Then, the code that needs to initialize those fields calls BZ_RAND_INIT_MASK, which expands into code to set the two fields to zero.

At several points in the code, BZ_RAND_UPD_MASK gets called, which expands into code that updates the randomization state, or something like that, and uses BZ_RAND_MASK to get a useful value out of the randomization state.

I have no idea yet what the state is about, but let's port it directly.

Give things a name

It's interesting to see that no code except for those macros uses the fields rNToGo and rTPos, which are declared via BZ_RAND_DECLS. So, let's make up a type with a name for that. Since I have no better name for it, I shall call it just RandState. I added that type definition in the C code, and replaced the macro-which-creates-struct-fields with a RandState-typed field:

-#define BZ_RAND_DECLS                          \
-   Int32 rNToGo;                               \
-   Int32 rTPos                                 \
+typedef struct {
+   Int32 rNToGo;
+   Int32 rTPos;
+} RandState;

...

-      BZ_RAND_DECLS;
+      RandState rand;

Since the fields now live inside a sub-struct, I changed the other macros to use s->rand.rNToGo instead of s->rNToGo, and similarly for the other field.

Turn macros into functions

Now, three commits (1, 2, 3) to turn the macros BZ_RAND_INIT_MASK, BZ_RAND_MASK, and BZ_RAND_UPD_MASK into functions.

And now that the functions live in the same C source file as the lookup table they reference, the table can be made static const to avoid having it as read/write unshared data in the linked binary.

Premature optimization concern: doesn't de-inlining those macros cause performance problems? At first, we will get the added overhead from a function call. When the whole code is ported to Rust, the Rust compiler will probably be able to figure out that those tiny functions can be inlined (or we can #[inline] them by hand if we have proof, or if we have more hubris than faith in LLVM).

Port functions and table to Rust

The functions are so tiny, and the table so cut-and-pasteable, that it's easy to port them to Rust in a single shot:

#[no_mangle]
pub unsafe extern "C" fn BZ2_rand_init() -> RandState {
    RandState {
        rNToGo: 0,
        rTPos: 0,
    }
}

#[no_mangle]
pub unsafe extern "C" fn BZ2_rand_mask(r: &RandState) -> i32 {
    if r.rNToGo == 1 {
        1
    } else {
        0
    }
}

#[no_mangle]
pub unsafe extern "C" fn BZ2_rand_update_mask(r: &mut RandState) {
    if r.rNToGo == 0 {
        r.rNToGo = RAND_TABLE[r.rTPos as usize];
        r.rTPos += 1;
        if r.rTPos == 512 {
            r.rTPos = 0;
        }
    }
    r.rNToGo -= 1;
}

Also, we define the RandState type as a Rust struct with a C-compatible representation, so it will have the same layout in memory as the C struct. This is what allows us to have a RandState in the C struct, while in reality the C code doesn't access it directly; it is just used as a struct field.

// Keep this in sync with bzlib_private.h:
#[repr(C)]
pub struct RandState {
    rNToGo: i32,
    rTPos: i32,
}

See the commit for the corresponding extern declarations in bzlib_private.h. With those functions and the table ported to Rust, we can remove randtable.c. Yay!

A few cleanups

After moving to another house one throws away useless boxes; we have to do some cleanup in the Rust code after the initial port, too.

Rust prefers snake_case fields rather than camelCase ones, and I agree. I renamed the fields to n_to_go and table_pos.

Then, I discovered that the EState struct doesn't actually use the fields for the randomization state. I just removed them.

Exegesis

What is that randomization state all about?

And why does DState (the struct used during decompression) need the randomization state, but EState (used during compression) doesn't need it?

I found this interesting comment:

      /*-- 
         Now a single bit indicating (non-)randomisation. 
         As of version 0.9.5, we use a better sorting algorithm
         which makes randomisation unnecessary.  So always set
         the randomised bit to 'no'.  Of course, the decoder
         still needs to be able to handle randomised blocks
         so as to maintain backwards compatibility with
         older versions of bzip2.
      --*/
      bsW(s,1,0);

Okay! So compression no longer uses randomization, but decompression has to support files which were compressed with randomization. Here, bsW(s,1,0) always writes a 0 bit to the file.

However, the decompression code actually reads the blockRandomised bit from the file so that it can see whether it is dealing with an old-format file:

GET_BITS(BZ_X_RANDBIT, s->blockRandomised, 1);

Later in the code, this s->blockRandomised field gets consulted; if the bit is on, the code calls BZ2_rand_update_mask() and friends as appropriate. If one is using files compressed with Bzip2 0.9.5 or later, those randomization functions are not even called.

Talk about preserving compatibility with the past.

Explanation, or building my headcanon

Bzip2's compression starts by running a Burrows-Wheeler Transform on a block of data to compress, which is a wonderful algorithm that I'm trying to fully understand. Part of the BWT involves sorting all the string rotations of the block in question.

Per the comment I cited, really old versions of bzip2 used a randomization helper to make sorting perform well in extreme cases, but not-so-old versions fixed this.

This explains why the decompression struct DState has a blockRandomised bit, but the compression struct EState doesn't need one. The fields that the original macro was pasting into EState were just a vestige from 1999, which is when Bzip2 0.9.5 was released.

What is my Project?

The main objectives of the project are to retrieve cover art for music files and also some metadata tags from the MusicBrainz database. Now, there are many MusicBrainz tags and identifiers which can be used to retrieve information from the MusicBrainz database, but we don’t need everything. Mainly, we need Album ID, Track ID, Recording ID and Artist ID. The names of these identifiers are self-explanatory; for example, the MusicBrainz Artist ID uniquely identifies an artist in the MusicBrainz database. So we are gonna deal with 4 MBIDs?
Not really!

The MusicBrainz Album ID is being deprecated in favour of the MusicBrainz Release ID and the MusicBrainz Release Group ID. So finally we have 5 MBIDs which we are going to use to retrieve information from the MusicBrainz database. Now a natural question arises: where do we get these MusicBrainz IDs for a file? There are two cases:

  • A music file already has MusicBrainz IDs stored inside it: in this case, all we need to do is extract those MBIDs and store them in Tracker. Tracker is a file indexing and search framework which GNOME Music relies on, hence it’s necessary to extract the MBIDs from the file and index them in Tracker.
  • A file doesn’t have MusicBrainz IDs: in this case, we first need to retrieve the MusicBrainz IDs based on AcoustID and store them in Tracker. Then, through Tracker writeback, we will also write those MBIDs into the file.

Once we have MusicBrainz IDs, we can write grilo plugins to retrieve cover art and tags.

First Two Weeks

The GSoC coding period officially began on 29th May, so it has been nearly two weeks now.
For a week I was at my home in Maharashtra state, so I couldn’t do much work, but I had already started working a week or so before the 29th.
From here onwards, I will be spending more time on my project, and I hope to finish the whole project by the end of July.

Coding Period Begins!! But what to code??

Jean Felder (my mentor for the GSoC project) and Marinus Schraal (GNOME Music maintainer) suggested that I propose a plan for the whole project. Now trust me, this is much more difficult than the actual coding!
I usually work on my personal projects and start from scratch, but here the project involves so many different libraries that I really struggled with making a plan with a proper timeline.
When I first started making the plan, my first instinct was to understand past patches; I understood some of them to some extent, but I still couldn’t see the big picture. So I talked to Jean, and he explained that first I need to work on Tracker and Grilo, and then on GNOME Music. First, I will present a short summary of my project.

June 10, 2019

Google Summer of Code 2019: Week 1 and 2

I got selected as a Google Summer of Code 2019 student with GNOME. It has been almost two weeks since the coding period started, and I've been working with two awesome mentors, Fabiano Fidencio and Felipe Borges.

Currently, GNOME Boxes is able to either do an express installation on a downloaded ISO, or download an ISO and offer the option to express-install it. My project aims to add support for express installations using the OSes’ network trees. This would reduce the download size and mainly benefit users with a not-so-good internet connection.

I first tried a few express installations (unattended installations) in Boxes from downloaded media and compared the generated command line with the one generated by virt-install when installing a virtual machine from the network.

I've added an "Install from a tree" button in Boxes; clicking it opens a new window which lists all the operating systems that haven't yet reached the end of their lifecycle. Now I am trying to remove from the list the operating systems that do not support tree-based installations.


I appreciate how Builder has made my life much easier by having a button to update dependencies, which pulls the upstream repositories of all the dependencies and builds them itself.

First Two Weeks at GSoC

Two weeks ago, I wasn’t sure about the technology that was to be used in this project. I was completely unfamiliar with some of the tools involved. But I backed myself and was able to pull things off.

Things have gone pretty well so far. I have learned a couple of new things which are going to form an important part of this project.

  1. Liquid
    • Liquid forms the basis of this website. All the conditionals and other logic are implemented with the mighty help of Liquid.
  2. Pipelines
    • Because the website needs continuous integration and deployment, GitLab CI is a perfect tool for the job. Building efficient pipelines is going to be an important task for this website.

The landing page is the centerpiece of this website and will provide routes to various other resources. I am working on some new sections and may remove or alter some of the existing ones. I am looking for someone to draw some artwork/illustrations that I need for this website. If you can help with this, please file an issue and we will have a healthy conversation. A wiki has also been made; all the important information about the project is present there. I have forked the original GTK website project into my workspace. The website is hosted on GitLab Pages for now and can be browsed here.

If you have any suggestions or find an issue, please report it here.

June 09, 2019

On Ubuntu Updates

I’d been planning to announce that Ubuntu has updated Epiphany appropriately in Ubuntu 18.04, 19.04, and its snap packaging, but it seems I took too long and Sebastien has beaten me to that. So thank you very much, Sebastien! And also to Ken and Marcus, for helping with the snap update. I believe I owe you, and also Iain, an apology. My last blog post was not very friendly, and writing unfriendly blog posts is not a good way to promote a healthy community. That wasn’t good for GNOME or for Ubuntu.

Still, I was rather surprised by some of the negative reaction to my last post. I took it for granted that readers would understand why I was frustrated, but apparently more explanation is required.

We’re Only Talking about Micro-point Updates!

Some readers complained that stable operating systems should not take updates from upstream, because they could  introduce new bugs. Well, I certainly don’t expect stable operating systems to upgrade to new major release versions. For instance, I wouldn’t expect Ubuntu 18.04, which released with GNOME 3.28, to upgrade packages to GNOME 3.32. That would indeed defeat the goal of providing a stable system to users. We are only talking about micro version updates here, from 3.28.0 to 3.28.1, or 3.28.2, or 3.28.3, etc. These updates generally contain only bugfixes, so the risk of regressions is relatively low. (In exceptional circumstances, new features may be added in point releases, but such occurrences are very rare and carefully-considered; the only one I can think of recently was Media Source Extensions.) That doesn’t mean there are never any regressions, but the number of regressions introduced relative to the number of other bugs fixed should be very small. Sometimes the bugs fixed are quite serious, so stable release updates are essential to providing a quality user experience. Epiphany stable releases usually contain (a) fixes for regressions introduced by the previous major release, and (b) fixes for crashes.

Other readers complained that it’s my fault for releasing software with  bugs in the first place, so I shouldn’t expect operating system updates to fix the bugs. Well, the first point is clearly true, but the second doesn’t follow at all. Expecting free software to be perfect and never make any bad releases is simply unreasonable. The only way to fix problems when they occur is with a software update. GNOME developers try to ensure stable branches remain stable and reliable, so operating systems packaging GNOME can have high confidence in our micro-point releases, even though we are not perfect and cannot expect to never make a mistake. This process works very well in other Linux-based operating systems, like Fedora Workstation.

How Did We Get Here?

The lack of stable release updates for GNOME in Ubuntu has been a serious ongoing problem for most of the past decade, across all packages, not just Epiphany. (Well, probably for much longer than a decade, but my first Ubuntu was 11.10, and I don’t claim to remember how it was before that time.) Look at this comment I wrote on an xscreensaver blog post in 2016, back when I had already been fed up for a long time:

Last week I got a bug report from a Mint user, complaining about a major, game-breaking bug in a little GNOME desktop game that was fixed two and a half years ago. The user only needed a bugfix-only point release upgrade (from the latest Mint version x.y.z to ancient version x.y.z+1) to get the fix. This upgrade would have fixed multiple major issues.

I would say the Mint developers are not even trying, but they actually just inherit this mess from Ubuntu.

So this isn’t just a problem for Ubuntu, but also for every OS based on Ubuntu, including Linux Mint and elementary OS. Now, the game in question way back when was Iagno. Going back to find that old bug, we see the user must have been using Iagno 3.8.2, the version packaged in Ubuntu 14.04 (and therefore the version available in Linux Mint at the time), even though 3.8.3, which fixed the bug, had been available for over two years at that point. We see that I left dissatisfied yet entirely-appropriate comments on Bugzilla, like “I hate to be a distro crusader, but if you use Linux Mint then you are gonna have to live with ancient bugs.”

So this has been a problem for a very long time.

Hello 2019!

But today is 2019. Ubuntu 14.04 is ancient history, and a little game like Iagno is hardly a particularly-important piece of desktop software anyway. Water under the bridge, right? It’d be more interesting to look at what’s going on today, rather than one specific example of a problem from years ago. So, checking the state of a few different packages in Ubuntu 19.04 as of Friday, June 7, I found:

  • gnome-shell 3.32.1 update released to Ubuntu 19.04 users on June 3, while 3.32.2 was released upstream on May 14
  • mutter 3.32.1 update released to Ubuntu 19.04 users on June 3, while 3.32.2 was released upstream on May 14 (same as gnome-shell)
  • glib 2.60.0 never updated in Ubuntu 19.04, while 2.60.1 was released upstream on April 15, and 2.60.3 is the current stable version
  • glib-networking 2.60.1 never updated in Ubuntu 19.04, while I released 2.60.2 on May 2
  • libsoup 2.66.1 never updated in Ubuntu 19.04, while 2.66.2 was released upstream on May 15

(Update: Sebastien points out that Ubuntu 19.04 shipped with git snapshots of gnome-shell and mutter very close to 3.32.1 due to release schedule constraints, which was surely a reasonable approach given the tight schedule involved. Of course, 3.32.2 is still available now.)

I also checked gnome-settings-daemon, gnome-session, and gdm. All of these are up-to-date in 19.04, but it turns out that there have not been any releases for these components since 3.32.0. So 5/8 of the packages I checked are currently outdated, and the three that aren’t had no new versions released since the original 19.04 release date. Now, eight packages is a small and very unscientific review — I haven’t looked at any packages other than the few listed here — but I think you’ll agree this is not a good showing. I leave it as an exercise for the reader to check more packages and see if you find similar results. (You will.)

Of course, I don’t expect all packages to be updated immediately. It’s reasonable to delay updates by a couple weeks, to allow time for testing. But that’s clearly not what’s happening here. (Update #2: Marco points out that Ubuntu is not shipping gnome-shell and mutter 3.32.2 yet due to specific regressions. So actually, I’m wrong and allowing time for testing is exactly what’s happening here, in these particular cases. Surprise! So let’s not count outdated gnome-shell and mutter against Ubuntu, and say 3/8 of the packages are old instead of 5/8. Still not great results, though.)

Having outdated dependencies like GLib 2.60.0 instead of 2.60.3 can cause just as serious problems as outdated applications: in Epiphany’s case, there are multiple fixes for name resolution problems introduced since GLib 2.58 that are missing from the GLib 2.60.0 release. When you use an operating system that provides regular, comprehensive stable release updates, like Fedora Workstation, you can be highly confident that you will receive such fixes in a timely manner, but no such confidence is available for Ubuntu users, nor for users of operating systems derived from Ubuntu.

So Epiphany and Iagno are hardly isolated examples, and these are hardly recent problems. They’re widespread and longstanding issues with Ubuntu packaging.

Upstream Release Monitoring is Essential

Performing some one-time package updates is (usually) easy. Now that the Epiphany packages are updated, the question becomes: will they remain updated in Ubuntu going forward? Previously, I had every reason to believe they would not. But for the first time, I am now cautiously optimistic. Look at what Sebastien wrote in his recent post:

Also while we have tools to track available updates, our reports are currently only for the active distro and not stable series, which is a gap and sometimes leads us to miss some updates.
I’ve now hacked up a stable report and reviewed the current output, and we will work on updating a few components that are currently outdated as a result.

It’s no wonder that you can’t reliably provide stable release updates without upstream release monitoring. How can you provide an update if you don’t know that the update is available? It’s too hard for humans to manually keep track of hundreds of packages, especially with limited developer resources, so quality operating systems have an automated process for upstream release monitoring to notify them when updates are available. In Fedora, we use https://release-monitoring.org/ for most packages, which is an easy solution available for other operating systems to use. Without appropriate tooling, offering updates in a timely manner is impractical.
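
To make the idea concrete, here is a minimal sketch of the kind of “stable report” such tooling produces, assuming Python; the versions are hard-coded from the list earlier in this post rather than fetched from an automated source like release-monitoring.org, which is what a real setup would use.

def version_tuple(v):
    # Turn "3.32.1" into (3, 32, 1) so versions compare numerically.
    return tuple(int(part) for part in v.split("."))

# package: (version in Ubuntu 19.04 as of June 7, latest upstream stable)
packages = {
    "gnome-shell":     ("3.32.1", "3.32.2"),
    "mutter":          ("3.32.1", "3.32.2"),
    "glib":            ("2.60.0", "2.60.3"),
    "glib-networking": ("2.60.1", "2.60.2"),
    "libsoup":         ("2.66.1", "2.66.2"),
}

for name, (shipped, upstream) in packages.items():
    if version_tuple(shipped) < version_tuple(upstream):
        print(f"{name}: {shipped} is outdated, {upstream} is available upstream")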

So now that Sebastien has a tool to check for outdated GNOME packages, we can hope the situation might improve. Let’s hope it does. It would be nice to see a future where Ubuntu users receive quality, stable software updates.

Dare to Not Package?

Now, I have no complaints with well-maintained, updated OS packages. The current state of Epiphany updates in Ubuntu is (almost) satisfactory to me (with one major caveat, discussed below). But outdated OS packages are extremely harmful. My post two weeks ago was a sincere request to remove the Epiphany packages from Ubuntu, because they were doing much more harm than good, and, due to extreme lack of trust built up over the course of the past decade, I didn’t trust Ubuntu to fix the problem and keep it fixed. (I am still only “cautiously optimistic” that things might improve, after all: not at all confident.) Bugs that we fixed upstream long ago lingered in the Ubuntu packages, causing our project serious reputational harm. If I could choose between outdated packages and no packages at all, there’s no question that I would greatly prefer the latter.

As long as operating system packages are kept up-to-date — with the latest micro-point release corresponding to the system’s minor GNOME version — then I don’t mind packages. Conscientiously-maintained operating system packages are fine by me. But only if they are conscientiously-maintained and kept up-to-date!

Not packaging would not be a horrible fate. It would be just fine. The future of Linux application distribution is Flatpak (or, less-likely, snap), and I really don’t mind if we get there sooner rather than later.

Regarding OpenJPEG

We have one more issue with Ubuntu’s packaging left unresolved: OpenJPEG. No amount of software updates will fix Epiphany in Ubuntu if it isn’t web-compatible, and to be web-compatible it needs to display JPEG 2000 images. As long as we have Safari without Chromium in our user agent, we have to display JPEG 2000 images, because, sadly, JPEG 2000 is no longer optional for web compatibility. And we cannot change our user agent because that, too, would break web compatibility. We attempted to use user agent quirks only for websites that served JPEG 2000 images, but quickly discovered it was entirely impractical. The only practical way to avoid the requirement to support JPEG 2000 is to give up on WebKit altogether and become yet another Chromium-based browser. Not today!

Some readers complained that we are at fault for releasing a web browser that depends on OpenJPEG, as if this makes us bad or irresponsible developers. Some of the comments were even surprisingly offensive. Reality is: we have no other options. Zero. The two JPEG 2000 rendering libraries are libjasper and OpenJPEG. libjasper has been removed from both Debian and Ubuntu because it is no longer maintained. That leaves OpenJPEG. Either we use OpenJPEG, or we write our own JPEG 2000 image decoder. We don’t have the resources to do that, so OpenJPEG it is. We also don’t have the resources to fix all the code quality bugs that exist in OpenJPEG. Firefox and Chrome are certainly not going to help us, because they are big enough that they don’t need to support JPEG 2000 at all. So instead, we’ve devoted resources to sandboxing WebKit with bubblewrap. This will mitigate the damage potential from OpenJPEG exploits. Once the sandbox is enabled — which we hope to be ready for WebKitGTK 2.26 — then an OpenJPEG exploit will be minimally-useful unless combined with a bubblewrap sandbox escape. bubblewrap is amazing technology, and I’m confident this was the best choice of where to devote our resources. (Update: To clarify, the bubblewrap sandbox is for the entire web process, not just the OpenJPEG decoder.)

Of course, it would be good to improve OpenJPEG. I repeat my previous call for assistance with the OpenJPEG code quality issues reported by Ubuntu, but as before, I only expect to hear crickets.

So unfortunately, we’re not yet at a point where I’m comfortable with Epiphany’s Ubuntu packaging. (Well, the problem is actually in the WebKit packaging. Details.) I insist: distributing Epiphany without support for JPEG 2000 images is harmful and makes Epiphany look bad. Please, Ubuntu, we need you to either build WebKit with OpenJPEG enabled, or else just drop your Epiphany packages entirely, one or the other. Whichever you choose will make me happy. Please don’t accept the status quo!

WOGUE is no friend of GNOME

Alex Diavatis is the person behind the WOGUE account on YouTube. For a while he’s been posting videos about GNOME. I think the latest idea is that he’s trying to “shame” developers into working harder. Speaking as the person who is once again on the other end of his rants, I can say it’s having the opposite effect.

We’re all doing our best, and I’m personally balancing about a dozen different plates trying to keep them all spinning. If any of the plates fall on the floor, perhaps helping with triaging bugs, fixing little niggles or just saying something positive might be a good idea. In fact, saying nothing at all would be better than the sarcasm and the silly videos.

Same UX, different backend

Lately I have been spending a lot of time staring at the Resume/Restart dialog of GNOME Games.

The Dialog

Games currently allows the user to resume most retro games from the exact state the game was left in last time it was quit. To do this, Games saves several files required to restore the state of the emulator core and resume its execution. These files, grouped together, are called a “savestate”. Currently Games has only one savestate per game, which is overwritten every time the user exits the game.

The purpose of my GSoC project is to allow the user to store and manage more than one savestate. These past days I have been writing code that makes Games create a brand new savestate every time the user exits the game.

Before my changes the Resume button would simply load the single savestate which existed for that particular game. Now it loads the latest savestate that was created, so from the user’s perspective nothing has changed (with one tiny exception I’ll explain later).
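
As a rough illustration of the bookkeeping involved (the real implementation is Vala code inside Games, and the paths and file names below are made up for the sketch), the logic boils down to writing each savestate into its own timestamped directory and resuming from the newest one:

import datetime
import pathlib
import shutil

SAVESTATES_DIR = pathlib.Path("~/.local/share/my-game/savestates").expanduser()

def create_savestate(core_state_file):
    # Copy the emulator core's state into a new directory named after the current time.
    name = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    target = SAVESTATES_DIR / name
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy(core_state_file, target / "state.bin")
    return target

def latest_savestate():
    # The Resume button simply picks the newest savestate, if any exists.
    states = sorted(SAVESTATES_DIR.glob("*"), key=lambda p: p.stat().st_mtime)
    return states[-1] if states else None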

My changes are slowly approaching 1000 lines of code (insertions + deletions). I admit it feels a bit unsettling to edit 1000 lines of the app’s code and not see any change from the user’s point of view. It makes me wonder how often this also happens with other software that I’m using :S

Unfortunately, there is one minor downgrade that my changes have introduced. As can be seen from the first image in the post, there is a screenshot of the game behind the Dialog. My changes have altered the sequence of operations required to boot the emulator core, which in turn broke the loading of screenshots. Now, when starting the game there is always a black screen behind the Resume/Restart Dialog 😦

Gloomy Dialog

In the coming days I will finally start working on a primitive UI that displays all of the savestates available when launching a game. The UI will also allow the user to boot the game from a particular savestate on click.

June 07, 2019

Tweaking the parallel Zip file writer

A few years ago I wrote a command line program to compress and decompress Zip files in parallel. It turned out to work pretty well, but it had one design flaw that kept annoying me.

What is the problem?

Decompressing Zip files in parallel is almost trivial. Each file can be decompressed in parallel without affecting any other decompression task. Fire up N processing tasks and decompress files until finished. Compressing Zip files is more difficult to parallelize. Each file can be compressed separately, but the problem comes from writing the output file.

The output file must be written one file at a time. So if one compressed file is being written to the output file, other compression tasks must wait until it finishes before their output can be written to the result file. This data cannot be kept in memory because it is common to have output files that are larger than available memory.

The original solution (and thus the design flaw alluded to) was to have each compressor write its output to a temporary file. The writer would then read the data from the file, write it to the final result file and delete the temporary file.

This works but means that the data gets written to the file system twice. It may also require up to 2× disk space. The worst case happens when you compress only one very big file. On desktop machines this is not such a problem, but on something like a Raspberry Pi the disk is an SD card, which is very slow. You only want to write it once. SD cards also wear out when written to, which is another reason to avoid writes.

The new approach

An optimal solution would have all of these properties:
  1. Uses all CPU cores 100% of the time (except at the end when there are fewer tasks than cores).
  2. Writes data to the file system only once.
  3. Handles files of arbitrary size (much bigger than available RAM).
  4. Has bounded memory consumption.
It turns out that not all of these are achievable at the same time. Or at least I could not come up with a way. After watching some Numberphile videos I felt like writing a formal proof but quickly gave up on the idea. Roughly speaking, since you can't reliably estimate when the tasks will finish and how large the resulting files will be, it does not seem possible to choose an optimal strategy for writing the results out to disk.

The new architecture I came up with looks like this:


Rather than writing its result to a temporary file, each compressor writes it to a byte queue with a fixed maximum size. This was chosen to be either 10 or 100 megabytes, which means that in practice most files will fit the buffer. The queue can be in one of three states: not full, full or finished. The difference between the last two is that a full queue is one where the compression task still has data to compress but it can't proceed until the queue is emptied.

The behaviour is now straightforward. First, compressor tasks are launched just as when decompressing. The file writer part will go through all the queues. If it finds a finished queue it will write it to disk and launch a new task. If it finds a full queue it will do the same, but it must write out the whole stream, meaning it is blocked until the current file has been fully compressed. If the compression takes too long, all other compression tasks will finish (or fill up) but new ones can't be launched, leading to CPU underutilization.
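
Here is a heavily condensed sketch of that scheme, assuming Python's standard library rather than the real C++ implementation, and simplified to launch all workers up front and write entries in a fixed order instead of the dynamic scheduling described above:

import queue
import threading
import zlib

CHUNK = 64 * 1024          # read input files in 64 kB chunks
QUEUE_CHUNKS = 160         # roughly 10 MB of compressed data buffered per entry
SENTINEL = None            # marks the end of one entry's compressed stream

def compress_worker(path, out_queue):
    compressor = zlib.compressobj()
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            data = compressor.compress(chunk)
            if data:
                out_queue.put(data)    # blocks while the queue is "full"
    out_queue.put(compressor.flush())
    out_queue.put(SENTINEL)            # the queue is now "finished"

def parallel_compress(paths, output_path):
    queues = []
    for path in paths:
        q = queue.Queue(maxsize=QUEUE_CHUNKS)
        threading.Thread(target=compress_worker, args=(path, q), daemon=True).start()
        queues.append(q)
    with open(output_path, "wb") as out:
        # The single writer drains one entry at a time; draining the current
        # entry's queue is what unblocks its worker if the queue had filled up.
        for q in queues:
            while (data := q.get()) is not SENTINEL:
                out.write(data)

# Example: parallel_compress(["a.txt", "b.txt"], "out.bin")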

Is it possible to do better?

Yes, but only as a special case. Btrfs supports copying data from one file to another in O(1) time taking only an extra O(1) space. Thus you could write all data to temp files, copy the data to the final file and delete the temp files.
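
For completeness, this is roughly what that special case looks like from user space, assuming a filesystem with reflink support such as Btrfs; concatenating several temp files at offsets would need the related FICLONERANGE ioctl, which is not shown here.

import fcntl

FICLONE = 0x40049409  # ioctl number for FICLONE on Linux (_IOW(0x94, 9, int))

def reflink(src_path, dst_path):
    # Make dst_path share src_path's extents; no data is actually copied.
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# Example: reflink("compressed-entry.tmp", "clone-of-entry")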

Ubuntu keeping up with GNOME stable updates

Recently Michael blogged about epiphany being outdated in Ubuntu. While I don’t think a blog rant was the best way to handle the problem (several of the Ubuntu Desktop members are on #gnome-hackers, for example; it would have been easy to talk to us there), he was right that the Ubuntu package for epiphany was outdated.

Ubuntu does provide updates, even for packages in the universe repository

One thing Michael wrote was

Because Epiphany is in your universe repository, rather than main, I understand that Canonical does not provide updates

That statement is not really accurate.

First, Ubuntu is a community project, not maintained only by Canonical. For example, most of the work done in recent cycles on the epiphany package was from Jeremy (which was one of the reasons the package got outdated: Jeremy had to step down from that work and no-one picked it up).

Secondly, while it’s true that Canonical doesn’t provide official support for packages in universe, we do have engineers who have an interest in some of those components and help maintain them.

Epiphany is now updated (deb & snap)

Going back to the initial problem, Michael was right that in this case Ubuntu didn’t keep up with available updates for epiphany, which has now been resolved:

    • 3.28.5 is now available in Bionic (the current LTS)
    • 3.32.1 is available in the devel series and in Disco (the current stable release)
    • The snap versions are a build of gnome-3-32 git for the stable channel and a build of master in the edge channel.

Snaps and GTK 3.24

Michael also wrote that

The snap is still using 3.30.4, because Epiphany 3.32 depends on GTK 3.24, and that is not available in snaps yet.

Again, the reality is a bit more complex. Snaps don’t have depends like debs do, so by nature they don’t have problems like being blocked by missing depends. To limit duplication we do provide a gnome platform snap, though, and most of our GNOME snaps use it. That platform snap is built from our LTS archive, which is on GTK 3.22, and our snaps are built on a similar infrastructure.

Ken and Marcus are working on resolving that problem by providing an updated gnome-sdk snap, but that’s not available yet. Meanwhile, they changed the snap to build GTK itself instead of using the platform one, which unblocked the updates. Thanks Ken and Marcus!

Ubuntu does package GNOME updates

I saw a few other comments recently along the lines of “Ubuntu does not provide updates for its GNOME components in stable series” which I also wanted to address here.

We do provide stable updates for GNOME components! Ubuntu usually ships its new version with the .1 updates included from the start, and we do try to keep up with stable updates for point releases (especially for the LTS series).

Now, we have a small team and a lot to do, so it’s not unusual to see some delays in the process.
Also while we have tools to track available updates, our reports are currently only for the active distro and not stable series, which is a gap and sometimes leads us to miss some updates.
I’ve now hacked up a stable report and reviewed the current output, and we will work on updating a few components that are currently outdated as a result.

Oh, and as a note, we do tend to skip updates which are “translation updates only”, because Launchpad allows us to get those without needing a stable package upload (the strings are shared per series, so getting the new version/translations uploaded to the most recent series is enough to have them available for the next language pack stable updates).

And as a conclusion: if, as an upstream or a user, you have an issue with a component that is still outdated in Ubuntu, feel free to get in touch with us (IRC/email/Launchpad) and we will do our best to fix the situation.

Domotica because mosquitoes

Mosquitoes

To combat mosquitoes I once had a mosquito net. Those nets are usually awful; they hang from one point in the middle and then extend over your bed, meaning that your bed turns into a cramped igloo tent. One exceptionally hot week in the Netherlands resulted in some mosquito bites. The refreshed mosquito annoyance resulted in the discovery of rectangular mosquito nets, see e.g. this example product (I bought a different one btw). It basically adds straight walls and a ceiling around your bed. The net can be removed pretty easily once summer is over. It’s a pretty cool solution with two drawbacks: a) difficult to combine with a ceiling fan, b) difficult to combine with a hanging lamp.

A fun solution to the lamp problem is to replace the light with a remote-controllable LED panel, mostly because why not. And so the lamp was replaced with an Ikea LED panel, after which the rectangular mosquito net was installed. The LED light is remote controllable. You get a pretty cool remote control with it, one you can mount against a wall but which is magnetic, so you can still take the remote control with you. The Ikea LED light uses a communication protocol called Zigbee. This protocol is one of the various home automation protocols out there. The decision to buy the Ikea LED panel was mostly made out of interest in Zigbee.

Communication methods/protocols

For personal use the most common home automation methods/protocols seem to be:

  • Zigbee
    As mentioned. It works around 2.4GHz (has multiple channels). Battery efficient. Mesh network. It should support a huge number of devices, though the range is less than Z-Wave Plus. Range could also be restricted due to interference with 2.4GHz Wi-Fi.
  • Z-Wave Plus
    Similar to Zigbee. It works around 900MHz; the exact frequency differs per region. Battery efficient. Also uses a mesh network. There’s a huge number of devices using Z-Wave Plus, though they tend to be more expensive than any other method. Due to using 900MHz it goes through walls more easily than e.g. Zigbee or Wi-Fi.
  • Wi-Fi devices
    They mostly use 2.4GHz and support slow Wi-Fi b/g/n. Most of these devices use a chip called ESP8266. These devices are often not as power efficient as Zigbee/Z-Wave Plus, so this seems not to be a good choice for anything battery operated.
  • 433MHz
    Various devices work on the 433MHz frequency. They’re usually battery efficient, though (it seems?) there’s no real standardized protocol, so aside from the frequency it depends on the service. A lot of the cheap ‘turn all these lights on/off’ devices use this frequency.
  • MQTT
    This is purely a protocol, like e.g. HTTP or SSH. Its origin dates back to 1999. It doesn’t define how the data is transported (e.g. wired or wireless, at whatever GHz/MHz); it purely focuses on the messaging itself. A minimal publish example follows this list.
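
To give a feel for how small the protocol surface is, here is a minimal publish/subscribe sketch; it assumes the third-party paho-mqtt library and a made-up broker address, credentials and topic, none of which come from the original post.

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"            # hypothetical broker on the local network
TOPIC = "home/bedroom/lamp/set"    # hypothetical topic

def on_message(client, userdata, message):
    print(message.topic, message.payload.decode())

client = mqtt.Client()
client.username_pw_set("homeassistant", "secret")  # sent in plain text without TLS!
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("home/#")
client.publish(TOPIC, "ON")
client.loop_forever()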

Security

Although a few methods use encryption, it doesn’t mean that they’re secure. E.g. Zigbee apparently is NOT secure against replay attacks, though the newer Zigbee v3 should be. The Wi-Fi devices by default are often “cloud connected”. The 433MHz devices lack any encryption and it’s quite easy to influence things. MQTT has authentication and it could use encryption, but it appears encryption is often not available on devices. Without encryption, probably anyone on the same network can just use a network sniffer to capture the MQTT login details. The list of security issues is pretty extensive. It’s best to always assume someone could take over anything connected to any Domotica setup which has wireless functionality. The same goes for a wire, but at least that requires slightly more effort.

Free software

There’s various free software available for Domotica. A list of a few nice ones:

  • Home Assistant
    Tries to ensure it’s NOT “cloud connected”. You need a Raspberry Pi or some machine that is on 24/7 to run this on. This software supports loads of devices, including loads of Zigbee hubs, MQTT, Z-Wave Plus controllers, etc. There’s Hass.io for easy installation on a Raspberry Pi.
    Aside from Home Assistant there are loads of alternative free software options.
  • Tasmota firmware
    Alternative firmware for ESP8266 devices. It seems pretty much all Wi-Fi b/g/n-only devices have an ESP8266 chip in them. The Tasmota firmware is NOT “cloud connected”, though it does rely somewhat on the internet, e.g. NTP for time synchronization. It adds support for MQTT, KNX, rules, timers, etc. The rules and timers allow the ESP8266 device itself to know when to do something, instead of having Home Assistant tell the device to do something.
    The big drawback is that it’s not easy to flash this firmware on most of the ESP8266 devices. It usually means taking something apart, soldering, etc. Fortunately the firmware is able to auto-update itself, making it a one-off hassle.
    Aside from Tasmota I noticed a few other similar ESP8266 firmware options, each with their benefits and drawbacks. Tasmota seems the most used/popular.
  • Zigbee2mqtt
    This software consists of two parts: one part is the real Zigbee2mqtt software, the other part is firmware.
    The combination of both allows you to directly control Zigbee devices instead of needing to rely on the various Zigbee hubs. Often a Zigbee hub can only control a limited number of devices, usually only within the same brand. Zigbee2mqtt supports way more devices than most of the Zigbee hubs. It then translates this into MQTT.
    The drawback is that you need various components plus some tinkering, though the website explains what to do quite well. The site suggests using a CC2531 chip, though I prefer the CC2530 chip, as the CC2530 allows you to use an antenna. The CC2531 is easier to use and has an integrated antenna (worse range, 30m line of sight). I highly prefer the better range (60m for the CC2530). Hopefully within 6 months a better chip solution will become available.
    Another drawback is the limited number of devices directly supported by the CC2530 or CC2531 chips. After 20-25 directly connected devices you might run into issues. To use more devices you’ll need some Zigbee routers. It’s best to plan this ahead and plan for some non-battery operated Zigbee devices (Zigbee routers).
    For using Zigbee2mqtt, one possibility would be to have a chip with Zigbee2mqtt connected to a Raspberry Pi (or any other device with Home Assistant); another option is to combine it with a Wi-Fi chip and operate it independently.
    Aside from translating Zigbee to MQTT, the firmware also allows you to create a Zigbee router, to extend the range of your Zigbee network. The cost of such a router is only a few EUR at most. Easier solutions are either any Zigbee device connected to electricity (most of those are then routers), or a signal enhancer from Ikea for 9.99 EUR.

Devices

I noticed a few nice devices. There might be many more nice ones out there. One way of figuring out which options are out there is to browse the supported devices list by zigbee2mqtt.

  • Zigbee Ikea TRÅDFRI
    I like their remote, their LED panel and their LED lamps. They also have various sensors, but they seem quite big compared to other solutions.
  • Zigbee Xiaomi/Aqara sensors
    They used to be dirt cheap until AliExpress raised all of the prices. The ones I like are their window/door sensors, shock sensors, motion sensor plus their magic cube.
  • Wi-Fi BlitzWolf BW-SHP6
    I like their plugs because they’re low cost, have a small footprint, and still have a button on them, keeping a physical way to turn the device on. The BW-SHP6 is a small EU plug which doesn’t take up too much space and allows up to 2300 Watt. There’s also the BW-SHP2, but I’d use the Osram smart+ plug over the BW-SHP2. You’re able to flash the Tasmota firmware on both the BW-SHP2 and the BW-SHP6, though it’s quite difficult for the BW-SHP6.
  • Osram smart+ plug
    This is a Zigbee plug with a button (Ikea one lacks a button).
    Drawbacks: bigger than the BW-SHP6
    Benefits: it speaks Zigbee, it’s a Zigbee router (extends your Zigbee range), it supports 16 Ampere/3600 Watt, I hope/guess the standby usage is lower
    You can buy it pretty cheaply on Amazon.de, use Keepa.com to check if you have a good price. New it’s often 15 EUR. Osram also sells used ones on Amazon; it seems they’re basically new but without a box. The price for those are around 11.50 EUR.
  • Sonoff (Wi-Fi)
    Loads and loads of different low cost options.  They’re supported by the Tasmota firmware. Their Wi-Fi switches are about 5.50 EUR if you buy 3; a possible use case  is to make these part of an extension cord.
  • Shelly (Wi-Fi)
    These devices are small enough to hide them within your wall outlet. They’re supported by the Tasmota firmware. Do look at the Tasmota Wiki as the Shelly hardware has some issues. For wall sockets I’d prefer an obvious device over something hidden within the outlet.
  • QuinLED (Wi-Fi)
    These aren’t devices, more like a complete design for a device. The website extensively explains not only the device designs but also LED strip advice, recommendations, and guidelines on what tools and equipment to buy to create these devices, etc.
  • Z-Wave Plus devices
    There are so many nice devices which do not seem to have a Wi-Fi or Zigbee equivalent, e.g. 300 Watt dimmers (Zigbee seems limited to 30 Watt). The devices are way more expensive though; you easily pay 70+ EUR per device/sensor.

Price

Price-wise, it might be good to buy a device (“hub”) with support for all the Zigbee devices, Z-Wave, Wi-Fi, 433MHz, etc. Building everything yourself might not even be cheaper if you’re only building it once, because you might need to buy loads of things: soldering equipment, a wire stripper, possibly even a 3D printer. That would be less fun though!

That said, it’s a bit unfortunate that really integrating everything together requires so much knowledge. I’d like a more out-of-the-box type of solution: something I’d be comfortable giving to family, that’s easy to use, works well and is still free software.

Gnome-gitg Split View feature progress. [GSoC]

Introducing split view in gitg is made up of many small tasks, just like any other software feature.

One such task is: gitg should be able to detect binary and image diffs, because if these are in the diff then split view simply does not make sense.

With Albfan’s mentoring I was able to complete this task in a mere 1-2 days and get it merged: https://gitlab.gnome.org/GNOME/gitg/merge_requests/85 🙂

Now, this task was really less about code for me and more about research; the real challenge was to figure out some hints which might already be in gitg. And finally, after looking at lots of code, I found how it can be done.

Gitg already has two functions implemented which handle images and binaries, for example: https://gitlab.gnome.org/GNOME/gitg/merge_requests/85/diffs#556709be8fab13c1f74f3f6557627ee7089bcb00_264_267

All I needed to do was find these functions and then hide the split-view toggle buttons, and it works like a charm 🙂

Now the next sub-task is to add a drawing area between these two diff views, which will have curves drawn on it like we have in Meld.

These curves will make the split-view look more interesting and intuitive.

The port from Meld’s implementation is underway, and I have been contributing to Albfan’s pet project https://gitlab.gnome.org/albfan/diferencia for this purpose.

So far I have been able to refactor it so that it can be ported to gitg’s needs easily.

The prototype looks something like this. Right now I am adding the data manually with this handy UI; when ported to gitg, the diff commits will add the data automatically, so basically only this model needs to be implemented and the data source can be anything. I also added support for 3-way diff, which will be needed in the case of a 3-way merge, one of the other sub-tasks that needs to be poured into the split-view feature.

https://gitlab.gnome.org/albfan/diferencia/merge_requests/7

So basically right now there are 2-3 blockers, which I added to my task list in this MR. I believe they will be solved by the end of this week, and then we can start work on porting it to gitg 🙂

I am available as [Kirito-3] in #newcomers and #gitg, if anyone wants to know more 🙂

Fingers crossed 😉

GTK 3 Frame Profiler

I back-ported the GTK 4 frame-profiler by Matthias to GTK 3 today. Here is an example of a JavaScript application using GJS and GTK 3. The data contains mixed native and JS stack-traces along with compositor frame information from gnome-shell.

What is going on here is that we have binary streams (although usually captured into a memfd) that come from various systems. GJS is delivered a FD via GJS_TRACE_FD environment variable. GTK is delivered a FD via GTK_TRACE_FD. We get data from GNOME Shell using the org.gnome.Sysprof3.Profiler D-Bus service exported on the org.gnome.Shell peer. Stack-traces come from GJS using SIGPROF and the SpiderMonkey profiler API. Native stack traces come from the Linux kernel’s perf_event_open system. Various data like CPU frequency and usage, memory and such come from /proc.
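
As a rough illustration of the FD-passing mechanism (this is not code from Sysprof or GJS themselves; the memfd usage and the script name are assumptions for the sketch), this is approximately what handing a process a capture FD through an environment variable looks like:

import os
import subprocess

# Create a memfd for the child to write its capture into; flags=0 so the
# descriptor is not close-on-exec and can be inherited.
fd = os.memfd_create("gjs-trace", 0)

env = dict(os.environ, GJS_TRACE_FD=str(fd))
subprocess.run(
    ["gjs", "my-app.js"],     # hypothetical GJS application
    env=env,
    pass_fds=(fd,),           # keep the FD open (with the same number) in the child
    check=False,
)

# Afterwards the capture data can be read back from the memfd.
os.lseek(fd, 0, os.SEEK_SET)
print("captured", len(os.read(fd, 1 << 20)), "bytes")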

Muxing all the data streams uses sysprof_capture_writer_cat() which knows how to read data frames from supplemental captures and adjust counter IDs, JIT-mappings, and other file-specific data into a joined capture.

A quick reminder that we have a Platform Profiling Initiative in case you’re interested in helping out on this stuff.

June 06, 2019

Sysprof Developments

This week I spent a little time fixing up a number of integration points with Sysprof and our tooling.

The libsysprof-capture-3.a static library is now licensed under the BSD 2-clause plus patent to make things easier to consume from all sorts of libraries and applications.

We have a MR for GJS to switch to libsysprof-capture-3.a and improve plumbing so Sysprof can connect automatically.

We also have a number of patches for GLib and GTK that improve the chances we can get useful stack-traces when unwinding from the Linux kernel (which perf_event_open does).

A MR for GNOME Shell automatically connects the GJS profiler which is required as libgjs is being used as a library here. The previous GJS patches only wire things up when the gjs binary is used.

With that stuff in place, you can get quite a bit of data correlated now.

# Logout, Switch to VT2
sysprof-cli -c "gnome-shell --wayland --display-server" --gjs --gnome-shell my-capture.syscap

If you don’t want mixed SpiderMonkey and perf stack-traces, you can use --no-perf. You can’t really rely on sample rates between two systems at the same time anyway.

With that in place, you can start correlating more frame data.

June 05, 2019

Gthree is alive!

A long time ago in a galaxy far far away I spent some time converting three.js into a Gtk+ library, called gthree.

It was never really used for anything and wasn’t really product quality. However, I really like the idea of having an easy way to add some 3D effects to Gnome apps.

Also, time has moved on and three.js got cool new features like a PBR material and GLTF support.

So, last week I spent some time refreshing gthree to the latest shaders from three.js, adding a GLTF loader and a few new features, porting to Meson and doing some general cleanup/polish. I still want to add some more features like skinning, morphing and animations, but the new rendering alone shows off just how cool this stuff is:

Here is a screencast showing off the model viewer:

Cool, eh!

I have to do some other work too, but I hope to get some more time in the future to work on gthree and to use it for something interesting.

June 04, 2019

OpenShift 4: Streamlining RHEL as Kubernetes-native OS

Been a while since I’ve blogged here; going to try to do so more often! For quite a while now in the CoreOS group at Red Hat I’ve been part of a team working to create RHEL CoreOS, the cluster-managed operating system that forms the base of the just-released OpenShift 4.

With OpenShift 4 and RHEL CoreOS, we have created a project called machine-config-operator – but I like to think of it as the “RHEL CoreOS operator”.  This is a fusion of technologies that came from the CoreOS acquisition (Container Linux, Tectonic) along with parts of RHEL Atomic Host, but with a lot of brand new code as well.

What the MCO (machine-config-operator) does is pair with RHEL CoreOS to manage operating system updates as well as configuration in a way that makes the OS feel like a Kubernetes component.

This is a radically different approach than the OpenShift 3.x days, where the mental model was to provision + configure the OS (and container runtime), then provision a cluster on top.   With OpenShift 4 using RHCOS and the MCO, the cluster controls the OS.

If you haven’t yet, I encourage you to dive right in and play around with some of the example commands from the docs as well as examples from the upstream repository.  There is also my Devconf.cz 2019 talk (slightly dated now).

The release of 4.1 of course is just a beginning – there’s a whole lot more to do to bridge the worlds of the “traditional” operating system and Kubernetes/OpenShift.  For example, in git master of the MCO (for the next release after 4.1) we landed support for kernel arguments.  I think it’s quite cool to be able to e.g. oc edit machineconfig/50-nosmt, change the KernelArguments field in the MachineConfig CRD, add e.g. nosmt (or any other karg) and watch that change incrementally roll out across the cluster, reconciling the OS state just like any other Kubernetes object.

The links above have lots more detail for those interested in learning more – I’ll just link again operating system updates as I think that one is particularly interesting.

This release of OpenShift 4.1 is laying a powerful new foundation for things to come, and I’m really proud of what the teams have accomplished!

 

June 03, 2019

pictie, my c++-to-webassembly workbench

Hello, interwebs! Today I'd like to share a little skunkworks project with y'all: Pictie, a workbench for WebAssembly C++ integration on the web.

loading pictie...

(JavaScript disabled, no pictie demo. See the pictie web page for more information.)

wtf just happened????!?

So! If everything went well, above you have some colors and a prompt that accepts Javascript expressions to evaluate. If the result of evaluating a JS expression is a painter, we paint it onto a canvas.

But allow me to back up a bit. These days everyone is talking about WebAssembly, and I think with good reason: just as many of the world's programs run on JavaScript today, tomorrow much of it will also be in languages compiled to WebAssembly. JavaScript isn't going anywhere, of course; it's around for the long term. It's the "also" aspect of WebAssembly that's interesting, that it appears to be a computing substrate that is compatible with JS and which can extend the range of the kinds of programs that can be written for the web.

And yet, it's early days. What are programs of the future going to look like? What elements of the web platform will be needed when we have systems composed of WebAssembly components combined with JavaScript components, combined with the browser? Is it all going to work? Are there missing pieces? What's the status of the toolchain? What's the developer experience? What's the user experience?

When you look at the current set of applications targeting WebAssembly in the browser, mostly it's games. While compelling, games don't provide a whole lot of insight into the shape of the future web platform, inasmuch as there doesn't have to be much JavaScript interaction when you have an already-working C++ game compiled to WebAssembly. (Indeed, for much of the incidental interaction with JS that is currently necessary -- bouncing through JS in order to call WebGL -- people are actively working on removing all of that overhead, so that WebAssembly can call platform facilities (WebGL, etc) directly. But I digress!)

For WebAssembly to really succeed in the browser, there should also be incremental stories -- what does it look like when you start to add WebAssembly modules to a system that is currently written mostly in JavaScript?

To find out the answers to these questions and to evaluate potential platform modifications, I needed a small, standalone test case. So... I wrote one? It seemed like a good idea at the time.

pictie is a test bed

Pictie is a simple, standalone C++ graphics package implementing an algebra of painters. It was created not to be a great graphics package but rather to be a test-bed for compiling C++ libraries to WebAssembly. You can read more about it on its github page.

Structurally, pictie is a modern C++ library with a functional-style interface, smart pointers, reference types, lambdas, and all the rest. We use emscripten to compile it to WebAssembly; you can see more information on how that's done in the repository, or check the README.

Pictie is inspired by Peter Henderson's "Functional Geometry" (1982, 2002). "Functional Geometry" inspired the Picture language from the well-known Structure and Interpretation of Computer Programs computer science textbook.

prototype in action

So far it's been surprising how much stuff just works. There's still lots to do, but just getting a C++ library on the web is pretty easy! I advise you to take a look to see the details.

If you are thinking of dipping your toe into the WebAssembly water, maybe take a look also at Pictie when you're doing your back-of-the-envelope calculations. You can use it or a prototype like it to determine the effects of different compilation options on compile time, load time, throughput, and network traffic. You can check if the different binding strategies are appropriate for your C++ idioms; Pictie currently uses embind (source), but I would like to compare to WebIDL as well. You might also use it if you're considering what shape your C++ library should have to have a minimal overhead in a WebAssembly context.

I use Pictie as a test-bed when working on the web platform: for the weakref proposal which adds finalization, for leak detection, and for working on the binding layers around Emscripten. Eventually I'll be able to use it in other contexts as well, with the WebIDL bindings proposal, typed objects, and GC.

prototype the web forward

As the browser and adjacent environments have come to dominate programming in practice, we lost a bit of the delightful variety from computing. JS is a great language, but it shouldn't be the only medium for programs. WebAssembly is part of this future world, waiting in potentia, where applications for the web can be written in any of a number of languages. But this future world will only arrive if it "works" -- if all of the various pieces, from standards to browsers to toolchains to virtual machines, fit together in some kind of sensible way. Now is the early phase of annealing, when the platform as a whole is actively searching for its new low-entropy state. We're going to need a lot of prototypes to get from here to there. In that spirit, may your prototypes be numerous and soon replaced. Happy annealing!

June 02, 2019

Breaking apart Dell UEFI Firmware CapsuleUpdate packages

When firmware is uploaded to the LVFS we perform online checks on it. For example, one of the tests is looking for known badness like embedded UTF-8/UTF-16 BEGIN RSA PRIVATE KEY strings. As part of this we use CHIPSEC (in the form of chipsec_util -n uefi decode) which searches the binary for a UEFI volume header which is a simple string of _FVH and then decompresses the volumes which we then read back as component shards. This works well on plain EDK2 firmware, and the packages uploaded by Lenovo and HP which use IBVs of AMI and Phoenix. The nice side effect is that we can show the user what binaries have changed, as the vendor might have accidentally forgotten to mention something in the release notes.

The elephants in the room were all the hundreds of archives from Dell, which could not be loaded by chipsec because no volume header was detected. I spent a few hours last night adding support for these archives, and the secret is here:

  1. Decompress the firmware.cab archive into firmware.bin, disregarding the signing and metadata.
  2. If CHIPSEC fails to analyse firmware.bin, look for a > 512kB decompress-able Zlib section somewhere after the capsule header, actually in the PE binary.
  3. The decompressed blob is in PFS format, which seems to be some Dell-specific format that’s already been reverse engineered.
  4. The PFS blob is not further compressed and is in one continuous block, and so the entire PFS volume can be passed to chipsec for analysis.

The Zlib start offset seems to jump around for each release, and I’ve not found any information in the original PE file that indicates the offset. If anyone wants to give me a hint to avoid searching the multimegabyte blob for two bytes (and then testing if it’s just chance, or indeed a Zlib stream…) I would be very happy, even if you have to remain anonymous.
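
For the curious, the search-and-test step can be sketched in a few lines; the 0x78 0xDA magic (a common Zlib stream header) and the 512kB threshold follow the description above, but treat this as an illustration rather than the actual LVFS code:

import sys
import zlib

MIN_SIZE = 512 * 1024   # anything smaller is unlikely to be the PFS blob

def find_pfs_blob(data):
    offset = 0
    # 0x78 0xDA is a common Zlib stream header (32 kB window, best compression).
    while (offset := data.find(b"\x78\xda", offset)) != -1:
        try:
            blob = zlib.decompressobj().decompress(data[offset:])
        except zlib.error:
            blob = b""       # just chance, not a real Zlib stream
        if len(blob) >= MIN_SIZE:
            print("found %d byte blob at offset %#x" % (len(blob), offset))
            return blob
        offset += 1
    return None

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        find_pfs_blob(f.read())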

So, to sum up:

CapsuleHeader
  PE Binary
    Zlib stream
      PFS
        FVH
          PE DXEs
          PE PEIMs
          …

I’ll see if chipsec upstream wants a patch to do this as it’s probably useful outside of the LVFS too.

June 01, 2019

Looking at why the Meson crowdfunding campaign failed

The crowdfunding campaign to create a full manual for the Meson build system ended yesterday. It did not reach its 10 000€ goal so the book will not be produced and instead all contributed money will be returned. I'd like to thank everyone who participated. A special thanks goes out to Centricular for their bronze corporate sponsorship (which, interestingly, was almost 50% of the total money raised).

Nevertheless the fact remains that this project was a failure and a fairly major one at that since it did not reach even one third of its target. This can not be helped, but maybe we can salvage some pieces of useful information from the ruins.

Some statistics

There were a total of 42 contributors to the campaign. Indiegogo says that a total of 596 people visited the project when it was live. Thus roughly 7% of all people who came to the site participated. It is harder to know how many people saw information about the campaign without coming to the site. Estimating based on the blog's readership, Twitter reach and other sources puts the number at around 5000 globally (with a fairly large margin of error). This would indicate a conversion rate of 1% of all the people who saw any information about the campaign. In reality the percentage is lower, since many of the contributors were people who did not really need convincing. Thus the conversion rate is probably closer to 0.5% or even lower.

The project was set up so that 300 contributors would have been enough to make the project a success. Given the number of people using Meson (estimated to be in the tens of thousands) this seemed like a reasonable goal. Turns out that it wasn't. Given these conversion numbers you'd need to reach 30 000 – 60 000 people in order to succeed. For a small project with zero advertising budget this seems like a hard thing to achieve.
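
The back-of-the-envelope arithmetic behind those numbers, using the figures quoted above:

contributors = 42
site_visitors = 596
estimated_reach = 5000      # rough estimate, large margin of error
needed_contributors = 300

print(f"site conversion:    {contributors / site_visitors:.1%}")    # ~7%
print(f"overall conversion: {contributors / estimated_reach:.2%}")  # ~0.8%

# Reach needed for 300 contributors at a 1% and a 0.5% conversion rate:
for rate in (0.01, 0.005):
    print(f"at {rate:.1%} conversion you need to reach ~{needed_contributors / rate:,.0f} people")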

On the Internet everything drowns

Twitter, LinkedIn, Facebook and the like are not very good channels for spreading information. They are firehoses where any one post has an active time of maybe one second if you are lucky. And if you are not, the platforms' algorithms will hide your post because they deem it "uninteresting".  Sadly filtering seems to be mandatory, because not having it makes the firehose even more unreadable. The only hope you have is that someone popular writes about your project. In practice this can only be achieved via personal connections.

Reddit-like aggregation sites are not much better, because you have basically two choices: either post on a popular subreddit or an unpopular one. In the first case your post probably won't even make it onto the front page; all it takes is a few downvotes because the post is "not interesting" or "does not belong here". A post that is not on the front page might as well not even exist; no-one will read it. Posting in a non-popular area is no better. Your post is there, but it will reach 10 people and out of those maybe 1 will click on the link.

News sites are great for getting the information out, but they suffer from the same popularity problem as everything else. A distilled (and only slightly snarky) explanation is that news sites write mainly about two things:
  1. Things they have already written about (i.e. have deemed popular)
  2. Things other news sites write about (i.e. that other people have deemed popular)
This is not really the fault of news sites. They are doing their best on a very difficult job. This is just how the world and popularity work. Things that are popular get more popular because of their current popularity alone. Things that are not popular are unlikely to ever become popular because of their current unpopularity alone.

Unexpected requirements

One of the goals of this campaign (or experiment, really) was to see if selling manuals would be a sustainable way to compensate FOSS project developers and maintainers for their work. If it worked, this would be a good way to provide compensation, because there are already established legal practices for selling books across the world. Transferring money in other ways (donations etc.) is difficult and there may be legal obstacles.

Based on this one experiment this does not seem to be a feasible approach. Interestingly multiple people let me know that they would not be participating because the end result would not be released under a free license. Presumably the same people do not complain to book store tellers that "I will only buy this Harry Potter book if, immediately after my purchase, the full book is released for free on the Internet". But for some reason there is a hidden assumption that because a person has released something under a free license, they must publish everything else they do under free licenses as well.

These additional limitations make this model of charging for docs really hard to pull off. There is no possibility of steady, long term money flow because once a book is out under a free license it becomes unsellable. People will just download the free PDF instead. A completely different model (or implementation of the attempted model) seems to be needed.

So what happens next?

I don't really know. Maybe the book can get published through an actual publisher. Maybe not. Maybe I'll just take a break from the whole thing and get back to it later. But to end on some kind of a positive note I have extracted one chapter from the book and have posted it here in PDF form for everyone to read. Enjoy.

What is a Platform?

Often when looking for apps on Linux, one might search for something “cross-platform”. What does that mean? Typically it refers to running on more than one operating system, e.g. Windows, macOS, and GNU/Linux. But what are developers really targeting when they target GNU/Linux, since there’s such a diverse ecosystem of environments with their own behaviors? Is there really a “Linux Desktop” platform at all?

The Prerequisites

When developing an app for one platform, there are certain elements you can assume are there and able to be relied on. This can be low-level things like the standard library, or user-facing things like the system tray. On Windows you can expect the Windows API or Win32, and on macOS you can expect Cocoa. With GNU/Linux, the only constants are the GNU userspace and the Linux kernel. You can’t assume systemd, GLib, Qt, or any of the other common elements will be there for every system.

What about freedesktop? Even then, not every desktop follows all of the specifications within freedesktop, such as the Secrets API or the system tray. So making assumptions based on targeting freedesktop as a platform will not work out.

To be a true platform, the ability to rely on elements being stable for all users is a must. By this definition, the “Linux Desktop” itself is not a platform, as it does not meet the criteria.

Making Platforms Out of “The Linux Desktop”

It is possible to build fully realized platforms on top of GNU/Linux. The best example of this is elementary OS. Developers targeting elementary OS know that different elements like Granite will be present for all users of elementary OS. They also know elements that won’t be there, such as custom themes or a system tray. Thus, they can make decisions and integrate things with platform specifics in mind. This ability leads to polished, well-integrated apps on the AppCenter and developers need not fear a distro breaking their app.

To get a healthy app development community for GNOME, we need to be able to have the same guarantees. Unfortunately, we don’t have that. Because GNOME is not shipped by upstream, downstreams take the base of GNOME we target and remove or change core elements. This can be the system stylesheet or something even more functional, like Tracker (our file indexer). By doing this, the versions of GNOME that reach users break the functionality or UX in our apps. Nobody can target GNOME if every instance of it can be massively different from another. Just as no one can truly target the “Linux Desktop” due to the differences in each environment.

How do we solve this, then? To start, the community idea of the “Linux Desktop” as a platform needs to be moved past. Once it’s understood that each desktop is a target that developers aim for, it will be easier for users to find what apps work best for their environment. That said, we need to have apps for them to find. Improving the development experience for various platforms will help developers in making well-integrated apps. Making sure they can safely make assumptions is fundamental, and I hope that we get there.

More little testing

Back in March, I wrote about µTest, a Behavior-Driven Development testing API for C libraries, and that I was planning to use it to replace the GLib testing API in Graphene.

As I was busy with other things in GTK, it took me a while to get back to µTest—especially because I needed some time to set up a development environment on Windows in order to port µTest there. I managed to find some time over various weekends and evenings, and ended up fixing a couple of small issues here and there, to the point that I could run µTest’s own test suite on my Windows 10 box, and then get the CI build job I have on Appveyor to succeed as well.

Setting up MSYS2 was the most time consuming bit, really

While at it, I also cleaned up the API and properly documented it.

Since depending on gtk-doc would defeat the purpose, and since I honestly dislike Doxygen, I was looking for a way to write the API reference and publish it as HTML. As luck would have it, I remembered a mention on Twitter about Markdeep, a self-contained bit of JavaScript capable of turning a Markdown document into a half decent HTML page client side. Coupled with GitHub pages, I ended up with a fairly decent online API reference that also works offline, falls back to a Markdown document when not running through JavaScript, and can get fixed via pull requests.

Now that µTest is in a decent state, I ported the Graphene test suite over to it, and now I can run it on Windows using MSVC—and MSYS2, as soon as the issue with GCC gets fixed upstream. This means that, hopefully, we won’t have regressions on Windows in the future.

The µTest API is small enough, now, that I don’t plan major changes; I don’t want to commit to full API stability just yet, but I think we’re getting close to a first stable release soon; definitely before Graphene 1.10 gets released.

In case you think this could be useful for you: feedback, in the form of issues and pull requests, is welcome.

May 31, 2019

Profiling GNOME Shell

As of today, Mutter and GNOME Shell support Sysprof-based profiling.

Christian wrote a fantastic piece exposing what happened to Sysprof during this cycle already, and how does it look like now, so I’ll skip that.

Instead, let me focus on what I contributed the most: integrating Mutter/GNOME Shell to Sysprof.

Let’s start with a video:

Sysprof 3 profiling GNOME Shell in action

When it comes to drawing, GTK- and Clutter-based applications follow a settled drawing cycle:

  • Size negotiation: position screen elements, allocates sizes;
  • Paint: draw the actual pixels;
  • Pick: find the element below the pointer;

What you see in the video above is a visual representation of this cycle happening inside GNOME Shell, although a bit more detailed.

With it, we can see when and why a frame was missed, what was probably happening when it occurred, and, perhaps the most important aspect, we have actual metrics about how we’re performing.

Of course, there is room for improvements, but I’m sure this already is a solid first step. Even before landing, the profiling results already provided us with various insights of potential improvements. Good development tools like this result in better choices.

Enjoy!

May 27, 2019

Improving #newcomer experience at Gnome

Being an #old newcomer myself, I am still lacking knowledge here and there about GNOME. So just a few weeks ago I was trying to learn more about our GitLab instance; thanks to csoriano for not just answering my query but also for letting me help him in this newcomer initiative.

The main motto is:-

“Remove every possible obstacle in the way of a newcomer’s first contribution”

So basically Carlos explained that he also wants someone who has experienced this newcomer journey recently and who can help him know what can be improved.

I really like the fact that, even though I have been in GNOME for just 8 months, I can work with the President of the GNOME Foundation himself 😛; this is what I love about open source.

So when Carlos got some time out of his busy schedule, I was able to discuss a few points with him that we can improve:

1. Remove Bugzilla links and mentions from Gnome wiki

2. It would be cool to have a project recommendation system like whatcanidoforfedora.org

3. Focus on IRC channels where people can get help

etc.

I also proposed the idea of creating pseudo pet projects, for example:

https://github.com/ibqn/libgd-vala-samples

or mention the existing pet projects which developers use to experiment with:-

https://gitlab.gnome.org/exalm/sidebarexample

https://gitlab.gnome.org/albfan/textviewhyperlinks [This one was my starting point]

The point of pseudo projects is that, for newcomers, it’s really hard to jump straight away into the huge codebase of a project where they might not even know things like Vala, GTK, the code style, …

So I believe these pet projects can help newcomers play around with the technology and learn how the workflow goes in GNOME regarding GitLab, coding style, etc.

Carlos liked the idea but it’s complex, hence for now the main focus is to improve the guides, make sure they are up to date and ensure they work.

If any newcomer wants to give more feedback, or maybe any existing developer wants to give feedback from their past journey, please do so in #newcomers 🙂

For example, exalm gave this one:

 “A common obstacle is that people clone projects before forking them on GitLab, make changes, and then are told that actually they needed to fork to do an MR”

Here we need to identify things which we can improve and innovate so that our newcomers can easily get on board in the community for contributions.

Lastly I will say: although I am in GSoC 2019, I really love the fact that there are so many ways in which I can contribute to this lovely community, and one of them is this. Finally, thanks to Carlos who got me involved in this to help him 🙂

Work Done So Far:

1. For starters, I took it upon myself to replace Bugzilla in the newcomers guide with the issue tracker on GitLab.

Although I removed as much of it as I could, if anyone still finds a mention of Bugzilla being heavily relied upon in the newcomers guide, please help improve it.

Contributing to GNOME Boxes


GNOME Boxes  is used to view, access, and manage remote and virtual machines. Currently it is able to do either express-installations on a downloaded ISO or to download an ISO and offer the option to express-install it.

I stumbled upon the project ‘Improve GNOME Boxes express-installations by adding support for tree-based installations’. It required knowledge of object-oriented programming, and I had taken such a course just last semester, including a project for it. Even though I had used Java primarily for that project, it didn’t take me long to understand Vala. I quickly jumped into the IRC channel of GNOME Boxes, introduced myself to the mentors (one of them being Felipe Borges), got the source code and built it. Initially it took me some time to understand how things were working, but the mentors were patient enough to answer even my dumbest queries. I then started working on a few bugs, which helped me understand the front-end part of the project. I’ve fixed around 6 issues so far and am now working on understanding how the back-end works, mainly how we use libosinfo to get certain information about operating systems, hypervisors and the (virtual) hardware devices they can support.
Happy Coding :)

Why you can and should apply for the board

It’s GNOME board elections time!

Community members can apply to become GNOME Foundation directors, and the process is quite easy: it's just a matter of sending an email to two mailing lists. We could improve the level of participation though, and having a good number of applicants is important for a healthy Foundation: the more applicants there are, the more likely it is that different views, skills and working areas are represented.

I believe one of the big factors behind the low participation in elections is a lack of knowledge about what the board does and how much of a commitment it is. Because of that, we question whether we are ready to take on the position. While the minutes published by the board are an excellent tool (and I really need to thank Phillip and Federico here), minutes usually don't tell the whole story.

In light of recent discussions on the foundation list, I realized we also haven't yet explained how the work of the board has shifted to a more strategic role, and what that entails.

So let me try to improve that, and explain why you should apply and why you definitely can apply.

What does the work of the board look like nowadays?

To understand what kind of work we do, let me split it into two types: work that is about execution, and work that is about strategy.

Examples of execution based tasks are:

  • Sending an email to a certain committee.
  • Moving this wiki page to this other wiki page.
  • Cleaning up minutes.
  • Approving/declining small budget stuff such as stickers, release dinners, hackfests snacks, etc.
  • Approving/declining big budget stuff such as GUADEC, GNOME Asia, services (CI, map tiles, etc.), hardware, etc.
  • Approving/declining legal things such as trademarks usage.

Examples of strategic tasks are:

  • Defining a policy for travel sponsorship.
  • Defining what GNOME software is.
  • Defining committee responsibilities.
  • Defining yearly goals.
  • Defining trademarks and their usage.
  • Defining budget spending policy for small events.
  • Defining events bidding process.

You can already see that strategic tasks are usually done to support execution-based tasks. However, execution-based tasks are what actually deliver, and without them strategic tasks don't have much use. Execution-based tasks are about doing; strategic tasks are about thinking and creating.

The difference between the two is quite significant. Execution-based tasks take around 1-2 weeks to complete, are quite independent of other people, and have clear steps. Strategic tasks take between 6 months and 2 years to complete, depend on other people, involve several discussions across various meetings and emails, are considerably more complex, and usually touch different areas such as community, legal, and market knowledge. Most of the time you learn as you go, so while you gain significant experience in those areas, the ramp-up to working on strategic tasks takes a while.

Historically, the board has been mostly execution-based. The reason is that execution-based tasks need to be done no matter what, so strategic tasks were delayed, or never done, in order to keep things running. I would say the balance was around 90%-10% between execution-based and strategic tasks respectively.

This last year that has changed significantly. The Foundation staff are doing the heavy lifting on the execution-based tasks, so the job left for the board is now mostly strategic. The balance has flipped to a 10%-90% split between execution-based and strategic tasks. This is already in place; there is almost nothing left to hand off to the staff nowadays.

The board's tasks over the last ~6 months have focused on those that are usually up to a board to do; most of them cannot be done by the staff due to legal or general charity guidance, such as setting the compensation of the ED or setting the goals of the Foundation.

Why can you apply?

Alright, so you have some motivation to be on the board. But maybe reading the candidacy emails or the section above makes you think that the tasks are too complex, or that the commitment to be on the board might be too big for you. Well, I want to debunk that here.

One particularity of the board is that everyone takes on the things they want to take on, and no more than that. There isn't a “you didn't do anything for a month, you should do this other task” kind of situation. Some of us simply provide opinions here and there, either because we don't have time, don't have the knowledge, or don't have the energy or interest to learn it.

And that is okay, especially if you don't have experience dealing with the tasks mentioned previously; the rest of the board and the staff are aware of that, and it's expected that you will help where you can. We are volunteers after all, and sometimes an opinion or a question is the most helpful thing we can offer.

There are only a few things that are required: being open to learning, a bit of proactivity, attending the 1-hour weekly meetings, attending GUADEC, and lastly, some time and willingness to take on some work (1-2 hours per week; some weeks much more, some weeks none at all).

Some areas may not be interesting to you, but others will be. Once you are on the board, take the initiative to volunteer for the ones you find interesting, and what you do will already be useful.

Why should you apply?

You have some motivation to be on the board, but what's really in it for you? What do you get out of it? For me, there are two big benefits.

One is leading the community and having an impact on the direction of the project. If you are reading this, you most probably care about the project and the community behind it. This is usually the biggest motivation, and having a way to impact the project with your ideas is certainly a good feeling. Being on the board will give you the insight necessary to move forward with your high-impact ideas, open doors to external entities that will help you drive those initiatives much more easily, and give you skills that make them more likely to succeed.

The second one is personal and career development. Personally speaking, I gained knowledge about handling changes that impact (most probably) over 100 people, negotiating deals with companies, handling legal issues and concerns, managing personal conflicts, hiring processes and salaries, and, in general, management and leadership.

In terms of soft skills, I can say that some opportunities on the board are around the level of a Principal or Senior Principal Engineer, with the advantage that you only take on the things you feel comfortable with, while still being able to step out of your comfort zone with the support of the rest of the board and the staff.

Is there any downside?

I hope what I have said so far has motivated you enough to apply; as you can see, there are quite a few advantages, both for you personally and for the community, in going ahead and applying.

So is there any downside? Honestly, I think there is only one: fear of rejection.

I applied to the board for the first time about 4 years ago. I got 5 first-position votes out of 100. Well… honestly, this had an impact on me. I thought the community simply didn't want me, and that was all there was to it. I decided not to apply again.

Despite that, here I am: last year I got quite a few of those first-position votes myself.

The key is to not take it personally; it can happen for multiple reasons that are not specific to you as a person.

Maybe you proposed an initiative in your candidacy email and it was too early to make it happen; maybe you were simply a visionary. Or maybe members didn't know you well enough, or you lacked some experience in coordinating initiatives.

The good thing is, these can be improved quite easily. You can take the lead on small tasks that involve several contributors; you will gain the experience of leading those initiatives, and members will get to know you. Maybe simply go over the 3.34 items and ask about their status and who can help move them forward? Or improve the Meson build across different modules? Maybe help with an existing initiative such as newcomers or Flatpak? How about coordinating an announcement with the engagement team, or coordinating the release notes?

Owning tasks that span different projects is a good way to get elected the next time.

Apply today

I think I speak for the whole board when I say that we are glad to answer any questions about what working on the board entails, so feel free to informally contact any of us on IRC or by email, and we will answer as our individual availability permits.

Now it's your turn: apply, apply and apply.

Thanks to Nuritzi and Rob for going through this document.