February 28, 2021

Cambalache…

Excursions: Driving on the wrong side

I think it's about time for the next installment in the Excursions series.

One area I have been interested in since a long time back is transportation-related infrastructure like roads and rail. And a fact that comes up quite naturally along with that is the “sidedness” of traffic in different countries. Today a majority of countries make use of right-hand traffic, but this has changed over the course of time. It is sometimes said that the prevalence of right-hand traffic in Europe (and the Western world in general) is related to Napoleon and him wanting to keep England “at arm's length”. This seems to be quite disputed though…

But rather than going through various countries' handedness here, I thought we should look at something more interesting and quirky. Because as it turns out, it's not always the case that the standard is entirely the same within a single country.

For the first example we could take the United States. Driving on the right, right?

Well, mostly, but there's an exception as it turns out:

The territory of the US Virgin Islands actually has left-hand traffic.

Wikipedia 

https://openstreetmap.org/relation/286898

Here we can see a map view of the capital city Charlotte Amalie, and in the following view we can clearly see LHT on a dual carriageway street:


And talking about the United Kingdom (LHT, and remember the Napoleon remarks…): despite being a well-known LHT example, especially in a European context, there's a part of the UK practicing RHT, namely Gibraltar:

Since it has always shared a land border with Spain, this was more practical, it seems…

Another quirky feature in Gibraltar is the level crossing where the main road connecting with Spain, Winston Churchill Avenue (Wikipedia), crosses the airport runway.

It has crossing gates and lights, pretty much like at a railroad crossing… except when it's ringing, a Boeing 737 might zip by…


Next we go east to Hong Kong. Under British influence Hong Kong practiced left-hand traffic, and this has been kept even after 1997. Thus at the borders with mainland China, contraptions like these can be seen to facilitate switching sides:


And not far from there we have a similar situation in Macao. Macao was a Portuguese colony from the 16th century until 1999. And while Portugal itself switched from LHT to RHT in 1928, in Macao LHT continued, and it has been retained. A similar “traffic carousel” as in the Hong Kong case can be seen on the mainland side of the Lótus Bridge (Wikipedia).


A more quirky historical case is a road near the lake Björkvattnet (geo:64.608587,13.700729;crs=wgs84;u=0) in Jämtland in Sweden. While Sweden was LHT until 1967, neighboring Norway had been practicing RHT for a long time before. So when a new road was built between Kvelia and Tunnsjø (in Norway), the shortest and most convenient option was to build the road for a stretch through Sweden. But imposing left-hand traffic on this single road, which at the time was not connected to other roads in Sweden, seemed inconvenient. So this road (and the residents living there) just practiced RHT. And the only connections were through Norway anyway (the Z822 road leading to Gäddede and connecting to the rest of the Swedish road network was not built until some time after 1967; I think I read some discussions at some forum about this and when the roads were built, but I can't find those again…).


And speaking of quirks, we also have the Ponte Umberto I bridge in Rome https://openstreetmap.org/relation/5679034

Wikipedia 

The bridge itself is actually LHT (fortunately, maybe, there's a median fence).

And while speaking of countries switching sides, it seems historically it has been most common to switch from LHT to RHT. But there are cases of the opposite as well. After WWII Okinawa was occupied by the United States and RHT was practiced there, but in 1978 (six years after control of the islands was returned to Japan) they switched back to LHT.

https://openstreetmap.org/relation/4556086

Arguably this could be seen more as a case of reverting to how it was before and getting back to the standard in the rest of the country. But still a bit interesting, I think.

And there are actually plans for a country as a whole to switch from RHT to LHT: Rwanda. The motivation is that neighboring countries are predominantly LHT, so besides the benefits of being similar to its neighbors, prices of used vehicles with the steering wheel on the right side are generally lower.

 

Here's a map view from the capital Kigali:

https://openstreetmap.org/node/60485579

 

And by the way, if you use GNOME Maps, you can paste the openstreetmap.org URLs and geo: URIs directly into the search bar to go to these places.

 

So, let's see what we should explore next time!

February 27, 2021

A new data format has landed in the upcoming GTG 0.5

Here’s a general call for testing from your favorite pythonic native Linux desktop personal productivity app, GTG.

In recent months, Diego tackled the epic task of redesigning the XML file format from a new specification devised with the help of Brent Saner (proposal episodes 1, 2 and 3), and then implementing the new file format in GTG. This work has now been merged to the main development branch on GTG’s git repository:

Diego’s changes are major, invasive technological changes, and they would benefit from extensive testing by everybody with “real data” before 0.5 happens (very soon). I’ve done some pretty extensive testing & bug reporting in the last few months; Diego fixed all the issues I’ve reported so far, so I’ve pretty much run out of serious bugs now, as only a few remain targeted to the 0.5 milestone… But I’m only human, and it is possible that issues might remain, even after my troll-testing.

Grab GTG’s git version ASAP, with a copy of your real data (for extra caution, and also because we want you to test with real data); see the instructions in the README, including the “Where is my user data and config stored?” section.

Please torture-test it to make sure everything is working properly, and report issues you may find (if any). Look for anything that might seem broken “compared to 0.4”, incorrect task parenting/associations, incorrect tagging, broken content, etc.

If you’ve tried to break it and still couldn’t find any problems, maybe one way to indicate that would be a “👍” on the merge request—I’m not sure we really have another way to know if it turns out that “everything is OK” 🙂

Your help in testing this (or spreading the word) will help ensure a smooth transition for users getting an upgrade from 0.4 to 0.5, letting us release 0.5 with confidence. Thanks!

February 26, 2021

2021-02-26 Friday

  • Short TDF board call; partner / sales call, admin. Out for a run with J.
  • Finally got around to posting my FOSDEM slides, first an update for the Collaboration dev-room on integrating (with video)
  • And also for the work we did to integrate COOL into an app-image for easy installation as part of Nextcloud Hub (with video)

February 25, 2021

2021-02-25 Thursday

  • TDF budget ranking, calls, day of admin. COOL community roundup. Some perf / profiling of COOL - nice N^3 algorithm in writer affecting huge docs found.
  • Idle Freeze Pane fix, product management pieces, late partner call.

Friends of GNOME Update – February 2021

Welcome to the February Friends of GNOME Update!

A photo of snow and ice crystals clinging to plants
“Snow!” by neil-5110 is licensed under CC BY-SA 2.0

GNOME on the Road

Typically FOSDEM is a big deal for the GNOME Foundation. We have a booth, we give talks, we run hackfests, there is GNOME Beers, and we have lots and lots of meetings. This year FOSDEM was a little different.

While we didn’t give any talks or run a hackfest, we had a virtual stand. For us, the highlight of this was having scheduled hours in the chat, during which we talked with participants about different GNOME-related topics. It was great to meet people, and it’s always fun to talk about GNOME.

Our GNOME Beers event was also a lot of fun. Around 40 people joined Neil McGovern for a tour of three different Belgian beers. We learned more about beer than many of us expected to.

In March, Neil will be speaking at LibrePlanet 2021, the Free Software Foundation’s annual conference. LibrePlanet 2021 takes place online March 20-21.

Events Hosted By GNOME

We have four upcoming events we’d like to share with you.

GNOME Latino Event

With a goal to have a one day event to celebrate GNOME in Latin America, we’re supporting a GNOME event that will take place entirely in Spanish and Portuguese. This will take place on March 27th, and an event on events.gnome.org will be added soon.

Community Education Challenge Phase Three Winner Showcase

On April 7 at 17:00 UTC, the Community Education Challenge phase three winners will be showing off the work they’ve done on their projects — and you can join us. These projects have been working for months to build programs and tools to help people get involved in building FOSS and with the GNOME community. You can learn more about them online.

Linux App Summit

We co-organize the Linux App Summit with KDE. This year’s conference is taking place online, May 13 – 15. LAS is about building and sustaining a Linux application ecosystem. We believe that having many excellent apps is important to promote FOSS adoption, including GNOME.

The call for papers is open, so consider submitting a talk today! We’re looking for sessions on everything related to apps, including legal and licensing and community growth and care, in addition to more technical topics.

GUADEC

We have also announced GUADEC 2021! GUADEC will take place July 21 – 25, also online. GUADEC is the GNOME conference, covering everything GNOME and many general FOSS topics in talks, birds of a feather sessions, and workshops.

The call for abstracts is open. We’re looking for talks related to FOSS in general as well as GNOME specifically. Past talks I’ve personally enjoyed have been on growing the tech community in Kenya; the environmental impact of tech and what we can do about it; better communication with open, remote collaborative communities; how to have great meetings; and many GNOME specific topics.

While a formal announcement will be coming soon, we’re pretty excited about the GUADEC keynotes, Hong Phuc Dang and Shauna Gordon-McKeon.

Technical Developments

Since GTK 4.0 released, we’ve put out several bug fixes. We’ve been working with the community on GTK 4.2, which should be ready in time for the GNOME 40 release. We’re also working on revamping the documentation, including using a new tool to generate references from the introspection data also consumed by the various language bindings.

GNOME has been doing a lot of work on GNOME Shell for GNOME 40. This includes numerous UX updates. You can read about them on the GNOME Shell & Mutter blog. Topics include multi-monitor development, the user testing and research that went into the design changes, and general status updates.

Outreachy

We’re always excited for Outreachy, and this round is no different! We are currently looking for mentors (sign up by March 5). You can submit an idea online.

Outreachy provides paid internships in FOSS (and in this case in GNOME) for people who face systemic bias that historically has made it difficult for them to participate in FOSS and/or the technology industry.

We are planning on participating in Google Summer of Code, and will share more details as they arise. You can check out project ideas on GitLab.

Chat Evaluation

GNOME uses a number of different communication tools: IRC, Matrix, Rocketchat, and Telegram. Kristi Progri is in the process of leading a chat evaluation. This is to determine which communication channels people are using, and how and why they are using those channels. Preliminary research has been completed, and we’ll be working on surveying the community in March.

Thank you!

We try to highlight the most exciting things we’re working on in this Update, but we do a lot more, including infrastructure support, community work, and things like taxes. Your generosity helps us make sure we can get everything done. Thank you.

Sysprof and Podman

With the advent of immutable/re-provisionable/read-only operating systems like Fedora’s Silverblue, people will be doing a lot more computing inside of containers on their desktops (as if they’re not already).

When you want to profile an entire system with tools like perf this can be problematic because the files that are mapped into memory could be coming from strange places like FUSE. In particular, fuse-overlayfs.

There doesn’t seem to be a good way to decode all this indirection, which means that in Sysprof we’ve had broken ELF symbol decoding for your things running inside of podman containers (such as Fedora’s toolbox). For those of us who have to develop inside those containers, that can really be a drag.

The problem at the core is that Sysprof (and presumably other perf-based tooling) would think a file was mapped from somewhere like /usr/lib64/libglib-2.0.so according to the /proc/$pid/maps. Usually we translate that using /proc/$pid/mountinfo to the real mount or subvolume. But if fuse-overlayfs is in the picture, you don’t get any insight into that. When symbols are decoded, it looks at the host’s /usr/lib/libglib-2.0.so and finds an inode mismatch at which point it will stop trying to decode the instruction address.

But since we still have a limited number of container technologies to deal with today, we can just cheat. If we look at /proc/$pid/cgroup we can extract the libpod container identifier and use that to peek at ~/.local/share/containers/storage/overlay-containers/containers.json to get the overlayfs layer. With that, we can find the actual root for the container which might be something like ~/.local/share/containers/storage/overlay/$layer/diff.
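For illustration, here is a rough sketch of that lookup in Python (Sysprof itself does this in C; the JSON field names are an assumption about podman's per-user storage format):

```python
import json
import os
import re

def podman_root_for_pid(pid: int):
    # Pull the libpod container id out of the process's cgroup path.
    with open(f"/proc/{pid}/cgroup") as f:
        match = re.search(r"libpod-([0-9a-f]{64})", f.read())
    if match is None:
        return None  # not running inside a podman container

    # Find the overlay layer for that container in the per-user storage.
    storage = os.path.expanduser("~/.local/share/containers/storage")
    with open(os.path.join(storage, "overlay-containers", "containers.json")) as f:
        containers = json.load(f)
    layer = next((c["layer"] for c in containers if c["id"] == match.group(1)), None)
    if layer is None:
        return None

    # The container's root filesystem as visible from the host.
    return os.path.join(storage, "overlay", layer, "diff")
```

With a root like that in hand, a path such as /usr/lib64/libglib-2.0.so from /proc/$pid/maps can be resolved inside the container's own filesystem instead of the host's.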

It’s a nasty amount of indirection, and it’s brittle because it only works for the current user, but at least it means we can keep improving GNOME even if we have to do development in containers.

Obligatory screenshot of turtles. gtk4-demo running in jhbuild running in Fedora toolbox (podman) with a Fedora 34 image which uses fuse-overlayfs for file access within the container. Sysprof now can discover this and decode symbols appropriately alongside the rest of the system. Now if only we could get distributions to give up on omitting frame pointers everywhere just so their unjustifiable database benchmarks go up and to the right a pixel.

Librsvg, Rust, and non-mainstream architectures

Almost five years ago librsvg introduced Rust into its source code. Around the same time, Linux distributions started shipping the first versions of Firefox that also required Rust. I unashamedly wanted to ride that wave: distros would have to integrate a new language in their build infrastructure, or they would be left without Firefox. I was hoping that having a working Rust toolchain would make it easier for the rustified librsvg to get into distros.

Two years after that, someone from Debian complained that this made it hard or impossible to build librsvg (and all the software that depends on it, which is A Lot) on all the architectures that Debian builds on — specifically, on things like HP PA-RISC or Alpha, which even Debian marks as "discontinued" now.

Recently there was a similar kerfuffle, this time from someone from Gentoo, specifically about how Python's cryptography package now requires Rust. So, it doesn't build for platforms that Rust/LLVM don't support, like hppa, alpha, and Itanium. It also doesn't build for platforms for which there are no Rust packages from Gentoo yet (mips, s390x, riscv among them).

Memories of discontinued architectures

Let me reminisce about a couple of discontinued architectures. If I'm reading Wikipedia correctly, the DEC Alpha ceased to be developed in 2001, and HP, who purchased Compaq, who purchased DEC, stopped selling Alpha systems in 2007. Notably, Compaq phased out the Alpha in favor of the Itanium, which stopped being developed in 2017.

I used an Alpha machine in 1997-1998, back at the University. Miguel kindly let me program and learn from him at the Institute where he worked, and the computer lab there got an Alpha box to let the scientists run mathematical models on a machine with really fast floating-point. This was a time when people actually regularly ssh'ed into machines to run X11 applications remotely — in their case, I think it was Matlab and Mathematica. Good times.

The Alpha had fast floating point, much faster than Intel x86 CPUs, and I was delighted to do graphics work on it. That was the first 64-bit machine I used, and it let me learn how to fix code that only assumed 32 bits. It had a really picky floating-point unit. Whereas x86 would happily throw you a NaN if you used uninitialized memory as floats, the Alpha would properly fault and crash the program. I fixed so many bugs thanks to that!

I also have fond memories of the 32-bit SPARC boxes at the University and their flat-screen fixed-frequency CRT displays, but you know, I haven't seen one of those machines since 1998. Because I was doing graphics work, I used the single SPARC machine in the computer lab at the Institute that had 24-bit graphics, with a humongous 21" CRT display. PCs at the time still had 8-bit video cards and shitty little monitors.

At about the same time that the Institute got its Alpha, it also got one of the first 64-bit UltraSPARCs from Sun — a very expensive machine definitely not targeted to hobbyists. I think it had two CPUs! Multicore did not exist!

I think I saw a single Itanium machine in my life, probably around 2002-2005. The Ximian/Novell office in Mexico City got one, for QA purposes — an incredibly loud and unstable machine. I don't think we ever did any actual development on that box; it was a "can you reproduce this bug there" kind of thing. I think Ximian/Novell had a contract with HP to test the distro there, I don't remember.

Unsupported architectures at the LLVM level

Platforms like the Alpha and Itanium that Rust/LLVM don't support — those platforms are dead in the water. The compiler cannot target them, as Rust generates machine code via LLVM, and LLVM doesn't support them.

I don't know why distributions maintained by volunteers give themselves the responsibility to keep their software running on platforms that have not been manufactured for years, and that were never even hobbyist machines.

I read the other day, and now I regret not keeping the link, something like this: don't assume that your hobby computing entitles you to free labor on the part of compiler writers, software maintainers, and distro volunteers. (If someone helps me find the source, I'll happily link to it and quote it properly.)

Non-tier-1 platforms and "$distro does not build Rust there yet"

I think people are discovering these once again:

  • Writing and supporting a compiler for a certain architecture takes Real Work.

  • Supporting a distro for a certain architecture takes Real Work.

  • Fixing software to work on a certain architecture takes Real Work.

Rust divides its support for different platforms into tiers, going from tier 1, the most supported, to tier 3, the least supported. Or, I should say, taken care of, which is a combination of people who actually have the hardware in question, and whether the general CI and build tooling is prepared to deal with them as effectively as it does for tier 1 platforms.

In other words: there are more people capable of paying attention to, and testing things on, x86_64 PCs than there are for sparc-unknown-linux-gnu.

Some anecdotes from Suse

At Suse we actually support IBM's s390x big iron; those mainframes run Suse Linux Enterprise Server. You have to pay a lot of money to get a machine like that and support for it. It's a room-sized beast that requires professional babysitting.

When librsvg and Firefox started getting rustified, there was of course concern about getting Rust to work properly on the s390x. I worked sporadically with the people who made the distro work there, and who had to deal with building Rust and Firefox on it (librsvg was a non-issue after getting Rust and Firefox to work).

I think all the LLVM work for the s390x was done at IBM. There were probably a couple of miscompilations that affected Firefox; they got fixed.

One would expect bugs in software for IBM mainframes to be fixed by IBM or its contractors, not by volunteers maintaining a distro in their spare time.

Giving computing time on mainframes to volunteers in distros could seem like a good samaritan move, or a trap to extract free labor from unsuspecting people.

Endianness bugs

Firefox's problems on the s390x were more around big-endian bugs than anything. You see, all the common architectures these days (x86_64 and arm64) are little-endian. However, s390x is big-endian, which means that all multi-byte numbers in memory are stored backwards from what most software expects.

It is not a problem to write software that assumes little-endian or big-endian all the time, but it takes a little care to write software that works on either.
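To make the distinction concrete, here is a tiny illustration (in Python, purely for demonstration; librsvg itself is Rust and C): the same bytes decode to different values depending on the byte order you assume.

```python
# The same four bytes decode to different integers depending on byte order.
data = bytes([0x12, 0x34, 0x56, 0x78])

print(hex(int.from_bytes(data, "little")))  # 0x78563412 -- what x86_64/arm64 see
print(hex(int.from_bytes(data, "big")))     # 0x12345678 -- what s390x sees

# Code that reinterprets raw pixel buffers as multi-byte integers bakes in one
# of these orders; portable code decodes the bytes explicitly instead.
```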

Most of the software that volunteers and paid people write assumes little-endian CPUs, because that is likely what they are targeting. It is a pain in the ass to encounter code that works incorrectly on big-endian — a pain because knowing where to look for evidence of bugs is tricky, and fixing existing code to work with either endianness can be either very simple, or a major adventure in refactoring and testing.

Two cases in point:

Firefox. When Suse started dealing with Rust and Firefox in the s390x, there were endianness bugs in the graphics code in Firefox that deals with pixel formats. Whether pixels get stored in memory as ARGB/ABGR/RGBA/etc. is a platform-specific thing, and is generally a combination of the graphics hardware for that platform, plus the actual CPU architecture. At that time, it looked like the C++ code in Firefox that deals with pixels had been rewritten/refactored, and had lost big-endian support along the way. I don't know the current status (not a single big-endian CPU in my vicinity), but I haven't seen related bugs come in the Suse bug tracker? Maybe it's fixed now?

Librsvg had two root causes of bugs for big-endian. One was in the old code for SVG filter effects that was written in C; it never supported big-endian. The initial port to Rust inherited the same bug (think of a line-by-line port, although it wasn't exactly like that), but it got fixed when my Summer of Code intern Ivan Molodetskikh refactored the code to have a Pixel abstraction that works for little-endian and big-endian, and wraps Cairo's funky requirements.

The other endian-related bug in librsvg was when computing masks. Again, a little refactoring with that Pixel abstraction fixed it.

I knew that the original C code for SVG filter effects didn't work on big-endian. But even back then, at Suse we never got reports of it producing incorrect results on the s390x... maybe people don't use their mainframes to run rsvg-convert? I was hoping that the port to Rust of that code would automatically fix that bug, and it kind of happened that way through Ivan's careful work.

And the code for masks? There were two bugs reported with that same root cause: one from Debian as a failure in librsvg's test suite (yay, it caught that bug!), and one from someone running an Apple PowerBook G4 with a MATE desktop and seeing incorrectly-rendered SVG icons.

And you know what? I am delighted to see people trying to keep those lovely machines alive. A laptop that doesn't get warm enough to burn your thighs, what a concept. A perfectly serviceable 32-bit laptop with a maximum of about 1 GB of RAM and a 40 GB hard drive (it didn't have HDMI!)... But you know, it's the same kind of delight I feel when people talk about doing film photography on a Rollei 35. A lot of nostalgia for hardware of days past, and a lot of mixed feelings about not throwing out working things and creating more trash.

As a graphics programmer I feel the responsibility to write code that works on little-endian and big-endian, but you know, it's not exactly an everyday concern anymore. The last big-endian machines I used on an everyday basis were the SPARCs in the university, more than 20 years ago.

Who gets paid to fix this?

That's the question. Suse got paid to support Firefox on the s390x; I suppose IBM has an interest in fixing LLVM there; both actually have people and hardware and money to that effect.

Within Suse, I am by default responsible for keeping librsvg working for the s390x as well — it gets built as part of the distro, after all. I have never gotten an endianness bug report from the Suse side of things.

Which leads me to suspect that, probably similar to Debian and Gentoo, we build a lot of software because it's in the build chain, but we don't run it to its fullest extent. Do people run GNOME desktops on s390x virtual machines? Maybe? Did they not notice endianness bugs because they were not in the code path that most GNOME icons actually use? Who knows!

I'm thankful to Simon from the Debian bug for pointing out the failure in librsvg's test case for masks, and to Mingcong for actually showing a screenshot of a MATE desktop running on a PPC PowerBook. Those were useful things for them to do.

Also — they were kind about it. It was a pleasure to interact with them.

February 24, 2021

GTK 4 NGL Renderer

I spent a lot of time in 2020 working on projects tangential to what I’d consider my “main” projects. GtkSourceView got a port to GTK 4 and a load of new features, GTK 4 got a new macOS backend, and in December I started putting together a revamp of GTK 4’s GL renderer.

The nice thing about having multiple renderer backends in GTK 4 is that we still have Cairo rendering as an option. So while doing bring-up of the new GTK macOS backend I could just use that. Making software rendering fast enough to not be annoying is a good first step because it forces you to shake out performance issues pretty early.

But once that is working, the next step is to address how well the other backends can work there. We had two other backends. OpenGL (requiring 3.2 Core and up) and Vulkan. Right now, the OpenGL renderer is the best supported renderer for acceleration in terms of low bug count, so that seemed like the right way to go if you want to stay inline with Linux and Windows backends. Especially after you actually try to use MoltenVK on macOS and realize it’s a giant maze. The more work we can share across platforms (even if temporarily) the better we can make our Linux experience. Personally, that is something I care about.

From what I’ve seen, it looks like OpenGL on the M1 was built on top of Metal, so it seems fine to have chosen that route for now. People seem to think that OpenGL is going to magically go away just because Apple says they’ll remove it. First off, if they did, we’d just fallback to another renderer. Second, it’s likely that Zink will be a viable (and well funded) alternative soon. Third, they just released a brand new hardware architecture and it still works. That was the best point in time to drop it if there ever was one.

The NGL renderer makes full snapshots of uniforms and attachments while processing render nodes so that we can reorder batches going forward. Currently, we only reorder by render target, but that alone is a useful thing. We can start to do a lot more in the future as we have time. That might include tiling, executing batches on threads, and reordering batches within render targets based on programs so long as vertices do not overlap.

But anyway, my real motivation for cleaning up the GL renderer was so that someone who is interested in Metal can use it as a template for writing a renderer. Maybe that’s you?

Major shout-out to everyone that worked on the previous GL renderer in GTK. I learned so much from it and it’s really quite amazing to see GTK 4 ship with such interesting designs.

Millennium prize problems but for Linux

There is a longstanding tradition in mathematics to create a list of hard unsolved problems to drive people to work on solving them. Examples include Hilbert's problems and the Millennium Prize problems. Wouldn't it be nice if we had the same for Linux? A bunch of hard problems with sexy names that would drive development forward? Sadly there is no easy source for tens of millions of euros in prize money, not to mention it would be very hard to distribute as this work would, by necessity, be spread over a large group of people.

Thus it seems unlikely for this to work in practice, but that does not prevent us from stealing a different trick from the mathematicians' toolbox and pondering how it would work in theory. In this case the list of problems will probably never exist, but let's assume that it does. What would it contain if it did exist? Here's one example I came up with. It is left as an exercise to the reader to work out what prompted me to write this post.

The Memory Depletion Smoothness Property

When running the following sequence of steps:
  1. Check out the full source code for LLVM + Clang
  2. Configure it to compile Clang and Clang-tools-extra, use the Ninja backend and RelWithDebInfo build type, leave all other settings to their default values
  3. Start watching a video file with VLC or a browser
  4. Start compilation by running nice -19 ninja
The outcome must be that the video playback works without skipping a single frame or audio sample.

What happens currently?

When Clang starts linking, each linker process takes up to 10 gigabytes of RAM. This leads to memory exhaustion, flushing active memory to swap and eventually crashing the linker processes. Before that happens, however, every other app freezes completely and the entire desktop remains frozen until things get swapped back into memory, which can take tens of seconds. Interestingly all browser tabs are killed before the linker processes start failing. This happens both with Firefox and Chromium.

What should happen instead?

The system handles the problematic case in a better way. The linker processes will still die as there is not enough memory to run them all but the UI should never noticeably freeze. For extra points the same should happen even if you run Ninja without nice.

The wrong solution

A knee-jerk reaction many people have is something along the lines of "you can solve this by limiting the number of linker processes by doing X". That is not the answer. It solves the symptoms but not the underlying cause, which is that bad input causes the scheduler to do the wrong thing. There are many other ways of triggering the same issue, for example by copying large files around. A proper solution would fix all of those in one go.

TPM2 Key Trust: where did Keylime go wrong

In my previous blog post, I explained how a verifier can get a signing key that it trusts is on a TPM for attestation (part 2 of that post is in the making).

I have been contributing to a specific implementation of remote attestation for Linux, called Keylime.

As part of the effort on porting the agent to Rust, I was looking into how the process works, and as part of that I identified a vulnerability in how Keylime deals with the TPM2 that breaks the Chain of Trust in two different places.

For the quick rundown, see the advisory. For details, please read on!

Keylime Agent Registration Protocol

When a new keylime agent is started, it will register itself to the Registrar. This happens in two distinct steps, each being an individual HTTP request: During the registration step, the agent sends its Endorsement Key (EK) and corresponding certificate, and an Attestation (Identity) Key (AK/AIK, the signing key from the previous article). The response to this is the encrypted challenge as described in the previous article. During the activation step, the agent decrypts the challenge, computes an HMAC of its agent UUID with the challenge as key, and submits that to the Registrar.
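As a rough sketch of that last activation step (the digest choice and naming here are illustrative, not necessarily what Keylime actually uses):

```python
import hashlib
import hmac

def activation_response(agent_uuid: str, challenge: bytes) -> str:
    # The decrypted challenge acts as the HMAC key, the agent UUID is the message.
    # Producing a valid HMAC proves the agent's TPM could decrypt the challenge.
    return hmac.new(challenge, agent_uuid.encode(), hashlib.sha384).hexdigest()
```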

One thing to point out is that for this entire process, we should consider everything the agent sends to be untrusted until verified: verifying that the agent is running correct code is the whole point of Keylime!

Chain of Trust break 1: Endorsement Key(s?!)

The first thing I noticed when looking at the agents’ registrar client was that it is passing three different things that all have a name prefixed with “ek”: “ek”, “ekcert” and “ek_tpm”. So my first challenge was to determine what all three were doing there.

It turns out that “ek” was the public key of the Endorsement Key, in SubjectPublicKeyInfo PEM format, the “ekcert” was the Endorsement Key Certificate (in DER format), and “ek_tpm” was the Endorsement Key again, but in the TPM proprietary format.

This means that technically, the same public key is sent in three different encodings: once as a raw ASN.1 SubjectPublicKeyInfo structure (read: “public key”), once as part of the EK Certificate, and once in the proprietary format.

This immediately sets off alarm bells: which of the versions would be used for verifying the chain of trust?

After some digging, it turns out that the “ekcert” is used to determine whether the TPM is indeed from a trusted vendor. The “ek” value (also known as “ekpem” or “pubek”) is verified to contain the same public key as “ekcert” if a cert was provided. (There is an option to make Keylime not use the certificate to establish trust, but instead have a script to validate the raw endorsement public key in case you want to whitelist EK’s for example.)

This leaves the “ek_tpm” version: that is used during the actual Credential Protection protocol to encrypt the challenge against. This is done because the tpm2_makecredential tool that it is using for this purpose requires the TPM proprietary format for the endorsement key.

However, and this is the critical part: never in the code is there any check performed that the public key in “ek_tpm” is the same public key that’s in “ek” or “ekcert”! This means that it would be possible for an attacker to generate a new key in system memory (unprotected), and send that as “ek_tpm” (in the TPM format) together with a valid (but unrelated) “ek” and “ekcert” from any random TPM, and the registrar would just assume that the verified key is correct.

The fix

As a fix for this issue, we changed the code on the client to only send the ekcert if it has one, and otherwise send the ek in TPM format, but nothing else. The registrar will ignore any “ek” (PEM format) provided by the agent, and if the agent provided an “ekcert”, it will ignore any provided “ek_tpm” value. If only “ekcert” was provided, the registrar will build the “ek_tpm” value itself. This means that when the certificate is validated later, the “ek_tpm” is indirectly validated (because it is produced by trusted code), and now the trusted “ek_tpm” value is passed into the Credential Protection protocol.
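A minimal sketch of the registrar-side selection logic after the fix (the helper functions are hypothetical, for illustration only; this is not Keylime's actual code):

```python
def select_trusted_ek(ekcert=None, ek_tpm=None):
    if ekcert is not None:
        # The certificate chains up to a trusted TPM vendor CA, so the public
        # key inside it is the only EK we trust; build the TPM-format blob
        # ourselves instead of using anything the agent sent.
        public_key = public_key_from_cert(ekcert)   # hypothetical helper
        return tpm2b_public_from_key(public_key)    # hypothetical helper
    if ek_tpm is not None:
        # No certificate: fall back to the raw EK, to be checked by a
        # site-specific whitelisting script as mentioned earlier.
        return ek_tpm
    raise ValueError("agent provided neither an ekcert nor an ek_tpm")
```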

Chain of Trust break 2: Attestation Key: who are you?

The second part that I noticed in the same registrar client was that it sends both a “pub_aik” and an “aik_name”. The “pub_aik” is a PEM encoded version of the SubjectPublicKeyInfo of the Attestation (Identity) Key, and the “aik_name” is the hexadecimal encoded TPM Name of the Attestation (Identity) Key.

This again is a case where the aik_name was sent additionally because that’s what’s required for tpm2_makecredential, and the “pub_aik” was needed to verify quotes from the agent as part of the attestation protocol.

However, there is no check done between these two representations, and neither are any checks performed regarding the attributes of the Attestation Key.

This would mean that an attacker would be able to generate a random key in system memory, and send that to the registrar as the “pub_aik”, but providing the “aik_name” of an object that is on an actual, trusted TPM. (Or, in combination with the previous part, any byte stream, since there’s no validation that the Endorsement Key used for the activation step is on a TPM.)

There additionally were no checks on the object attributes of the Attestation Key, which means that even if an attacker sends the name of a key that is on a valid TPM, it might be a key where the key material was provided from outside the TPM. So it would be possible to have an AIK that happens to be loaded into a TPM at this moment, but whose private key is available outside of the TPM, to the main operating system.

The fix

As a fix for this issue, the agent now no longer sends the “pub_aik”, or the “aik_name”. Instead, it only sends the TPM representation (“aik_tpm”) of the Attestation Key.

With this, the Registrar will first verify that the object attributes are as expected for an Attestation Key (non-exportable (FIXED_TPM & FIXED_PARENT), key generated by the TPM (SENSITIVE_DATA_ORIGIN), etc).
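For illustration, a check along these lines could look as follows (a sketch, not Keylime's actual implementation; the bit positions come from the TPM 2.0 TPMA_OBJECT definition, and the extra flags beyond the ones named above are a reasonable assumption for an AK):

```python
FIXED_TPM             = 1 << 1   # key cannot be duplicated to another TPM
FIXED_PARENT          = 1 << 4   # key cannot be re-parented
SENSITIVE_DATA_ORIGIN = 1 << 5   # private part was generated inside the TPM
RESTRICTED            = 1 << 16  # key may only sign TPM-generated structures
SIGN_ENCRYPT          = 1 << 18  # key can be used for signing

REQUIRED_AK_ATTRIBUTES = (FIXED_TPM | FIXED_PARENT | SENSITIVE_DATA_ORIGIN
                          | RESTRICTED | SIGN_ENCRYPT)

def ak_attributes_ok(object_attributes: int) -> bool:
    # Every required flag must be set for the key to count as a usable AK.
    return (object_attributes & REQUIRED_AK_ATTRIBUTES) == REQUIRED_AK_ATTRIBUTES
```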

After that, the registrar will compute the Attestation Key’s Name for the Credential Protection protocol, and use this instead of a client-provided value.

After this, the aik_tpm is stored, and any time that a quote by the TPM is checked, this will be done against a PEM public key synthesized from the aik_tpm value.

This means that we verify the object attributes of the Attestation Key, and that we have cryptographically verified that the key we store as valid is indeed the one proven to be on a TPM as proven by the Credential Protection protocol.

CVE ID

These issues have been assigned CVE-2021-3406.

TPM2 Key Attestation

Part 1 of a 2-part series on TPM attestation

Background

These days, the Trusted Platform Module (TPM) is a pretty ubiquitous piece of hardware. This is thanks in part to Microsoft requiring it since 2016 for Windows 10 (https://docs.microsoft.com/en-us/windows-hardware/design/minimum/minimum-hardware-requirements-overview#37-trusted-platform-module-tpm).

The TPM enables very interesting security features, like decryption/signing of data, key exchange protocols, and more, without handling the private key in software.

One of the other big things a TPM can be used for is attesting a server to a remote server. This allows a remote server to verify that the software that booted your computer, or is running on it, is known (unmodified) software. This can be used, for example, by companies to check their servers: whether they have booted with the correct BIOS and configuration, and whether the system files are unmodified, before providing them with access to certain things like TLS/HTTPS keys.

This attestation procedure is what this blog post series is about.

In this post, “attester” is the system that is trying to prove to the “verifier” that it is running valid code.

This blog post talks about some of the core features in a TPM, and how a verifier can use those features to verify that a public key is stored on the TPM (and only a TPM) that is trusted.

TPM Feature: key storage and operations

The TPM is able to hold cryptographic keys and use those keys for various operations. So it can do things like signing data with an asymmetric RSA or ECC key.

These keys can either be imported or generated by the TPM itself, and they have various attributes that the TPM stores and returns about their properties, like whether the key was generated by the TPM, and whether the key is exportable (as in, you can get the private part for this key out of the TPM).

Keys (and other objects in the TPM) also have a “Name”. For keys, this Name is a digest over the TPM representation of the keys, which includes their attributes. This means that two keys with the exact same public numbers (i.e. RSA modulus and exponent) may still have a different name, if one is for example non-exportable and one is exportable.
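As a rough sketch (assuming SHA-256 as the key's name algorithm), the Name is just the 16-bit hash algorithm identifier followed by a digest of the key's marshalled public area:

```python
import hashlib
import struct

TPM2_ALG_SHA256 = 0x000B  # algorithm identifier from the TPM 2.0 spec

def object_name(public_area: bytes) -> bytes:
    # Name = nameAlg identifier (big-endian 16-bit) || hash of the public area.
    # The public area includes the key's attributes, which is why two keys with
    # identical public numbers but different attributes get different Names.
    return struct.pack(">H", TPM2_ALG_SHA256) + hashlib.sha256(public_area).digest()
```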

Endorsement key

One specific private key that the TPM has[1] is the Endorsement Key, which is a key for which it also has a corresponding x.509 certificate issued and signed by the TPM vendor. This certificate can be used by the verifier to ensure a key is in a TPM by a vendor it trusts.

However, this key is restricted by the TPM in a way where it can only be used for decryption. This means that it is impossible to make signatures with this key, so we can’t use it for signing data coming out of the TPM, as we would in an attestation flow (post on that upcoming) or other use cases.

So, how do we now get a key that the verifier can trust is stored in a trusted TPM in a non-exportable way that we can use for signing things?

Credential Protection protocol

Here comes in: the Credential Protection protocol that is part of the TPM 2 specification!

As part of this protocol, the attester sends two things: the Endorsement Key public key (and maybe its certificate), and the public key and object attributes of the signing key.

The verifier checks that the object attributes for the signing key are acceptable for its purposes (i.e. non-exportable (FIXED_TPM & FIXED_PARENT), key generated by the TPM (SENSITIVE_DATA_ORIGIN), etc). It can then compute the Name of the signing key by hashing the TPM structure representing the signing key.

Then the verifier generates a challenge (a random byte string). It then computes a symmetric encryption key from the Name of the signing key and a seed (a random number), encrypts the challenge with that symmetric key, encrypts the seed to the Endorsement Key’s public key [2], and sends those to the attester.
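Condensed into a sketch of the verifier side (kdf, encrypt and encrypt_to_ek are hypothetical stand-ins for the KDF, symmetric-encryption and EK-encryption steps defined by the TPM 2.0 Credential Protection spec; this is a conceptual outline, not a conforming implementation):

```python
import os

def make_credential(ek_public, ak_name: bytes):
    challenge = os.urandom(32)              # the secret the attester must echo back
    seed = os.urandom(32)
    sym_key = kdf(seed, ak_name)            # binds the challenge to the AK's Name
    credential_blob = encrypt(sym_key, challenge)
    encrypted_seed = encrypt_to_ek(ek_public, seed)
    # credential_blob and encrypted_seed go to the attester; challenge stays here
    # so the verifier can compare it against the attester's response.
    return challenge, credential_blob, encrypted_seed
```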

Now, the attester sends the encrypted Credential structure and the encrypted key to the TPM, providing the handle (reference) to the signing key object in the TPM. The TPM decrypts the seed, and computes the Name of the provided signing key. If the Name matches the value the verifier used, it can compute the same symmetric encryption key that the verifier used, and is then able to decrypt the challenge from the credential.

The attester can then return the challenge back to the verifier as proof that the TPM did indeed have an object with that Name loaded into it, proving that the signing key is indeed on a TPM with the expected attributes.

This terminates the protocol, and at this moment, the verifier is now sure that the public key it got earlier is indeed a valid signing key that is created by a TPM and can not be exported out of the TPM.

Note that because of how this is oriented, after the protocol has concluded, the verifier can’t verify anything about the signing key anymore, so it will have to trust that it previously validated the key correctly before storing it.

But assuming that’s in place, we now have a public key that we can possibly trust, and use for remote attestation (look out for part 2 of this series for information on that).

Footnotes

[1]: Technically keys are (usually) not stored but reproduced every time, but I’m trying to not go into the full depth to keep the blog post somewhat readable.

[2]: It generates a new seed, and then derives the symmetric key and HMAC keys, but again, too technical.

Making Releases

A few days ago, I posted to desktop-devel-list asking how we can ensure releases happen, especially beta releases for the freeze. I was frustrated and my language was too abrasive, and I’m sorry for that. My intention was really to open a discussion on how we can improve our release process. Emmanuele replied with a thorough analysis of which bits are hard to automate, which I enjoyed reading.

Earlier today, I tweeted asking developers of other open source projects how they make releases, just to get a sense of what the rest of the world does. There have been a lot of responses, and it will take me a while to digest it all.

In the meantime, I wanted to share my process for rolling releases. I maintain five core GNOME modules, plus a handful of things in the wider open source world. My release process hasn’t fundamentally changed in the 18 years I’ve been a maintainer. A lot of other stuff has changed (merge requests, CI, freeze break approvals, etc), so I’m just trying to think of how any of this could be better. Anyway, here’s my process:

  1. First, I run git status in my development checkout to do a sanity check for files I forgot to add to the repo. In at least one project, I have auto-generated docs files that I keep in git, because various tools rely on them being there.
  2. Next, I always want to make sure I’m making releases from a clean checkout. So I will git clone a fresh checkout. Or sometimes, I already have a checkout I use just for releases, so I will git pull there.
  3. Next, I actually roll a tarball before doing anything else, which I will promptly throw away. I’ve had a few times where I discovered a dist breakage after doing everything else, and I’ve had to start over.
  4. Now it’s time to write a NEWS entry. I run git log --stat PREVTAG.. > changes, where PREVTAG is the tag name of the previous release. I edit changes to turn it into a NEWS entry, then I copy it to the top of the NEWS file.
  5. I then bump the version number in either configure.ac or meson.build. I know a lot of people do pre-release version bumps. I don’t have a strong opinion on this, so I’ve never changed my habits.
  6. Now it’s time to roll the tarball that I don’t throw away. The commands I run depend on the build system, of course. What matters is that I run these commands myself and have a tarball at the end.
  7. Before I actually release that tarball, I run git commit and git push. If there have been any commits since I started, I either have to rebase and update the NEWS file, or do some branching. This is fortunately quite rare.
  8. Also before releasing the tarballs, I tag the release with git tag -s and push the tag. Importantly, I only do this after pushing the commits, because otherwise if other commits have happened I have to do tag surgery. Nobody likes that.
  9. Finally, I scp the tarball to a GNOME server, ssh into that server, and run a release script that the release team maintains.

The two things that take the most time are rolling a tarball (which I do at least twice), and creating the NEWS entry. Rolling tarballs is something that can happen in the background, so I usually have multiple terminal tabs, and I work on other releases while one release is building. So that part isn’t too bad, but sometimes I am just waiting on a build to finish with nothing else to do. I know some people auto-generate NEWS entries, or don’t write them at all, but I find hand-edited entries extremely valuable. (I do read them when writing help for apps, when I actually find time to do that, so a big thanks to app maintainers who write them.)

I’m tossing around in my head what a more GitLab-focused workflow would look like. I could imagine a workflow where I click a “Release” button, and I get prompted for a version number and a NEWS entry. I’m not even sure I care if the “news” is in a file in the tarball, as long as I know where to find it. (Distro packagers might feel differently. I don’t know what their processes look like.) I would still need to look at a commit log to write the news. And I guess I’d probably still want to do my own git status sanity check before going to GitLab, but I could probably catch a lot of that with either CI or local commit checks. Ensuring there are no dist breakages should almost certainly be done on every commit with CI.

I suppose another thing to consider is just maintainers remembering it’s time to make releases. I didn’t miss this one because I was eagerly awaiting playing with it and updating the help, but I’ve missed lots of releases in the past. Could we all get automatic issues added to our todo lists a few days in advance of expected releases? Would that be annoying? I don’t know.

February 23, 2021

GNOME Shell 40 and multi-monitor

Multi-monitor has come up a fair bit in conversations about the GNOME Shell UX updates that are coming in GNOME 40. There’s been some uncertainty and anxiety in this area, so we wanted to provide more detail on what the multi-monitor experience will exactly be like, so people know what to expect. We also wanted to provide background on the decisions that have been made.

Newsflash

Before we get into multi-monitor, a short status update! As you would expect for this stage in the development cycle, the main bulk of the UI changes are now in the GNOME Shell master branch. This was the result of a really hard push by Georges and Florian, so huge thanks to them! Anyone who is interested in following this work should ensure that they are running the master branch, and not the now redundant development branch.

There are still a few relatively minor UI changes that we are hoping to land this cycle, but overall the emphasis is now on stabilisation and bug fixing, so if you are testing and have spotted any issues, now’s the time to report them.

Multi-monitor

OK, back to multi-monitor!

In many key respects, multi-monitor handling in GNOME 40 will be identical to how it is in 3.38. GNOME 40 still defaults to workspaces only on the primary display, as we have since 3.0. The top bar and overview will only be shown on the primary display, and the number of workspaces will still be dynamic. In many respects, GNOME 40 should feel very similar to previous GNOME versions, therefore.

That still leaves a lot of unanswered questions, of course, so let’s run through the GNOME 40 multi-monitor experience in more detail. Much of this concerns how workspaces will work in combination with multi-monitor setups.

Default configuration

As mentioned already, GNOME 40 will continue to default to only showing workspaces on the primary display. With a dual display setup, the overview will therefore look like this by default:

One detail to notice is how we’re scaling down the background on the secondary display, to communicate that it’s a single workspace like those on the primary display. We feel that this presentation helps to make the logic of the multiple displays clearer, and helps to unify the different screens.

To get an idea of what this will look like in use, Jakub Steiner has kindly created some motion mockups. These are intended to communicate how each part fits together and what the transitions will be like, rather than being a 100% accurate rendering of the final product (in particular, the transitions have been slowed down).

Here you can get an idea of what it will look like opening the overview and moving between workspaces. Just like the current default configuration, workspace switching only happens on the primary display.

Workspaces on all displays

While workspaces only being on the primary display is the default behaviour, GNOME also supports having workspaces on all displays, using the workspaces-only-on-primary settings key. The following static mockup shows what the overview will look like in GNOME 40 with this configuration.

As you can see, this is very similar to the workspaces only on primary configuration. The main difference is that you can see the additional workspaces extending to the right on the secondary display. It’s also possible to see that the workspace navigator (the small set of thumbnails at the top) is visible on both displays. The introduction of the workspace navigator on secondary displays is a new change for GNOME 40, which is intended to improve the experience for users who opt to have workspaces on all displays. We know from our user research that this is something that many users will welcome.

Like in other GNOME versions, when workspaces are on all displays, they are switched in unison. For example, all displays show workspace 2 at the same time. This can be seen in the motion mockups:

Keyboard shortcuts

The existing workspace shortcuts will continue to work in GNOME 40. Super+PgUp/PgDown will continue to switch workspace. Adding Shift will continue to move windows between workspaces.

We are also introducing additional shortcuts which align with the horizontal layout. The new shortcut to switch workspace will be Super+Alt+←/→. Moving windows between workspaces will be Super+Alt+Shift+←/→. Super+Alt+↑ will also open the overview and then app grid, and Super+Alt+↓ will close them.

These directional keyboard shortcuts have matching touchpad gestures: three-finger swipes left and right will switch workspaces, and three-finger swipes up and down will open the overview and app grid.

Why horizontal?

A few people have pointed out that horizontal workspaces aren’t as clean with horizontal multi-monitor setups. The concern is that, when multiple displays are horizontal, they end up clashing with the layout of the workspaces. There is some truth in that, and we recognise that some users might need to adjust to this aspect of the design.

However, it’s worth pointing out that horizontal workspaces are a feature of every other desktop out there. Not only is it how every other desktop does it, but it is also how GNOME used to do it prior to 3.0, and how GNOME’s classic mode continues to do it. Therefore, we feel that horizontal workspaces and horizontally-arranged displays can get along just fine. If anyone is concerned about this, we’d suggest that you give it a try and see how it goes.

Some people have also asked why we are making the switch to horizontal workspaces at all, which is fair! Here I think that it needs to be understood that horizontal workspaces are fundamental to the design we’re pursuing for 40: the film-strip of workspaces (which proved effective in testing), the clearer organisation of the overview, the coherent touchpad gestures, a dash that can more comfortably scale to include more items, and so on. This is all facilitated by the workspace orientation change, and would not be possible without it.

GNOME ❤ multi-monitor

In case there’s any doubt: multi-monitor is absolutely a priority for us in the shell design and development team. We know that the multi-monitor experience is important to many GNOME users (including many of us who work on GNOME!), and it is something that we’re committed to improving. This applies to both the default workspaces behaviour as well as the workspaces on all displays option.

Multi-monitor considerations regularly featured in the design planning for GNOME 40. They were also a research theme, both in our early discovery interviews and survey, as well as in the diary study that we ran. As a result of this, we are confident that GNOME 40 will provide an excellent multi-monitor experience.

We actually have a few plans for multi-monitor improvements in the future. Some of these pre-date the GNOME 40 work that is currently happening, and we hope to get back to them during the next development cycle. Our ambition is for the multi-monitor story to keep on getting better.

Thanks for reading!

Future of libsoup

The libsoup library implements HTTP for the GNOME platform and is used by a wide range of projects including any web browser using WebKitGTK. This past year we at Igalia have been working on a new release to modernize the project and I’d like to share some details and get some feedback from the community.

History

Before getting into what is changing I want to briefly give backstory to why these changes are happening.

The library has a long history, which I won’t cover all of; it was created in 2000 by Helix Code, which became Ximian, where it was used in projects such as an email client (Evolution).

While it has been maintained to some degree for all of this time, it hasn’t had a lot of momentum behind it. The library has maintained its ABI for 13 years at this point, with ad-hoc feature additions and fixes often added on top. This has resulted in a library that has multiple APIs to accomplish the same task, confusing APIs that don’t follow any convention common within GNOME, and at times odd default behaviors that couldn’t be changed.

What’s Coming

We are finally breaking ABI and making a new libsoup 3.0 release. The goal is to make it a smaller, more simple, and focused library.

Making the library smaller meant deleting a lot of duplicated and deprecated APIs, removing rarely used features, leveraging additions to GLib in the past decades, and general code cleanup. As of today the current codebase is roughly at 45,000 lines of C code compared to 57,000 lines in the last release with over 20% of the project deleted.

Along with reducing the size of the library I wanted to improve the quality of the codebase. We now have improved CI which deploys documentation that has 100% coverage, reports code coverage for tests, tests against Clang’s sanitizers, and the beginnings of automated code fuzzing.

Lastly there is ongoing work to finally add HTTP/2 support improving responsiveness for the whole platform.

There will be follow up blog posts going more into the technical details of each of these.

Release Schedule

The plan is to release libsoup 3.0 with the GNOME 41 release in September. However, we will be releasing a preview build with the GNOME 40 release for developers to start testing against and porting to. All feedback is welcome.

Igalia plans on helping the platform move to 3.0 and will port GStreamer, GVFS, and WebKitGTK, and may help with some applications, so we can have a smooth transition.

For more details on WebKitGTK’s release plans, there is a mailing list thread about it.

The previous release, libsoup 2.72, will continue to get bug-fix releases for the foreseeable future, but no new 2.7x feature releases will happen.

How to Help

You can build the current head of libsoup to test now.

Installing it does not conflict with version 2.x; however, GObject-Introspection-based applications may accidentally import the wrong version (Python, for example, needs gi.require_version('Soup', '2.4') and GJS needs imports.gi.versions.Soup = "2.4"; for the old API), and you cannot load both versions in the same process.

A migration guide has been written to cover some of the common questions, and the documentation and guides have been improved in general.

All bug reports and API discussions are welcome on the bug tracker.

You can also join the IRC channel for direct communication (be patient for timezones): ircs://irc.gimp.net/libsoup.

February 21, 2021

Making hibernation work under Linux Lockdown

Linux draws a distinction between code running in kernel (kernel space) and applications running in userland (user space). This is enforced at the hardware level - in x86-speak[1], kernel space code runs in ring 0 and user space code runs in ring 3[2]. If you're running in ring 3 and you attempt to touch memory that's only accessible in ring 0, the hardware will raise a fault. No matter how privileged your ring 3 code, you don't get to touch ring 0.

Kind of. In theory. Traditionally this wasn't well enforced. At the most basic level, since root can load kernel modules, you could just build a kernel module that performed any kernel modifications you wanted and then have root load it. Technically user space code wasn't modifying kernel space code, but the difference was pretty semantic rather than useful. But it got worse - root could also map memory ranges belonging to PCI devices[3], and if the device could perform DMA you could just ask the device to overwrite bits of the kernel[4]. Or root could modify special CPU registers ("Model Specific Registers", or MSRs) that alter CPU behaviour via the /dev/msr interface, and compromise the kernel boundary that way.

It turns out that there were a number of ways root was effectively equivalent to ring 0, and the boundary was more about reliability (ie, a process running as root that ends up misbehaving should still only be able to crash itself rather than taking down the kernel with it) than security. After all, if you were root you could just replace the on-disk kernel with a backdoored one and reboot. Going deeper, you could replace the bootloader with one that automatically injected backdoors into a legitimate kernel image. We didn't have any way to prevent this sort of thing, so attempting to harden the root/kernel boundary wasn't especially interesting.

In 2012 Microsoft started requiring vendors ship systems with UEFI Secure Boot, a firmware feature that allowed[5] systems to refuse to boot anything without an appropriate signature. This not only enabled the creation of a system that drew a strong boundary between root and kernel, it arguably required one - what's the point of restricting what the firmware will stick in ring 0 if root can just throw more code in there afterwards? What ended up as the Lockdown Linux Security Module provides the tooling for this, blocking userspace interfaces that can be used to modify the kernel and enforcing that any modules have a trusted signature.

But that comes at something of a cost. Most of the features that Lockdown blocks are fairly niche, so the direct impact of having it enabled is small. Except that it also blocks hibernation[6], and it turns out some people were using that. The obvious question is "what does hibernation have to do with keeping root out of kernel space", and the answer is a little convoluted and is tied into how Linux implements hibernation. Basically, Linux saves system state into the swap partition and modifies the header to indicate that there's a hibernation image there instead of swap. On the next boot, the kernel sees the header indicating that it's a hibernation image, copies the contents of the swap partition back into RAM, and then jumps back into the old kernel code. What ensures that the hibernation image was actually written out by the kernel? Absolutely nothing, which means a motivated attacker with root access could turn off swap, write a hibernation image to the swap partition themselves, and then reboot. The kernel would happily resume into the attacker's image, giving the attacker control over what gets copied back into kernel space.

This is annoying, because normally when we think about attacks on swap we mitigate it by requiring an encrypted swap partition. But in this case, our attacker is root, and so already has access to the plaintext version of the swap partition. Disk encryption doesn't save us here. We need some way to verify that the hibernation image was written out by the kernel, not by root. And thankfully we have some tools for that.

Trusted Platform Modules (TPMs) are cryptographic coprocessors[7] capable of doing things like generating encryption keys and then encrypting things with them. You can ask a TPM to encrypt something with a key that's tied to that specific TPM - the OS has no access to the decryption key, and nor does any other TPM. So we can have the kernel generate an encryption key, encrypt part of the hibernation image with it, and then have the TPM encrypt it. We store the encrypted copy of the key in the hibernation image as well. On resume, the kernel reads the encrypted copy of the key, passes it to the TPM, gets the decrypted copy back and is able to verify the hibernation image.

That's great! Except root can do exactly the same thing. This tells us the hibernation image was generated on this machine, but doesn't tell us that it was done by the kernel. We need some way to be able to differentiate between keys that were generated in kernel and ones that were generated in userland. TPMs have the concept of "localities" (effectively privilege levels) that would be perfect for this. Userland is only able to access locality 0, so the kernel could simply use locality 1 to encrypt the key. Unfortunately, despite trying pretty hard, I've been unable to get localities to work. The motherboard chipset on my test machines simply doesn't forward any accesses to the TPM unless they're for locality 0. I needed another approach.

TPMs have a set of Platform Configuration Registers (PCRs), intended for keeping a record of system state. The OS isn't able to modify the PCRs directly. Instead, the OS provides a cryptographic hash of some material to the TPM. The TPM takes the existing PCR value, appends the new hash to that, and then stores the hash of the combination in the PCR - a process called "extension". This means that the new value of the PCR depends not only on the value of the new data, it depends on the previous value of the PCR - and, in turn, that previous value depended on its previous value, and so on. The only way to get to a specific PCR value is to either (a) break the hash algorithm, or (b) perform exactly the same sequence of writes. On system reset the PCRs go back to a known value, and the entire process starts again.

Some PCRs are different. PCR 23, for example, can be reset back to its original value without resetting the system. We can make use of that. The first thing we need to do is to prevent userland from being able to reset or extend PCR 23 itself. All TPM accesses go through the kernel, so this is a simple matter of parsing the write before it's sent to the TPM and returning an error if it's a sensitive command that would touch PCR 23. We now know that any change in PCR 23's state will be restricted to the kernel.

When we encrypt material with the TPM, we can ask it to record the PCR state. This is given back to us as metadata accompanying the encrypted secret. Along with the metadata is an additional signature created by the TPM, which can be used to prove that the metadata is both legitimate and associated with this specific encrypted data. In our case, that means we know what the value of PCR 23 was when we encrypted the key. That means that if we simply extend PCR 23 with a known value in-kernel before encrypting our key, we can look at the value of PCR 23 in the metadata. If it matches, the key was encrypted by the kernel - userland can create its own key, but it has no way to extend PCR 23 to the appropriate value first. We now know that the key was generated by the kernel.

But what if the attacker is able to gain access to the encrypted key? Let's say a kernel bug is hit that prevents hibernation from resuming, and you boot back up without wiping the hibernation image. Root can then read the key from the partition, ask the TPM to decrypt it, and then use that to create a new hibernation image. We probably want to prevent that as well. Fortunately, when you ask the TPM to encrypt something, you can ask that the TPM only decrypt it if the PCRs have specific values. "Sealing" material to the TPM in this way allows you to block decryption if the system isn't in the desired state. So, we define a policy that says that PCR 23 must have the same value at resume as it did on hibernation. On resume, the kernel resets PCR 23, extends it to the same value it did during hibernation, and then attempts to decrypt the key. Afterwards, it resets PCR 23 back to the initial value. Even if an attacker gains access to the encrypted copy of the key, the TPM will refuse to decrypt it.

And that's what this patchset implements. There's one fairly significant flaw at the moment, which is simply that an attacker can just reboot into an older kernel that doesn't implement the PCR 23 blocking and set up state by hand. Fortunately, this can be avoided using another aspect of the boot process. When you boot something via UEFI Secure Boot, the signing key used to verify the booted code is measured into PCR 7 by the system firmware. In the Linux world, the Shim bootloader then measures any additional keys that are used. By either using a new key to tag kernels that have support for the PCR 23 restrictions, or by embedding some additional metadata in the kernel that indicates the presence of this feature and measuring that, we can have a PCR 7 value that verifies that the PCR 23 restrictions are present. We then seal the key to PCR 7 as well as PCR 23, and if an attacker boots into a kernel that doesn't have this feature the PCR 7 value will be different and the TPM will refuse to decrypt the secret.

While there's a whole bunch of complexity here, the process should be entirely transparent to the user. The current implementation requires a TPM 2, and I'm not certain whether TPM 1.2 provides all the features necessary to do this properly - if so, extending it shouldn't be hard, but also all systems shipped in the past few years should have a TPM 2, so that's going to depend on whether there's sufficient interest to justify the work. And we're also at the early days of review, so there's always the risk that I've missed something obvious and there are terrible holes in this. And, well, given that it took almost 8 years to get the Lockdown patchset into mainline, let's not assume that I'm good at landing security code.

[1] Other architectures use different terminology here, such as "supervisor" and "user" mode, but it's broadly equivalent
[2] In theory rings 1 and 2 would allow you to run drivers with privileges somewhere between full kernel access and userland applications, but in reality we just don't talk about them in polite company
[3] This is how graphics worked in Linux before kernel modesetting turned up. XFree86 would just map your GPU's registers into userland and poke them directly. This was not a huge win for stability
[4] IOMMUs can help you here, by restricting the memory PCI devices can DMA to or from. The kernel then gets to allocate ranges for device buffers and configure the IOMMU such that the device can't DMA to anything else. Except that region of memory may still contain sensitive material such as function pointers, and attacks like this can still cause you problems as a result.
[5] This describes why I'm using "allowed" rather than "required" here
[6] Saving the system state to disk and powering down the platform entirely - significantly slower than suspending the system while keeping state in RAM, but also resilient against the system losing power.
[7] With some handwaving around "coprocessor". TPMs can't be part of the OS or the system firmware, but they don't technically need to be an independent component. Intel have a TPM implementation that runs on the Management Engine, a separate processor built into the motherboard chipset. AMD have one that runs on the Platform Security Processor, a small ARM core built into their CPU. Various ARM implementations run a TPM in Trustzone, a special CPU mode that (in theory) is able to access resources that are entirely blocked off from anything running in the OS, kernel or otherwise.

comment count unavailable comments

February 19, 2021

Documentation changes

Back in the late ‘90s, people working on GTK had the exact same problem we have today: how do we document the collection of functions, types, macros, and assorted symbols that we call “an API”? It’s all well and good to strive for an API that can be immediately grasped by adhering to a set of well-defined conventions and naming; but nothing is, or really can be, “self-documenting”.

When GTK 1.0 was released, the documentation was literally stored in handwritten Texinfo files; the API footprint of GTK was small enough to still make it possible, but not really maintainable in the longer term. In 1998, a new system was devised for documenting GTK 1.2:

  • a script, to parse the source files for the various declarations and dump them into machine parseable “templates”, that would then be modified to include the actual documentation, and committed to the source repository
  • a tool that would generate, compile, and run a small program to introspect the type system for things like hierarchy and signals
  • a script to take the templates, the list of symbols divided into logical “sections”, an index file, and generate a bunch of DocBook XML files
  • finally, a script to convert DocBook to HTML or man pages, via xsltproc and an XML stylesheet

Whenever somebody added a new symbol to GTK, they would need to run the script, find the symbol in the template files, write the documentation using DocBook tags if necessary, and then commit the changes alongside the rest of the code.

Since this was 1998, and the scripts had to parse a bunch of text files using regular expressions, they were written in Perl and strung together with a bunch of Makefile rules.

Thus, gtk-doc was born.

Of course, since other libraries needed to provide an API reference to those poor souls using them, gtk-doc ended up being shared across the GNOME platform. We even built part of our website and release infrastructure around it.

At some point between 1998 and 2009, gtk-doc gained the ability to generate those template files incrementally, straight from the C sources; this allowed moving the preambles of each section into the corresponding source file, thus removing the templates from the repository, and keeping the documentation close to the code it references, in the hope it would lead to fewer instances of docs drift.


Between 2009 and 2021, a few things happened:

  1. gobject-introspection has become “a thing”; g-i also parses the C code, and does it slightly more thoroughly than gtk-doc, to gather the same information: declarations, hierarchy, interfaces, properties, and signals, and even documentation, which is all shoved into a well-defined XML file; on top of that, g-i needs annotations in the source to produce a machine-readable description of the C ABI of a library, which can then be used to generate language bindings
  2. turns out that DocBook is pretty terrible, and running xsltproc on large, complex DocBook files is really slow
  3. Perl isn’t really a Hot Language™ like it was in the late ‘90s; many Linux distributions dropped it from the core installation, and not many people speak it that fluently, which means not many people will want to help with a large Perl application

To cope with issue (1), gtk-doc had to learn to parse introspection annotations.

Issue (2) led to replacing DocBook tags inside the inline documentation with a subset of Markdown, augmented with custom code blocks and intra-document anchors for specific sections.

Issue (3) led to a wholesale rewrite in Python, in the hope that more people would contribute to the maintenance of gtk-doc.

Sadly, all three solutions ended up breaking things in different ways:

  1. gtk-doc never really managed to express the introspection information in the generated documentation, outside of references to an ancillary appendix. If an annotation says “this argument can be NULL”, for instance, there’s no need to write “or NULL” in the documentation itself: the documentation tool can write it out for you.
  2. the move to Markdown means that existing DocBook tags in the documentation are now ignored or, worse, misinterpreted for HTML and not rendered; this requires porting all the documentation in every library, in a giant flag day, to avoid broken docs; on top of that, DocBook’s style sheet to generate HTML started exhibiting regressions after a build system change, which led, among other things, to the disappearance of per-version symbols indices
  3. the port to Python probably came too late, and ended up having many, many regressions; gtk-doc is still a pretty complex tool, and it still caters to many different use cases, spanning two decades; as much as its use is documented and tested, its internals are really not, meaning that it’s not an easy project to pick up

Over the past 10 years various projects started migrating away from gtk-doc; gobject-introspection itself shipped a documentation tool capable of generating API references, though it is mostly a demonstrator of potential capabilities rather than an actual tool. Language bindings, on the other hand, adopted the introspection data as the source for their documentation, and you can see it in Python, JavaScript, and Rust.

As much as I’d like to contribute to gtk-doc, I’m afraid we reached the point where we might want to experiment with something more radical, instead of patching things up and ending up breaking what’s left.

So, since we’re starting from the bottom up, let’s figure out what the requirements are for a tool to generate the documentation for GTK:

  • be fast. Building GTK’s API reference takes a long time. The API footprint of GDK, GSK, and GTK is not small, but there’s no reason why building the documentation should take an amount of time comparable to building the library. We moved to Meson because it improved the build times of GTK; we don’t want to run into a new bottleneck now.
  • no additional source parsing. We already parse the C sources in order to generate the introspection data, we don’t need another pass at that.
  • tailored for GTK. Whenever GTK changes, the tool must change with GTK; the output must adapt to the style of documentation GTK uses.
  • integrated with GTK. We don’t want an external dependency that makes it harder to deploy the GTK documentation on non-Linux platforms. Using it as a sub-project would be the best option, followed by being able to install it everywhere without additional, Linux-only dependencies.

The explicit non-goal is to create a general purpose documentation tool. We don’t need that; in fact, we’re actively avoiding it. Regardless of what you’ve been taught at university, or what your geeky instincts tell you, not every problem requires a generic solution. The whole reason why we are in this mess is that we took a tool for generating the GTK documentation and then generalised the approach until it fell apart under its own weight.

If you want a general purpose documentation tool for C and C++ libraries, there are many to choose from:

There’s also gtk-doc: if you’re using it already, I strongly recommend helping out with its maintenance.


Back in November 2020, as a side project while we were closing in on the GTK 4.0 release date, I started exploring the idea of parsing the introspection data to generate the C API reference for GTK. I wanted to start from scratch, and see how far I could go, so I deliberately avoided taking the GIR parser from gobject-introspection; armed only with the GIR schema and a bunch of Python, I ended up writing a decent parser that would be able to load the GTK introspection XML data, including its dependencies, and dump the whole tree of C identifiers and symbols. After a break, at the end of January 2021, I decided to take a page out of the static website generator rule book, and plugged the Jinja templates into the introspection data. The whole thing took about a couple of weeks to go from this:

Everybody loves a tree-like output on the command line

to this:

Behold! My stuff!

My evil plan of generating something decent enough to be usable and then showing it to people with actual taste and web development skills paid off, because I got a whole merge request from Martin Zilz to create a beautiful theme, with support for responsive layout and even for a dark variant:

Amazing what actual taste and skill can accomplish

Like night and day

Turns out that when you stop parsing C files and building small binaries to introspect the type system, and remove DocBook and xsltproc from the pipeline, things get fast. Who knew…

Additionally, once you move the templates and the styling outside of the generator, you can create more complex documentation hierarchies, while retaining the ability for people who are not programmers to change the resulting HTML.

The interesting side effect of using introspection data is that our API reference is now matching what language bindings are able to see and consume—and oh boy, do we suck at that. Part of the fault lies in the introspection parser not being able to cover some of the nastiest parts of C, like macros—though, hopefully, that will improve in the near future; but a lot of issues come from our own API design. Even 10 years after the introduction of introspection, we’re still doing some very dumb things when it comes to C API—and let’s ignore stuff that happened 20 years ago that we haven’t been able to fix yet. Hopefully, the documentation slapping us in the face is going to help us figure things out before they hit stable releases.


What’s missing from gi-docgen? Well, you can look at the 2021.1 milestone on GitLab:

  • more documentation on the ancillary files used for project and template configuration; stabilising the key/value pairs would also be part of the documentation effort
  • client-side search, with symbols exposed to tools like GNOME Builder through something that isn’t quite as tragic as DevHelp files
  • automatic cross-linking with dependencies documented by gi-docgen, especially for libraries with multiple namespaces, like GTK and Pango
  • generating proper dependency files, for the consumption of build tools like Meson

In the meantime, what’s missing for GTK to use this? Mainly, porting the documentation away from the various gtk-doc-isms, like marking symbols with sigils, or using |[ ... ]| to define code blocks. Additionally, since the introspection scanner only attaches SECTION blocks to the documentation element of a class, all the sections that operate as “grab bag of related symbols” need to be moved to a separate Markdown file.

It must needs be remarked: gi-docgen is not a generic solution for documenting C libraries. If your library does not have introspection, doesn’t use type classes, or has a very different ABI exposed through introspection than the actual C API, then you’re not going to find it useful—and I don’t have any plans to cater to your use cases either. You should keep using gtk-doc if it still works for you; or you may want to consider other documentation tools.

Anything that complicates the goal of this tool—generating the API reference for GTK and ancillary libraries—is completely out of scope.


February 18, 2021

GTK happenings

GTK 4.2 is due out in March – it will not be an enormous release, just incremental improvements. But besides the usual bug fixes and performance improvements, there are a few things that are worth calling out individually.

A new GL Renderer

Christian Hergert has been hard at work, creating a new GL renderer for GTK. The initial motivation for this work was the desire to improve our rendering performance on macOS, where the GL drivers are not quite as forgiving as on Linux. Apart from that, starting over with a new renderer gives us a chance to apply all the things we’ve learned while working on our current GL renderer, and to reorganize the code with an eye towards future improvements, such as reordering and batching of draw commands.

The new renderer hasn’t been merged yet, but it is closing in on feature parity and may well make it into 4.2. We will likely include both the old and the new renderer, at least for a while.

Popover Shadows

Ever since GtkPopover was introduced with its signature ‘beak’, popovers have clipped everything outside their border, since we needed the tip of the beak to be consistently placed. With the new xdg-popup based implementation in GTK4, we have a positioning protocol that is expressive enough to place popovers in a way that makes the ‘beak’ point where it is supposed to while allowing shadows underneath and around the popover. As with window shadows, the popover shadows are outside of the input region, so clicks go through to the underlying window.

This is a minor thing, but it may have a noticeable impact in giving the UI depth and structure.

Better Input

GtkIMContextSimple is the input method implementation that is built into GTK. It is used when we don’t have a platform input method to use, such as the Wayland text protocol. GtkIMContextSimple only does a few things. One of them is to interpret hexadecimal input for Unicode characters, with Control-Shift-u. Another is that it handles Compose sequences, such as

<Compose Key> <a> <acute>

to enter an á character.

Most Compose sequences start with that Compose Key, and the keyboard settings in GNOME 40 will include a way to assign a key on your keyboard to this function.

On the GTK side, we’ve addressed a few longstanding complaints with the Compose sequence support. Apart from its built-in sequences, GTK parses X11 Compose files. The format of these files is described in Compose(5), but until now, GTK’s support for this format was pretty incomplete. With GTK 4.2, we’re improving this to:

  • Allow sequences of up to 20 keys (previously, the limit was 7)
  • Generate multiple characters (notably, this allows Unicode Emoji sequences, such as 👩‍❤️‍👨)
  • Support hexadecimal codepoints

These are nice improvements for people who make their own Compose sequences by editing ~/.Compose. But what about the rest of us? One traditionally difficult aspect of using Compose sequences is that you have to know the sequences by heart, and type them blindly. There is no visual feedback at all until the sequence is complete and the final character appears. A while ago, IBus improved on this by showing the characters of the incomplete sequence as underlined preedit text, similar to what we do for hexadecimal Unicode input.

After we copied their approach in GTK, initial user feedback was mixed, mainly because the official glyph for the Compose Key (⎄) is a bit distracting when it appears unexpectedly. So I went back to the drawing board and came up with a different approach:

I hope this works better. Feedback welcome!

All of these input changes will also appear in GTK 3.24.26.

A pre-supplied "custom" keyboard layout for X11

Last year I wrote about how to create a user-specific XKB layout, followed by a post explaining that this won't work in X. But there's a pandemic going on, which is presumably the only reason people haven't all switched to Wayland yet. So it was time to figure out a workaround for those still running X.

This Merge Request (scheduled for xkeyboard-config 2.33) adds a "custom" layout to the evdev.xml and base.xml files. These XML files are parsed by the various GUI tools to display the selection of available layouts. An entry in there will thus show up in the GUI tool.

Our rulesets, i.e. the files that convert a layout/variant configuration into the components to actually load, already have wildcard matching [1]. So the custom layout will resolve to the symbols/custom file in your XKB data dir - usually /usr/share/X11/xkb/symbols/custom.

This file is not provided by xkeyboard-config. It can be created by the user though and whatever configuration is in there will be the "custom" keyboard layout. Because xkeyboard-config does not supply this file, it will not get overwritten on update.

From XKB's POV it is just another layout and it thus uses the same syntax. For example, to override the +/* key on the German keyboard layout with a key that produces a/b/c/d on the various Shift/Alt combinations, use this:


default
xkb_symbols "basic" {
    include "de(basic)"
    key <AD12> { [ a, b, c, d ] };
};
This example includes the "basic" section from the symbols/de file (i.e. the default German layout), then overrides the 12th alphanumeric key from left in the 4th row from bottom (D) with the given symbols. I'll leave it up to the reader to come up with a less useful example.

There are a few drawbacks:

  • If the file is missing and the user selects the custom layout, the results are... undefined. For run-time configuration like GNOME it doesn't really matter - the layout compilation fails and you end up with the one the device already had (i.e. the default one built into X, usually the US layout).
  • If the file is missing and the custom layout is selected in the xorg.conf, the results are... undefined. I tested it and ended up with the US layout but that seems more by accident than design. My recommendation is to not do that.
  • No variants are available in the XML files, so the only accessible section is the one marked default.
  • If a commandline tool uses a variant of custom, the GUI will not reflect this. If the GUI goes boom, that's a bug in the GUI.

So overall, it's a hack[2]. But it's a hack that fixes real user issues and given we're talking about X, I doubt anyone notices another hack anyway.

[1] If you don't care about GUIs, setxkbmap -layout custom -variant foobar has been possible for years.
[2] Sticking with the UNIX principle, it's a hack that fixes the issue at hand, is badly integrated, and weird to configure.

February 16, 2021

fwupd 1.5.6

Today I released fwupd 1.5.6 with the usual smattering of new features and bugfixes. These are some of the more interesting ones:

With the help of a lot of people we added support for quite a bit of new hardware. The slightly odd GD32VF103 as found in the Longan Nano is now supported, and more of the DFU ST devices with huge amounts of flash. The former should enable us to support the Pinecil device soon and the latter will be a nice vendor announcement in the future. We’ve also added support for RMI PS2 devices as found in some newer Lenovo ThinkPads, the Starlabs LabTop L4 and the new System76 Keyboard. We’ve refactored the udev and usb backends into self contained modules, allowing someone else to contribute new bluetooth peripheral functionality in the future. There are more than a dozen teams of people all working on fwupd features at the moment. Exciting times!

One problem that has been reported was that downloads from the datacenter in the US were really slow from China, specifically because the firewall was deliberately dropping packets. I assume compressed firmware looks quite a lot like a large encrypted message from a firewall’s point of view, and thus it was only letting through ~20% of the traffic. All non-export-controlled public firmware is now also mirrored onto the IPFS, and we experimentally fall back to peer-to-peer downloads where the HTTP download failed. You can prefer IPFS downloads using fwupdmgr --ipfs update although you need to have a running ipfs daemon on your local computer. If this works well for you, let me know and we might add support for downloading metadata in the future too.

We’ve fully integrated the fwupd CI with oss-fuzz, a popular fuzzing service from Google. Generating horribly corrupt firmware files has found a few little memory leaks, files that cause fwupd to spin in a loop, and even the odd crash. It was a lot of work to build each fuzzer into a small static binary using a 16.04-based container, but it was well worth the effort. All new PRs will run the same fuzzers checking for regressions, which also means new plugins now also have to implement building new firmware (so the test payload can be a few tens of bytes, not 32kB), rather than just parsing it.

On some Lenovo hardware there’s a “useful” feature called Boot Order Lock that means whatever the OS adds as a BootXXXX entry, the old boot list gets restored on the next boot. This breaks firmware updates using fwupdx64.efi, and until we can detect BOL from a kernel interface we also check if our EFI entry has been deleted by the firmware on the next boot and give the user a more helpful message than just “it failed”. Also, on some Lenovo hardware we’re limiting the number of UEFI updates to be deployed on one reboot as they appear to have slightly quirky capsule coalesce behavior. In the same vein we’re also checking the system clock is set approximately correctly (as in, not set to before 2020…) so we can tell the user to check the clock on the machine rather than just failing with an obscure certificate error.

Now that there are systems that can be switched to coreboot (and back to EDK2 again), we’ve polished up the “switch-branch” feature. We’re also checking both BIOSWE and BLE before identifying systems that can be supported. We’re also including the lockdown status in uploaded UEFI reports and have added SBAT metadata to the fwupd EFI binary, which will be required for future versions of shim and grub – so for distro fwupd binaries the packager will need to set meson build options like -Defi_sbat_distro_id=. There are examples in the fwupd source tree.

Developing With The Flatpak CLI

Flatpak is a very powerful tool for development, and is well-integrated into GNOME Builder. This is what I’d recommend for most developers. But what if you use a plain text editor? Barring Visual Studio Code, there aren’t many extensions for common text editors to use flatpak. This tutorial will go over how to use flatpak to build and test your apps with only the command line.

Building & Testing Your App

First, you’ll need to have flatpak installed and a flatpak manifest. You’ll also need the right runtime and SDK installed. Then, you’ll need to set up the environment to build your application. Navigate to your project directory from the terminal. Once there, run the following command:

# $MODULE_NAME is the name of your application's flatpak module
$ flatpak-builder --repo=repo --build-only --stop-at=$MODULE_NAME --force-clean flatpak_app $APP_ID.json

This will fetch all the dependencies declared in the flatpak manifest and build each one, stopping just before your app’s module. Then, you need to use flatpak build to run the build commands for your application.

First you configure the buildsystem:

# $CONFIG_OPTS should match the `config-opts` for the module in your flatpak manifest
$ flatpak build --filesystem=host flatpak_app meson _build --prefix=/app $CONFIG_OPTS

Then run the build & install command:

# $BUILD_OPTS should match the build-options in the flatpak manifest.
# `append-path` turns into `--env=PATH=$PATH:$APPEND_PATH`
$ flatpak build --filesystem=host $BUILD_OPTS flatpak_app ninja -C _build install

After that, you can also use flatpak build to test the application:

# $FINISH_ARGS would be the `finish-args` from your flatpak manifest
$ flatpak build --filesystem=host $FINISH_ARGS flatpak_app $APP_EXECUTABLE_NAME

Creating Dist Tarballs

One of the responsibilities an app maintainer has is creating tarballs of their applications for distribution. This can be challenging, as the maintainer needs to build using an environment that has all dependencies – including versions of dependencies that aren’t yet released.

Flatpak allows developers to do this in a simple way. If you haven’t run the command above to fetch and build your dependencies, do so now. Also run the configuration step. Now you should be ready to run the dist command:

$ flatpak build --filesystem=host flatpak_app ninja -C _build dist

Now you should have a release tarball ready in _build/meson-dist.

Notes

While this method works for development, it’s a bit clumsy. I highly recommend using GNOME Builder or Visual Studio Code with the flatpak extension. These tools handle the clumsiness for you, allowing you to focus entirely on development. However, if you find yourself wanting to develop using flatpak and don’t want to use either of the above options, this is the way to do so.

February 15, 2021

Shell UX Changes: The Research

This post is part of an ongoing series about the overview design changes which are being worked on for GNOME 40. (For previous posts, see here.)

Ongoing user research has been a major feature of this design initiative, and I would say that it is by far the best-researched project that I have worked on. Our research has informed the design as it has proceeded, resulting in particular design choices and changes, which have improved the overall design and will make it a better experience for users. As a result of this, we have a much greater degree of confidence in the designs.

This post is intended as a general overview of the research that we’ve been doing. I’m excited to share this, as a way of explaining how the design came about, as well as sharing some of the insights that we’ve found along the way.

What we did

In total, we conducted six separate research exercises as part of this initiative. These ran alongside the design and development effort, in order to answer the questions we had at each stage of the process.

Many of the research exercises were small and limited. This reflected our ambition to use a lean approach, which suited the limited resources we had available. These smaller exercises were supplemented with a larger piece of paid research, which was conducted for us by an external research company. In what follows I’ll go through each exercise in order, and give a brief description of what was done and what we found out.

So far the data from our research isn’t publicly available, largely because it contains personal information about test participants which can’t be shared. However, we do plan on making a version of the data available, with any identifying information removed.

1. Exploratory interviews

I already blogged about this exercise back in September. A summary: the initial interviews were an exploratory, sensitising exercise, to find out how existing users used and felt about GNOME Shell. We spoke to seven GNOME users who had a range of roles and technical expertise. Each participant showed us their desktop setup and how they used it, and we asked them questions to find out how the existing shell design was working for them.

We found out a bunch of valuable things from those early interviews. A good portion of the people we spoke to really liked the shell, particularly its minimalism and the lack of distractions. We also discovered a number of interesting behaviours and user types around window and workspace usage.

2. Initial behavioural survey

The initial survey exercise was also covered in my September blog post. It was intended to provide some numbers on app, window and workspace usage, in order to provide some insight into the range of behaviours that any design changes needed to accommodate.

The survey was a deliberately quick exercise. We found out that most people had around 8 open windows, and that the number of people with a substantially higher number of open windows was low. We also found that most people were only using a single workspace, and that high numbers of workspaces in use (say, above six) were quite rare.

3. Running apps experiment

During the early design phase, the design team was interested in the role of the running apps in the dash. To explore this, I ran a little experiment: I got colleagues to install an extension which removes running apps from the dash, and asked them to record any issues that they experienced.

We found that most people got along just fine without running apps in the dash. Despite this, in the end we decided to keep the running apps, based on other anecdotal reports we’d seen.

4. External user testing

Thanks to support from Endless, we were lucky to have the opportunity to contract out some research work. This was carried out by the insights and experimentation agency Brooks Bell, and was contracted under the umbrella of the GNOME Foundation.

The research occurred at a point in the process where we were weighing up key design decisions, which the research was designed to help us make in an informed manner.

Methodology

The research itself consisted of 20 moderated user testing sessions, which were conducted remotely. Each participant tested GNOME 3.38 and then either a prototype of the new design or Endless OS. This provided us with a means to compare how each of the three desktops performed, with a view to identifying the strengths and weaknesses of each.

Each session involved a combination of exploration and evaluation. Participants were interviewed about their typical desktop usage, and were invited to recreate a typical desktop session within the test environment. They were then asked to perform some basic tasks. After testing both environments, they were required to fill in a post-test survey to give feedback on the two desktops they had tried.

Research participants included both existing GNOME users, as well as users who had never used GNOME before. The sample included a range of technical abilities and experience levels. It also included a mix of professional and personal computer users. The study was structured in such a way that we could analyse the differences between different user groups, so we could get a sense of how each desktop performed with different user groups. Participants were recruited from six countries: Brazil, Canada, Germany, Italy, United Kingdom and the USA.

Brooks Bell were a great firm to work with. Our own design and development team were able to have detailed planning conversations with them, as well as lengthy sessions to discuss the research findings. We were also given access to all the research data, to enable us to do our own analysis work.

Findings

The external research provided a wealth of useful information and analysis. It addressed the specific research questions that we had for the study, but also went further to address general questions about how and why the participants responded to the designs in the way that they did, as well as identifying a number of unrelated design issues which we hope to address in future releases.

One of the themes in the research was the degree to which users positively responded to UI conventions with which they were already familiar. This was reflected both in how participants responded to the designs in general, and in how successfully they were able to use specific aspects of them. For example, interactions with the app grid and dash were typically informed by the participants’ experiences with similar UIs from other platforms.

This might seem like an obvious finding, however the utility of the research was in demonstrating how this general principle played out in the specific context of our designs. It was also very interesting to see how conventions from both mobile and desktop informed user behaviour.

In terms of specific findings, there wasn’t a single clear story from the tests, but rather multiple overlapping findings.

Existing GNOME users generally felt comfortable with the desktop they already use. They often found the new design to be exciting and liked the look and feel, and some displayed a negative reaction to Endless, due to its similarity with Windows.

“I like the workspaces moving sideways, it feels more comfortable to switch between them.”
—Comment on the prototype by an existing GNOME user

All users seemed to find the new workspace design to be more engaging and intuitive, in comparison with the workspaces in GNOME 3.38. This was one particular area where the new design seemed to perform better than existing GNOME Shell.

“[It feels] quicker to navigate through. It [has a] screen where I can view my desktop at the top and the apps at the bottom, this makes it quicker to navigate.”
—Comment on the prototype by a non-GNOME user

On the other hand, new users generally got up to speed more quickly with Endless OS, often due to its similarity to Windows. Many of these testers found the bottom panel to be an easy way to switch applications. They also made use of the minimize button. In comparison, both GNOME 3.38 and the prototype generally took more adjustment for these users.

“I really liked that it’s similar to the Windows display that I have.”
—Comment on Endless OS by a non-GNOME user

5. Endless user testing

The final two research exercises we conducted were used to fill in specific gaps in our existing knowledge, largely as a validation exercise for the design we were working towards. The first of these consisted of 10 remote user testing sessions, conducted by Endless with participants from Guatemala, Kenya and the USA. These participants were picked from particular demographics that are of importance to Endless, particularly young users with limited computing experience.

Each test involved the participant running through a series of basic desktop tasks. Like the tests run by Brooks Bell, these sessions had a comparative element, with participants trying both Endless OS and the prototype of the new design. In many respects, these sessions confirmed what we’d already found through the Brooks Bell study, with participants both responding well to the workspace design in the prototype, and having to adjust to designs that were unfamiliar to them.

“Everything happens naturally after you go to Activities. The computer is working for you, you’re not working for it”
—Tester commenting on the new design

6. Diary study

The diary study was intended to identify any issues that might be encountered with long-term usage, which might have been missed in the previous user tests. Workspaces and multi-monitor usage were a particular focus for this exercise, and participants were selected based on whether they use these features.

The five diary study participants installed the prototype implementation and used it for a week. I interviewed them before the test to find out their existing usage patterns, then twice more over the test period, to see how they were finding the new design. The participants also kept a record of their experiences with the new design, which we referred back to during the interviews.

This exercise didn’t turn up any specific issues with multi-monitor or workspace usage, despite including participants who used those features. In all, the participants generally had a positive response to the new design and preferred it over the existing GNOME Shell they were using. It should be mentioned, however, that this wasn’t universal.

7. Community testing and feedback

While community testing isn’t strictly a research exercise, it has nevertheless been an important part of our data-driven approach for this initiative. One thing that we’ve managed to do relatively successfully is have a range of easy ways to test the new design. This was a priority for us from the start and has resulted in us being able to have a good round of feedback and design adjustment.

It should be noted that those of us on the design side have had detailed follow-up conversations with those who have provided feedback, in order to ensure that we have a complete understanding of the issues as described. (This often requires having background knowledge about users’ setups and usage patterns.) I have personally found this to be an effective way of developing empathy and understanding. It is also a good example of how our previous research has helped, by providing a framework within which to understand feedback.

The main thing that we have got from this stage of the process is testing with a wider variety of setups, which in particular has informed the multi-monitor and workspace aspects of the design.

Reflection

As I wrote in the introduction to this post, GNOME has never had a design initiative that has been so heavily accompanied by research work. The research we’ve done has undoubtedly improved the design that we’re pursuing for GNOME 40. It has also enabled us to proceed with a greater degree of confidence than we would have otherwise had.

We’re not claiming that every aspect of the research we’ve done has been perfect or couldn’t have been improved. There are gaps which, if we were able to do it all again, we would have liked to have filled. But perfect is the enemy of good, and doing some research – irrespective of its issues – is certainly better than doing none at all. Add to this the fact that we have been doing research in the context of an upstream open source project with limited resources, and I think we can be proud of what we’ve achieved.

When you put together the lessons from each of the research exercises we’ve done, the result is a picture of different user segments having somewhat different interests and requirements. On the one hand, we have the large number of people who have never used GNOME or an open source desktop, to whom a familiar design is one that is generally preferable. On the other hand, there are users who don’t want a carbon copy of the proprietary desktops, and there are (probably more technical) users who are particularly interested in a more minimal, pared back experience which doesn’t distract them from their work.

The best way for the GNOME project to navigate this landscape is a tricky question, and it involves a difficult balancing act. However, with the changes that are coming in GNOME 40, we hope that we are starting out on that path, with an approach that adopts some familiar conventions from other platforms while developing and refining GNOME’s unique strengths.

February 14, 2021

zbus and Implementing Async Rust API

zbus

As many readers already know, for the past (almost) two years I've been developing a Rust crate for D-Bus, called zbus. This being my first big Rust project started from scratch, and done almost entirely in my spare time, progress was rather slow for many months. My perfectionism didn't help with the progress either, nor did the fact that implementing a Serde API for the D-Bus format was quite challenging. Then along came my old friend and ex-colleague, Marc-André Lureau, who sped up the progress tenfold, and soon after we had the 1.0 release of zbus.

While my original plan (perfectionism again) was for the API to be primarily async, with the synchronous API mostly just a wrapper around it, it was Marc-André who ended up doing most of the work and coming up with a nice high-level API, and since his use case was primarily synchronous, we decided to go with the synchronous API first. I still believe that was the right thing to do, since neither of us was familiar with async programming in Rust, and going with the original plan would have meant the first release being delayed by at least another half a year.

This may sound very disappointing to readers who come from a glib programming background, but a purely synchronous blocking API in a Rust app is not at all as bad as it would be in a glib+C (or even Vala) app. There is a reason why Rust is famous for its fearless concurrency.

Asynchronous API

However, a first-class asynchronous API is still a must if we're serious about our goal of making D-Bus super easy. This is especially important for UI apps, which should have an easy way to communicate over D-Bus without blocking the UI, and without having to spawn threads, set up communication channels between them, etc.

So for the past many weeks, I've been working on adding async versions of our synchronous types, starting from the low-level Connection, followed by the client-side proxy and hopefully soon the service-side as well. It's been a very interesting challenge to say the least. Coming from Vala background, Rust's async/await syntax felt very familiar to me.

One of the great things is that I was able to achieve one of my original goals and turn our existing types into thin blocking wrappers around their new async siblings, hence avoiding code duplication. Moreover, thanks to the futures and smol-rs crates, so far I've also been able to keep zbus agnostic of specific async runtimes as well.
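
To give an idea of what such a thin blocking wrapper looks like, here is a minimal sketch of the pattern using futures::executor::block_on from the futures crate; the AsyncConnection and Connection types below are made up for illustration and are not zbus's actual API:

use futures::executor::block_on;

// Hypothetical async type, standing in for an async connection.
struct AsyncConnection;

impl AsyncConnection {
    async fn call_method(&self, name: &str) -> String {
        format!("reply to {}", name)
    }
}

// The blocking sibling just wraps the async one and blocks on each call,
// so there is only one real implementation to maintain.
struct Connection {
    inner: AsyncConnection,
}

impl Connection {
    fn call_method(&self, name: &str) -> String {
        block_on(self.inner.call_method(name))
    }
}

fn main() {
    let conn = Connection { inner: AsyncConnection };
    println!("{}", conn.call_method("Ping"));
}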

Esther loves async Rust code

Pain Points

Having said that, while Rust's async/await is a lot of joy from a user's POV, implementing useful async API on top isn't a walk in the park. Here are some of the hurdles I bumped into:

Pinning

If you're going to be doing any async programming in Rust, you'll sooner or later have to learn what this is. It's not at all a hard concept, but what I found especially challenging is the difference between the Unpin and !Unpin types, i.e. which one is which. I also kept forgetting why I need to pin a future before awaiting on it.
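
As a concrete illustration of where pinning shows up in practice, here is a small sketch using the futures crate (not zbus): the future returned by an async fn is !Unpin, so it has to be pinned before it can be polled through a mutable reference, which is what select!-style loops end up doing. The fetch function here is made up for the example.

use futures::{executor::block_on, pin_mut};

async fn fetch() -> String {
    "hello".to_string()
}

async fn run() {
    let fut = fetch();
    // `fut` is !Unpin, so pin it (here: to the stack) before polling it
    // through a mutable reference.
    pin_mut!(fut);
    let s = (&mut fut).await;
    println!("{}", s);
}

fn main() {
    block_on(run());
}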

Implementing Futures based on async API

This is something I found very surprising. I would have thought it would be easy, but it turns out it's not. The reason is the argument you receive in the Future's required poll method:

fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;

Notice how the first argument here is Pin<&mut Self> and not the usual self, &self or &mut self. This is very much justified, since we can't have things moving around while a future is not complete (for which the poll method will potentially be called many times). However, what an async function or block gives you is an abstract type that implements Future, so you can't just do something like:

struct MyFuture;

impl Future for MyFuture {
    type Output = String;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        match some_async_method_returning_string().poll(cx) {
            Poll::Ready(s) => Poll::Ready(format!("string: {}", s)),
            Poll::Pending => Poll::Pending,
        }
    }
}

You'll get:

error[E0599]: no method named `poll` found for opaque type `impl Future` in the current scope
  --> src/lib.rs:13:52
   |
13 |         match some_async_method_returning_string().poll(cx) {
   |                                                    ^^^^ method not found in `impl Future`

While I still don't know how to turn an async call into a Pin<&mut T>, I was informed that the futures crate provides enough API for me not to have to do that, e.g. FutureExt::map. There is also API to convert futures into streams and vice versa. But I had to ask around to figure that out, and it wasn't very obvious to me.
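
To make that concrete, here is a small sketch of the FutureExt::map approach; some_async_method_returning_string() is the same made-up function as in the examples above, not real API:

use std::future::Future;

use futures::FutureExt; // provides the `map` combinator

async fn some_async_method_returning_string() -> String {
    "hello".to_string()
}

// No hand-written poll(): adapt the existing future with a combinator.
fn my_future() -> impl Future<Output = String> {
    some_async_method_returning_string().map(|s| format!("string: {}", s))
}

fn main() {
    let s = futures::executor::block_on(my_future());
    println!("{}", s); // prints "string: hello"
}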

Async closures

Firstly, there are no async closures in Rust yet, and from what I hear it'll be a long time before they're available in stable Rust. In the meantime, a common workaround is for closures to return a Future:

async fn call_async_cb<F, Fut>(func: F) -> String
where
    F: Fn() -> Fut,
    Fut: Future<Output = String>,
{
    func().await
}

async fn pass_async_cb() {
    let s = call_async_cb(||
        async {
            some_async_method_returning_string().await
        }
    ).await;

    println!("{}", s);
}

As you can see, for simple cases like in the sample code above, the code isn't very different from how it would look if async closures were a thing. But let's take a slightly more complicated example, in the sense that the callback takes a reference as an argument:

async fn call_async_cb<F, Fut>(func: F) -> String
where
    F: Fn(&str) -> Fut,
    Fut: Future<Output = String>,
{
    let w = "world".to_string();
    func(&w).await
}

async fn pass_async_cb() {
let s = call_async_cb(|w|
// Also notice the `move` here. W/o it we get another error from the compiler.
async move {
let s = some_async_method_returning_string().await;

format!("{} {}", s, w)
}
).await;

println!("{}", s);
}

which will result in:

error: lifetime may not live long enough
  --> src/main.rs:19:9
   |
18 |       let s = call_async_cb(|w|
   |                              -- return type of closure `impl Future` contains a lifetime `'2`
   |                              |
   |                              has type `&'1 str`
19 | /         async move {
20 | |             let s = some_async_method_returning_string().await;
21 | |
22 | |             format!("{} {}", s, w)
23 | |         }
   | |_________^ returning this value requires that `'1` must outlive `'2`

The only solution I could find for this problem was to pass only owned data to such closures. This isn't ideal at all: a signal handler should really not be getting a Message, only a reference to it. A simple fix would be to pass Arc<Message> instead, but that would require the entire call chain to be converted to use that type. We'll likely be doing exactly that, but it would have been nice not to have to.
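
For completeness, here's a minimal sketch (not the actual zbus API) of the owned-data workaround, using Arc so that the future returned by the closure doesn't borrow anything from the caller:

use std::{future::Future, sync::Arc};

async fn call_async_cb<F, Fut>(func: F) -> String
where
    F: Fn(Arc<String>) -> Fut,
    Fut: Future<Output = String>,
{
    let w = Arc::new("world".to_string());
    func(w).await
}

async fn pass_async_cb() {
    let s = call_async_cb(|w| async move {
        // `w` is owned (a cheap refcounted clone), so the returned future
        // carries no borrowed lifetime and the compiler is happy.
        format!("hello {}", w)
    })
    .await;

    println!("{}", s);
}

fn main() {
    futures::executor::block_on(pass_async_cb());
}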

Errors from hell

At the time of this writing, the async Proxy API in zbus is slightly broken and I only found out about it after I tried to use it with tokio::select in our company's internal codebase:

98  |         Ok(tokio::spawn(async move {
| ^^^^^^^^^^^^ `dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send` cannot be shared between threads safely
|
::: /home/zeenix/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.0.1/src/task/spawn.rs:128:21
|
128 | T: Future + Send + 'static,
| ---- required by this bound in `tokio::spawn`
|
= help: the trait `Sync` is not implemented for `dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send`
= note: required because of the requirements on the impl of `Sync` for `std::ptr::Unique<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>`
= note: required because it appears within the type `Box<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>`
= note: required because it appears within the type `(&str, Box<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>)`
= note: required because of the requirements on the impl of `Sync` for `hashbrown::raw::RawTable<(&str, Box<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>)>`
= note: required because it appears within the type `hashbrown::map::HashMap<&str, Box<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>, RandomState>`
= note: required because it appears within the type `HashMap<&str, Box<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>>`
= note: required because of the requirements on the impl of `Sync` for `async_lock::mutex::MutexGuard<'_, HashMap<&str, Box<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>>>`
= note: required because of the requirements on the impl of `std::marker::Send` for `&async_lock::mutex::MutexGuard<'_, HashMap<&str, Box<dyn FnMut(zbus::Message) -> Pin<Box<dyn futures::Future<Output = std::result::Result<(), zbus::Error>> + std::marker::Send>> + std::marker::Send>>>`

***MORE SCARY INFO HERE***

= note: required because it appears within the type `impl futures::Future`

It's not even immediately obvious whether the issue is a lack of Send or of Sync. After some hints from Sebastian Dröge (who, btw, has been very helpful during this whole endeavour), it turns out the culprit is this line, where we store a &HashMap across an .await boundary. Don't worry if you don't get it, I'm not sure I fully comprehend this either. :)
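
In case it helps, here's a minimal sketch of the general pattern (not the actual zbus code; some_async_call is just a placeholder): a guard held across an .await point becomes part of the future's state, so the future is only as Send/Sync as the guard and whatever sits behind it, and the usual workaround is to drop the guard before awaiting:

use std::collections::HashMap;
use async_lock::Mutex;

async fn some_async_call() {}

// The guard is held across the .await, so it ends up inside the future's
// state and its (lack of) Send/Sync propagates to the whole future.
async fn holds_guard_across_await(handlers: &Mutex<HashMap<String, String>>) {
    let guard = handlers.lock().await;
    some_async_call().await;
    drop(guard);
}

// Common workaround: copy out what you need and drop the guard before
// awaiting anything else.
async fn drops_guard_first(handlers: &Mutex<HashMap<String, String>>) {
    let value = {
        let guard = handlers.lock().await;
        guard.get("key").cloned()
    }; // guard is dropped here
    some_async_call().await;
    println!("{:?}", value);
}

fn main() {
    let handlers = Mutex::new(HashMap::new());
    smol::block_on(async {
        holds_guard_across_await(&handlers).await;
        drops_guard_first(&handlers).await;
    });
}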

Most importantly, the fact that the issue is in zbus code but the error is only revealed when building the code that uses it makes it very unexpected. It's a bit contrary to the usual experience with Rust, where "if it builds, it works" is not just an empty slogan but in most cases actually (surprisingly) true.

Conclusion

I still find this experience much more pleasant than doing the same in Vala or C (especially C). Sure, you won't encounter any of these pains in those languages, but you'll spend years dealing with real problems at runtime that are consequences of the issues Rust makes you confront at build time. But let's not digress: this is not supposed to be yet another Rust sermon from yours truly. :)

Can Rust do better here? For sure! That is the hope and the reason for this blog post.

February 13, 2021

GSoD Project Report

Update GNOME Applications Help Documentation (Update App help) 

Technical Writer –  Pranali Deshmukh

Project Mentor – Shaun McCance

Important Links

Project description

Review and update the help documentation for a number of GNOME applications as tracked in https://wiki.gnome.org/DocumentationProject/Tasks/ApplicationHelp.

Even though GNOME is extremely user-friendly, it is a large and complex system and thus requires some learning to use to the fullest; to help with that, GNOME provides some very useful documentation. This project proposed a review and update of a number of GNOME application Help documents. The status of these documents is tracked in the Application Help wiki (https://wiki.gnome.org/DocumentationProject/Tasks/ApplicationHelp).

Issues I contributed to: 

  1. https://gitlab.gnome.org/GNOME/gnome-boxes/-/issues/611

Update help docs for 3.38

Find and implement changes to the user help docs for Boxes to ensure that the documents are in sync with version 3.38

  • Update the Create Box Page
  • Remove the Share Clipboard option
  • Update the User Interface page
    • Remove the Share Clipboard switch reference from the General tab section
    • Add an entry about the 3D Acceleration switch to the General tab section
  • Create new pages for the following new features:
    • 3D acceleration
    • Identify an OS
    • Edit VM configuration/XML
  2. https://gitlab.gnome.org/GNOME/gnome-calculator/-/issues/185

Update help docs for 3.38

Find and implement changes to the user help docs for the calculator to ensure that the documents are in sync with version 3.38

  • Update the following help pages:
    • Update Superscript and Subscript
    • Update Using the Keyboard
    • Update Using the history view
  • We don’t appear to have a page explaining the modes. We should add one of those and link to it everywhere we mention a mode. Create new pages for the following features:
    • Modes Overview
    • Basic Mode
    • Advanced Mode
    • Financial Mode
    • Programming Mode
    • Keyboard Mode
  3. https://gitlab.gnome.org/GNOME/gnome-user-docs/-/issues/84

Update help docs for 3.38

Find and implement changes to the user help docs for the GNOME Contacts app to ensure that the documents are in sync with version 3.38

Update the following help pages:

  • Update Contact add and remove
  • Update Starting Contacts for the first time
  • Update Connect with your contact
  • Update Edit contact details
  4. https://gitlab.gnome.org/GNOME/gedit/-/issues/349

Help: missing word in gedit-open-recent.page

In https://gitlab.gnome.org/GNOME/gedit/-/blob/master/help/C/gedit-open-recent.page is a string that starts with When hovering with the mouse a recently-used file from the menu, the full path to the file is displayed

here my guess is that the word over is missing, so that it really should start with When hovering with the mouse over a recently-used file

  5. https://gitlab.gnome.org/GNOME/swell-foop/-/issues/19

help: Zealous animation option mixed up

In https://gitlab.gnome.org/GNOME/swell-foop/-/blob/master/help/C/preferences.page is the string

To slow down the animations, uncheck the checkbox.

This feels wrong, as disabling the more advanced graphics probably would increase the speed. Should this be check the <gui style="checkbox">Zealous Animation</gui> checkbox instead?

Furthermore the string before that is:

If it is too fast for you and you would like playing slower then this may be too fast for you.

The end of this string is always true if the start is, and it repeats itself, so it might be nice to change it a bit. Maybe shortening it to something like If it is too fast for you, then you might like playing slower.

  6. https://gitlab.gnome.org/GNOME/file-roller/-/issues/80

archive-edit.page: Add F2 key shortcut.

https://gitlab.gnome.org/GNOME/file-roller/-/blob/master/help/C/archive-edit.page#L48

<p>Right-click on the file and choose <gui style="menuitem">Rename…</gui>.

My suggestion:

<p>Right-click on the file and choose <gui style="menuitem">Rename…</gui> or press <key>F2</key>

  7. https://gitlab.gnome.org/GNOME/gnome-notes/-/issues/75

format-list.page in manual needs to be revised.

Link to original bug (#766195)

Description

While reading help/C/format-list.page however I noticed that the “Bullets” button has been moved to a popover, so this page needs to be revised as agreed in bug 766129.

  8. https://gitlab.gnome.org/GNOME/evince/-/issues/134

Consolidate the many Printing related pages in Evince user help

Those gazillions of pages are hard to maintain and unneededly cumbersome. Plus:

Both could be a simple <p>Please see <link xref="help:gnome-help/printing-*****" href="https://help.gnome.org/users/gnome-help/stable/printing-*****">the GNOME Desktop Help</link>.</p> but maybe we had standalone Windows users in mind here?

My Contributions:

Sr. No. | App | Issue | Title | Start Date | End Date | Commits | Merge Request
1 | Evince | Unlisted | Updated Contribution Guidelines | 02/07/20 | 02/07/20 | https://gitlab.gnome.org/GNOME/evince/-/commit/e13339c8ee1dc0bf876d2af466f287215a2370a9 | https://gitlab.gnome.org/GNOME/evince/-/merge_requests/267
2 | Evince | https://gitlab.gnome.org/GNOME/evince/-/issues/1344 | Consolidate the many Printing related pages in Evince user help | 02/07/20 | 05/07/20 | https://gitlab.gnome.org/GNOME/evince/-/commit/6c4304598e92da50c3035953e82e12ddbc2e12df | https://gitlab.gnome.org/GNOME/evince/-/merge_requests/266
3 | Evince | https://gitlab.gnome.org/GNOME/evince/-/issues/1345 | Update annotations-nav-to-page.png in user docs | 30/06/20 | 30/06/20 | https://gitlab.gnome.org/GNOME/evince/-/commit/c8ae029bedeafe17d22928c0e805ff9b0329e246 | https://gitlab.gnome.org/GNOME/evince/-/merge_requests/265
4 | Bijiben | https://gitlab.gnome.org/GNOME/gnome-notes/-/issues/75 | format-list.page in manual needs to be revised. | 10/07/20 | 13/06/20 | https://gitlab.gnome.org/GNOME/gnome-notes/-/commit/642ce07665f3c1b9ad72e9afcf64de1fd2ace058 | https://gitlab.gnome.org/GNOME/gnome-notes/-/merge_requests/69
5 | File Roller | https://gitlab.gnome.org/GNOME/file-roller/-/issues/80 | archive-edit.page: Add F2 key shortcut | 20/07/20 | 20/07/20 | https://gitlab.gnome.org/GNOME/file-roller/-/commit/5aa547f976754f4f5169e825db5aee5761185fd8 | https://gitlab.gnome.org/GNOME/file-roller/-/merge_requests/41
6 | Seahorse | https://gitlab.gnome.org/GNOME/seahorse/-/issues/272 | Maybe change “date and time” to “date” in Expiration Date | 20/07/20 | 20/07/20 | https://gitlab.gnome.org/GNOME/seahorse/-/commit/07f3a377a9adfb28ac5771235c7255c531c2f4b6 | https://gitlab.gnome.org/GNOME/seahorse/-/merge_requests/138
7 | Seahorse | Unlisted | help: Fix minor grammatical syntax | 27/07/20 | 27/07/20 | https://gitlab.gnome.org/GNOME/seahorse/-/commit/de30c16489c014e568cfad89c4663e79005a50c2 | https://gitlab.gnome.org/GNOME/seahorse/-/merge_requests/139
8 | Swell-foop | https://gitlab.gnome.org/GNOME/swell-foop/-/issues/19 | help: Zealous animation option mixed up | 03/08/20 | 03/08/20 | https://gitlab.gnome.org/GNOME/swell-foop/-/commit/5a741246df63a015dc59110df8866664c4e71aa8 | https://gitlab.gnome.org/GNOME/swell-foop/-/merge_requests/22
9 | Gedit | https://gitlab.gnome.org/GNOME/gedit/-/issues/349 | Help: missing word in gedit-open-recent-page | 03/08/20 | 03/08/20 | https://gitlab.gnome.org/GNOME/gedit/-/commit/1767db62bb60a30952c94046ea095ce8f65abed9 | https://gitlab.gnome.org/GNOME/gedit/-/merge_requests/96
12 | Boxes | https://gitlab.gnome.org/GNOME/gnome-boxes/-/issues/611 | Update help docs for GNOME 3.38 | 25/09/20 | 25/09/20 | https://gitlab.gnome.org/GNOME/gnome-boxes/-/commit/8f8d478f2107dd4680f1109cc2075e97d8fbebac | https://gitlab.gnome.org/GNOME/gnome-boxes/-/merge_requests/379
13 | Boxes | https://gitlab.gnome.org/GNOME/gnome-boxes/-/issues/611 | Update help docs for GNOME 3.38 | 03/10/20 | 03/10/20 | https://gitlab.gnome.org/GNOME/gnome-boxes/-/commit/ca9aa0f710571d94cbad98b13e98d40f8e5213c7 | https://gitlab.gnome.org/GNOME/gnome-boxes/-/merge_requests/381
14 | Boxes | https://gitlab.gnome.org/GNOME/gnome-boxes/-/issues/611 | Update help docs for GNOME 3.38 | 04/10/20 | 04/10/20 | https://gitlab.gnome.org/GNOME/gnome-boxes/-/commit/85d7b858f1c16e3a57dd880014d8a2f96525e104 | https://gitlab.gnome.org/GNOME/gnome-boxes/-/merge_requests/382
15 | Calculator | https://gitlab.gnome.org/GNOME/gnome-calculator/-/issues/185 | Update help docs for 3.38 | 06/10/20 | 06/10/20 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/commit/3c05008cffb4a0cf6b4465795111f77713435921 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/merge_requests/54
16 | Calculator | https://gitlab.gnome.org/GNOME/gnome-calculator/-/issues/185 | Update help docs for 3.38 | 12/10/20 | 12/10/20 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/commit/7d45f8780ca1ef00e93605e756894b82c10905d0 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/merge_requests/54
17 | Calculator | https://gitlab.gnome.org/GNOME/gnome-calculator/-/issues/185 | Update help docs for 3.38 | 12/10/20 | 12/10/20 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/commit/e8459f5b490f6e920cca8a89bf98e3366b3632fc | https://gitlab.gnome.org/GNOME/gnome-calculator/-/merge_requests/54
18 | Calculator | https://gitlab.gnome.org/GNOME/gnome-calculator/-/issues/185 | Update help docs for 3.38 | 12/10/20 | 12/10/20 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/commit/6175b699a78d9f75eebcff49239597379e49330c | https://gitlab.gnome.org/GNOME/gnome-calculator/-/merge_requests/55
19 | Calculator | https://gitlab.gnome.org/GNOME/gnome-calculator/-/issues/185 | Update help docs for 3.38 | 29/10/20 | 10/11/20 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/commit/81a050a7fb581fa6f4bb859be41eaff1697fd793, https://gitlab.gnome.org/GNOME/gnome-calculator/-/commit/0489e807e8b1781c781bdc3aeadbeff0cf699d64 | https://gitlab.gnome.org/GNOME/gnome-calculator/-/merge_requests/67
20 | Contacts | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/issues/84 | Update help docs for 3.38 | 31/10/20 | 31/10/20 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/commit/9ca74650e28047c9569017c76b4c8080e82a7b1c | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/merge_requests/81
21 | Contacts | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/issues/84 | Update help docs for 3.38 | 02/11/20 | 02/11/20 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/commit/8e8f613f7dcbabad43c150d6c78722ac148f3967 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/merge_requests/82
22 | Contacts | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/issues/84 | Update help docs for 3.38 | 02/11/20 | 02/11/20 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/commit/cefad457401016eb8cea6437dcd14188df7b9ee6 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/merge_requests/83
23 | Contacts | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/issues/84 | Update help docs for 3.38 | 02/11/20 | 02/11/20 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/commit/71ec9adcb64b85aeb73fba0ba69460b2c5f58812 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/merge_requests/84
24 | Contacts | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/issues/84 | Update help docs for 3.38 | 02/11/20 | 02/11/20 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/commit/1e3d1241bc2e23d14bb0848b3e255d405c425d60 | https://gitlab.gnome.org/GNOME/gnome-user-docs/-/merge_requests/85

Summary of the current state of the project

Currently, core projects like Boxes, Calculator and Contacts have been updated to GNOME 3.38, which was the target version of GNOME for the scope of GSoD. While most of the merge requests have been merged, some are still under review.

Along with that, I am going to continue my work on updating documents for GNOME; my next task is updating Evince.

Challenges:

  • The very first challenge was applying for GSoD, choosing the project and preparing the proposal.
  • Learning Open Source project workflows and Technical Writing tools
  • Finding potential issues and working with developers to make sure that the App Help Documentation is complete, easy to access and easy to understand.
  • Understanding the existing documentation and creating new documentation from scratch.

Key Learnings:

  • Finding issues in existing documentation and filing them upstream on GitLab so that others can also work on them, while summarizing my own work under one issue.
  • How open source communities collaborate remotely using communication channels like IRC and BlueJeans, and manage work through version control systems like GitLab, which allow software projects to keep track of all versions and revert to previous ones if necessary.
  • Working as a technical writer for GNOME helped me learn technical writing tools like Mallard. Along with that, I got to test applications like Boxes, Contacts and Calculator in detail, and I also had the chance to install Fedora 32 and 33 on my system, which was a great learning experience.
  • With GSoD I not only learned technical writing but also picked up blog writing skills.

Plans after GSoD

GNOME user documentation for the core projects is up to date, but there are other tasks which need to be completed, and I would love to continue my work with GNOME after GSoD. Since I like what I am doing in GNOME, and I noticed that my university is not much involved with open source and students in my class have very little understanding of it, I decided to continue my work with GNOME and take it on as my final year project so I can write a paper about open source based on my experience. This gives me a chance to continue doing what I love in open source alongside my university studies, while sharing my knowledge with others.

February 12, 2021

Add Unsubscribe link in emails using Google Apps Script

When setting up our email marketing campaigns or newsletters, one thing that is often forgotten is the Unsubscribe link. Not providing an option to unsubscribe from the mailing list can land our emails in spam. In this blog, we will look at how we can add an Unsubscribe link to emails sent using Google Apps Script.

1. Setting up a Spreadsheet

The first task is to set up a Spreadsheet.

  • Create a new Google Spreadsheet and name the sheet emails.
  • Add the following fields in the top row of our spreadsheet (the code below expects email, unsubscribe_hash and subscribed columns).
Format for Google Spreadsheet

2. Writing a Hash Function

To provide a secure way to unsubscribe, we need to create a unique token for each of our subscribers. Google Apps Script provides us with utility functions to create a hash of a string using the MD5 hashing algorithm. The following function is used to create a hash of the string provided as a parameter.

function getMD5Hash(value) {
  const digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5,
                                         value,
                                         Utilities.Charset.UTF_8);
  let hash = '';
  for (let i = 0; i < digest.length; i++) {
    let byte = digest[i];
    if (byte < 0) byte += 256;
    let bStr = byte.toString(16);
    if (bStr.length == 1) bStr = '0' + bStr;
    hash += bStr;
  }
  return hash;
}

Since it's practically impossible for two different strings to end up with the same hash by accident, this is a reasonable way to provide unsubscribe tokens to our subscribers in our marketing campaigns or newsletters. However, there is a security problem here. If anyone knows the email address of one of our subscribers, they can easily compute the hash and unsubscribe that person from our email list. So, to make the hash impossible to guess, we can add some randomness to our email string. We can create a random string and append it to the original email string before hashing. The following snippet of code will help us achieve our purpose.

function getMD5Hash(value) {
  value = value + generateRandomString(8); // added this
  const digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5,
                                         value,
                                         Utilities.Charset.UTF_8);
  let hash = '';
  for (let i = 0; i < digest.length; i++) {
    let byte = digest[i];
    if (byte < 0) byte += 256;
    let bStr = byte.toString(16);
    if (bStr.length == 1) bStr = '0' + bStr;
    hash += bStr;
  }
  return hash;
}

function generateRandomString(length) {
  const randomNumber = Math.pow(36, length + 1) - Math.random() * Math.pow(36, length);
  const string = Math.round(randomNumber).toString(36).slice(1);
  return string;
}
Google Spreadsheet with Unsubscribe Hashes

3. Writing Email Template

Let us create a basic HTML template for testing our Google Apps Script unsubscribe feature. Our email template contains the unsubscribe link, to which we pass two parameters, email and unsubscribe_hash. When the subscriber taps this link, it will send a GET request to our Google Apps Script deployed as a Web App.

<!DOCTYPE html>
<html>
  <head>
    <base target="_top">
  </head>
  <body>
    <h1>We are testing our unsubscribe feature</h1>
    <a href="{{WEBAPP_URL}}?email={{EMAIL}}&unsubscribe_hash={{TOKEN}}">Unsubscribe</a>
  </body>
</html>

Make sure to replace the values in curly braces.

4. Writing Unsubscribe Code

The final step to bring our workflow together is to write the code that handles the unsubscribe functionality. In our Main.gs, let us add the following code to handle the GET request as we discussed earlier:

function doGet(e) {
  const email = e.parameter['email'];
  const unsubscribeHash = e.parameter['unsubscribe_hash'];
  const success = unsubscribeUser(email, unsubscribeHash);
  if (success) return ContentService.createTextOutput().append('You have unsubscribed');
  return ContentService.createTextOutput().append('Failed');
}

The above script is pretty self-explanatory. First of all, we retrieve the email and unsubscribe_hash from the query parameters and pass them to our unsubscribeUser function. Based on the output of our function, we return an appropriate response.

Let us write the code for unsubscribeUser:

function unsubscribeUser(emailToUnsubscribe, unsubscribeHash) {  
  // get the active sheet which contains our emails
  let sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('emails');

  // get the data in it
  const data = sheet.getDataRange().getValues();
  
  // get headers
  const headers = data[0];

  // get the index of each header
  const emailIndex = headers.indexOf('email');
  const unsubscribeHashIndex = headers.indexOf('unsubscribe_hash');
  const subscribedIndex = headers.indexOf('subscribed');
  
  // iterate through the data, starting at index 1
  for (let i = 1; i < data.length; i++) {
    const row = data[i];
    const email = row[emailIndex];
    const hash = row[unsubscribeHashIndex];

    // if the email and unsubscribe hash match with the values in the sheet
    // then update the subscribed value to 'no'
    if (emailToUnsubscribe === email && unsubscribeHash === hash) {
      sheet.getRange(i+1, subscribedIndex+1).setValue('no');
      return true;
    }
  }

  // No matching subscriber was found.
  return false;
}

In the above function, we simply iterate over our Google Sheet and check the details of every subscriber. If the subscriber’s email and unsubscribe hash match those sent as query parameters, we unsubscribe the subscriber by updating the value in the sheet.

Results

Let us send a test email to our subscriber specified in the Google Sheet.

Email with Unsubscribe link

We can see that we have received our email with an option to Unsubscribe. Let us unsubscribe and check back our sheet.

Google Sheet with updated data about subscriber

Oo yeah! We can see that the value of the subscribed field has changed to no. Using this workflow, we can give our subscribers an option to opt out of our mailing list for newsletters or marketing emails.

February 10, 2021

Your Service is not Open Source

Open sourcing the code to your SaaS is insufficient. For a service to be truly Open Source, we need to effectively enable users to contribute to the running SaaS itself.

February 06, 2021

All new yelp-tools

I’ve just released the 40.alpha release of yelp-tools, the collection of tools for building and maintaining your documentation in GNOME. This is the first release using the new Meson build system. More importantly, it’s the first release since I ported the tools from shell scripts to Python.

Porting to Python is a pretty big deal, and it comes with more improvements than you might expect. It fixes a number of issues that are just difficult to do right in a shell script, and it’s significantly faster. For some commands, it can be as much as 20 times faster.

But that’s not all. You can now provide a config file with default values for all command-line arguments. This is useful, for example, with the --version option for yelp-check status. Previously, to ensure you weren’t getting stale status information, everybody had to remember to pass --version. Now, you can set the current version in your config file, and it will always do the right thing for everybody.

It gets better. The config file can specify custom checkers for yelp-check. I blogged about custom checkers a couple months ago. You can use XPath expressions to make assertions about a document. This is very similar to how Schematron works. But now you don’t have to figure out how to write a Schematron file, call xmllint with it, and parse the output.

Here’s an example of how to ensure that every page uses XInclude to include a boilerplate legal.xml file:

[namespaces]
mal = http://projectmallard.org/1.0/
xi = http://www.w3.org/2001/XInclude

[check:gnome-info-legal-xi]
select = /mal:page/mal:info
assert = xi:include[@href='legal.xml']
message = Must include legal.xml
xinclude = false

To run this check, you simply call:

yelp-check gnome-info-legal-xi

For more examples, check out the config file I already added to gnome-user-docs.

What I’d like to do now is figure out a good way to share custom checkers across modules, and then add CI pipelines to all GNOME modules with user docs to ensure consistency.

Why most programming language performance comparisons are most likely wrong

For as long as programming languages have existed, people have fought over which one of them is the fastest. These debates have ranged from serious scientific research to many a heated late night bar discussion. Rather than getting into this argument, let's look at the problem at a higher level, namely: how would you compare the performance of two different programming languages? The only really meaningful approach is to do it empirically, that is, implementing a bunch of test programs in both programming languages, benchmarking them and then declaring the winner.

This is hard. Really hard. Insanely hard in some cases and very laborious in any case. Even though the problem seems straightforward, there are a ton of error sources that can trip up the unaware (and even many very-much-aware) performance tester.

Equivalent implementations?

In order to make the two implementations comparable they should be "of equal quality". That is, they should have been implemented by people with roughly the same amount of domain knowledge as well as programming skills in their chosen language. This is difficult to organise. If the implementations are written by different people, they may approach the problem with different algorithms making the relative performance not a question of programming languages per se, but of the programming approaches chosen by each programmer.

Even if both implementations are written by the same person using the same algorithm, there are still problems. Typically people are better at some programming languages than others. Thus they tend to provide faster implementations in their favourite language. This causes bias, because the performance is not a measure of the programming languages themselves, but rather of the individual programmer. These sorts of tests can be useful in finding usability and productivity differences, but not so much for performance.

Thus you might want to evaluate existing programs written by many expert programmers. This is a good approach, but sometimes even seasoned researchers get it wrong. There is a paper that tries to compare different programming languages for performance and power efficiency using this approach. In their test results one particular program's C implementation was 30% faster than the same program written in C++. This single measurement throws a big shade over the entire paper. If we took the original C source, changed all the source files' extensions from .c to .cpp and recompiled, the end result should have the same performance within a few percentage points. Thus we have to conclude that one of the following is happening (in decreasing order of probability):
  1. The C++ version is suboptimally coded.
  2. The testing methodology has a noticeable flaw.
  3. The compiler used has a major performance regression for C++ as opposed to C.
Or, in other words, the performance difference comes somewhere else than the choice of programming language.

The difficulty of measurement

A big question is how one actually measures the performance of any given program. A common approach is to run the test multiple times in a row and then do something like the following:
  • Handle outliers by dropping the points at extreme ends (that is, the slowest and fastest measurements)
  • Calculate the mean and/or median for the remaining data points
  • Compare the result between different programs, the one with the fastest time wins
Those who remember their high school statistics lessons might calculate standard deviation as well. This approach seems sound and rigorous, but it contains several sources of systematic error. The first of these is quite surprising and has to do with noise in measurements.

Most basic statistical tools assume that the error is normally distributed with an average value of zero. If you are measuring something like temperature or speed this is a reasonable assumption. In this case it is not. A program's measured time consists of the "true" time spent solving the problem and overhead that comes from things like OS interruptions, disk accesses and so on. If we assume that the noise is gaussian with a zero average, then what it means is that the physical machine has random processes that make the program run faster than it would if the machine were completely noise free. This is, of course, impossible. The noise is strongly non-gaussian simply because it can never have a negative value.

In fact, the measurement that is the closest to the platonic ideal answer is the fastest one. It has the least amount of noise interference from the system. That is the very same measurement that was discarded in the first step when outliers were cleaned out. Sometimes doing established and reasonable things makes things worse.
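
As a rough illustration (a toy benchmark in Rust, not a real harness), here is what the two summaries look like side by side: the textbook trimmed mean versus simply keeping the fastest run:

use std::time::{Duration, Instant};

fn benchmark<F: FnMut()>(mut f: F, runs: usize) -> Vec<Duration> {
    (0..runs)
        .map(|_| {
            let start = Instant::now();
            f();
            start.elapsed()
        })
        .collect()
}

fn main() {
    // A trivial workload; the compiler may optimise much of it away, which
    // is fine for the purposes of the illustration.
    let mut samples = benchmark(|| { (0..1_000_000u64).sum::<u64>(); }, 30);
    samples.sort();

    // The "textbook" summary: drop the fastest and slowest run, average the rest.
    let trimmed = &samples[1..samples.len() - 1];
    let mean: Duration = trimmed.iter().sum::<Duration>() / trimmed.len() as u32;

    // The arguably better estimate of the noise-free time: the fastest run.
    let fastest = samples[0];

    println!("trimmed mean: {:?}, fastest: {:?}", mean, fastest);
}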

Statistics even harder

Putting that aside, let's assume we have measurements for the two programs, which do look "sufficiently gaussian". Numerical analysis shows that language #1 takes 10 seconds to run whereas language #2 takes 9 seconds. A 10% difference is notable and thus we can conclude that language #2 is faster. Right?

Well, no. Suppose the actual measurement data look like this:


Is the one on the right faster or not? Maybe? Probably? Could be? Answering this question properly requires going all the way to university-level statistics. First one formulates a null hypothesis, that is, that the two programs have no performance difference. Then one calculates the probability that both of these measurements have come from the same probability distribution. If that probability is small (typically under 5%), then the null hypothesis is rejected and we have shown that one program is indeed faster than the other. This method is known as Student's t-test, and it is commonly used in heavy-duty statistics. Note that some implementations of the test assume gaussian data, and if your data has some other shape, the results you get might not be reliable.
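
For the curious, here is a small sketch (in Rust) of the arithmetic behind it, using Welch's variant of the t statistic; turning the statistic into an actual p-value needs the t-distribution's CDF, which you would normally get from a statistics library rather than write yourself:

fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

fn variance(xs: &[f64]) -> f64 {
    // Unbiased sample variance.
    let m = mean(xs);
    xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() - 1) as f64
}

fn welch_t(a: &[f64], b: &[f64]) -> f64 {
    // Welch's t statistic: difference of means scaled by the pooled standard error.
    let (ma, mb) = (mean(a), mean(b));
    let (va, vb) = (variance(a), variance(b));
    (ma - mb) / (va / a.len() as f64 + vb / b.len() as f64).sqrt()
}

fn main() {
    // Hypothetical runtimes in seconds for the two languages.
    let lang1 = [10.1, 10.3, 9.9, 10.2, 10.0];
    let lang2 = [9.0, 9.4, 9.1, 9.2, 9.3];
    println!("t = {:.2}", welch_t(&lang1, &lang2));
}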

This works for one program, but a rigorous test has many different programs. There are statistical methods for evaluating those, but they get even more complicated. Looking up how they work is left as an exercise to the reader.

All computers' alignment is Chaotic Neutral

Statistics are hard, but fortunately computers are simple, because they are deterministic, reliable and logical. For example, if you have a program and you modify it by adding a single NOP instruction somewhere in the instruction stream, the biggest impact it could possibly have is one extra instruction cycle, which is so vanishingly small as to be unmeasurable. If you do go out and measure it, though, the results might confuse and befuddle you. Not only can this one minor change make the program 10% slower (or possibly even more), it can even make it 10% faster. Yes, doing pointless extra work can make your program faster.

If this is the first time you encounter this issue you probably won't believe it. Some fraction might already have gone to Twitter to post their opinion about this "stupid and wrong" article written by someone who is "clearly incompetent and an idiot". That's ok, I forgive you. Human nature and all that. You'll grow out of it eventually. The phenomenon is actually real and can be verified with measurements. How is it possible that having the CPU do extra work could make the program faster?

The answer is that it doesn't. The actual instruction is irrelevant. What actually matters is code alignment. As code shifts around in memory, its performance characteristics change. If a hot loop gets split by a cache boundary it slows down. Unsplitting it speeds it up. The NOP does not need to be inside the loop for this to happen, simply moving the entire code block up or down can cause this difference. Suppose you measure two programs in the most rigorous statistical way possible. If the performance delta between the two is under something like 10%, you can not reasonably say one is faster than the other unless you use a measurement method that eliminates alignment effects.

It's about the machine, not the language

As programs get faster and faster, optimisation undergoes an interesting phase transition. Once performance gets to a certain level, it is no longer about what the compiler and CPU can do to run the developer's program as fast as possible. Instead it becomes about how the programmer can utilize the CPU's functionality as efficiently as possible. This includes things like arranging your data into a layout that the processor can crunch with minimal effort and so on. In effect this means replacing language-based primitives with hardware-based primitives. In some circles optimization works weirdly backwards, in that the programmer knows exactly what SIMD instructions they want a given loop to be optimized into and then fiddles around with the code until it is. At this point the functionality of the programming language itself is immaterial.

This is the main reason why languages like C and Fortran are still at the top of many performance benchmarks, but the techniques are not limited to those languages. Years ago I worked on a fairly large Java application that had been very thoroughly optimized. Its internals consisted of integer arrays. There were no classes or even Integer objects in the fast path, it was basically a recreation of C inside Java. It could have been implemented in pretty much any programming language. The performance differences between them would have mostly come down to each compiler's optimizer. They produce wildly different performance outcomes even when using the same programming language, let alone different ones. Once this happens it's not really reasonable to claim that any one programming language is clearly faster than others since they all get reduced to glorified inline assemblers.

References

Most of the points discussed here have been gathered from other sources and presentations, which the interested reader is encouraged to look up. These include the many optimization talks by Andrei Alexandrescu as well as the highly informative CppCon keynote by Emery Berger. Despite its venue, the latter is fully programming-language agnostic, so you can watch it even if you are the sort of person who breaks out in hives whenever C++ is mentioned.

February 05, 2021

Seeking for Career Opportunities

11 years ago, 13 year old Nasah landed her first IT job as the secretary in charge of her dad’s documentation, where she did the typing, printing, photocopying, spiral binding and lamination of documents. Luckily, after her dad’s former employee resigned she was given the opportunity to play with machines, which helped her a lot when she was taught computer science in secondary school. Being a developer a couple of years later opened my eyes to the competition involved in landing a real job in this field. After my Outreachy internship at GNOME, I am looking forward to getting more paid internships, remote jobs and even scholarships that will help me pay for a Masters degree in Data Science.

The field of software development is one with lots of pressure. Dealing with power outages, poor internet connectivity, strikes and lockdowns in my country (Cameroon) has taught me how to be resilient even under very harsh conditions. This has made me confident that I can cope with working in other environments which have their own challenges, and with collaborating in teams of diverse individuals, as the Outreachy internship and constant check-ins with other interns and alums have taught me. I code in JavaScript (Angular, HTML and CSS) and would be very excited to work on developer tools for this language. My interest in developer tools is powered by the fact that developers will keep developing solutions and will keep needing tools to facilitate the process. If developer tools keep improving (especially their user experience), it will be easier to develop software solutions, more people will join the field of software development, and hence more solutions will come up to solve the problems we face in our day-to-day lives, making the world a better place to live in. Nonetheless, I am open to gaining skills in other languages used for back-end web development, such as Java and Node.js, to help me become a well-rounded full stack developer.

In 2020 I worked on a Library Management System built with vanilla JavaScript, HTML and CSS (please check the repository for more details https://github.com/Nasah-Kuma/NG-Library). This was one of the first projects I worked on as an attempt to master JavaScript (among a team of four friends). The first challenge I encountered throughout this process was sharing my code with the other team members collaboratively. There were so many times my work was lost because of the challenges faced while resolving merge conflicts. If I learnt something from this experience, it was how to work with and communicate my challenges to other team members.

From August 2019 to November 2019 I was involved in a training program that helped me gain skills in Angular development. I was also appointed team lead for the front-end team of 4 members, helping me to improve my communication and team coordination skills. At the end of this project, we built a product catalog application which lets you perform CRUD (Create, Read, Update and Delete) operations on products (https://github.com/Nasah-Kuma/ProductCatalog). This experience was very exciting because I had to deal with a lot of pressure: something happened to our repository and we lost most of the code a few days before the deadline. Thanks to the determination of our team, we were able to get things working before the presentation. All these experiences were steps that led me to where I am today. In my opinion, the journey is just beginning and I know the world is pregnant with opportunities. I am prepared for the challenges that lie ahead and will gladly welcome them as stepping stones. My Outreachy internship has not only taught me that I can do better but has exposed me to best practices of both coding and communication in the free software world.

In addition to all this, I hold a Bachelor of Engineering in Computer Engineering from the University of Buea, Cameroon, can express myself fluently in English and understand French slightly above the intermediate level. The development of software solutions has changed the world. All my life, I’ve wished to be part of this change.

First look at snowpack

Today is another Red Hat Day of Learning. A while ago I heard about snowpack, a new contender to the trusty old webpack for building modern web projects. Today I finally managed to take a quick look at it. Even with webpack --watch one often needs to wait several seconds, up to half a minute, with some larger Cockpit pages, so the promised split-second builds certainly sound attractive. At first sight it also makes more opinionated choices about sensible defaults, so that one hopefully does not have to write such a wall of boilerplate.

February 04, 2021

Rift CV1 – Testing SteamVR

I’ve had a few people ask how to test my OpenHMD development branch of Rift CV1 positional tracking in SteamVR. Here’s what I do:

  • Make sure Steam + SteamVR are already installed.
  • Clone the SteamVR-OpenHMD repository:
git clone --recursive https://github.com/ChristophHaag/SteamVR-OpenHMD.git
  • Switch the internal copy of OpenHMD to the right branch:
cd subprojects/openhmd
git remote add thaytan-github https://github.com/thaytan/OpenHMD.git
git fetch thaytan-github
git checkout -b rift-kalman-filter thaytan-github/rift-kalman-filter
cd ../../
  • Use meson to build and register the SteamVR-OpenHMD binaries. You may need to install meson first (see below):
meson build
ninja -C build
./install_files_to_build.sh
./register.sh
  • Make sure your USB devices are accessible to your user account by configuring udev. See the OpenHMD guide here: https://github.com/OpenHMD/OpenHMD/wiki/Udev-rules-list
  • Please note – only Rift sensors on USB 3.0 ports will work right now. Supporting cameras on USB 2.0 requires someone implementing JPEG format streaming and decoding.
  • It can be helpful to test OpenHMD is working by running the simple example. Check that it’s finding camera sensors at startup, and that the position seems to change when you move the headset:
./build/subprojects/openhmd/openhmd_simple_example
  • Calibrate your expectations for how well tracking is working right now! Hint: It’s very experimental 🙂
  • Start SteamVR. Hopefully it should detect your headset and the light(s) on your Rift Sensor(s) should power on.

Meson

I prefer the Meson build system here. There’s also a cmake build for SteamVR-OpenHMD you can use instead, but I haven’t tested it in a while and it sometimes breaks as I work on my development branch.

If you need to install meson, there are instructions here – https://mesonbuild.com/Getting-meson.html summarising the various methods.

I use a copy in my home directory, but you need to make sure ~/.local/bin is in your PATH:

pip3 install --user meson

February 03, 2021

Hang out with GNOME at FOSDEM 2021

We’re going to FOSDEM!

FOSDEM is online this year and we’re going to be there with a stand, volunteers, and GNOMEies of all varieties.

A photo of post-it notes below a sign reading "GNOME Love"
“Fosdem 2009” by faerie_eriu is licensed under CC BY-SA 2.0

Stands at FOSDEM will be web pages, including a Matrix chat. Come visit us at the GNOME Stand!

You can join us in the Matrix chat to talk with contributors, staff, board members, and executive director Neil McGovern. Our full schedule is below. Times are displayed in the local time zone for Brussels, which is Central European Time (CET, UTC+1).

Saturday, February 6th

  • 10:00-11:00 – Meet Executive Director Neil McGovern
  • 11:00-12:00 – General Chat Time
  • 12:00-13:00 – General Chat Time
  • 13:00-14:00 – GTK
  • 14:00-15:00 – General Chat Time
  • 15:00-16:00 – Newcomers
  • 16:00-17:00 – Meet the Foundation Staff
  • 17:00-18:00 – GNOME Events (GNOME.Asia, GUADEC, LAS)

Sunday, February 7th

  • 10:00-11:00 – GNOME Circle
  • 11:00-12:00 – General Chat Time
  • 12:00-13:00 – General Chat Time
  • 13:00-14:00 – GTK
  • 14:00-15:00 – Internships with GNOME
  • 15:00-16:00 – Meet the Foundation Board of Directors
  • 16:00-17:00 – Meet the Engagement Team
  • 17:00-18:00 – Panels with Community Engagement Challenge Participants

Saturday, after the booth is closed, we’ll be doing GNOME Beers. Read about it and how to register.

v3dv status update 2021-02-03

So some months have passed since our last update, when we announced that v3dv became Vulkan 1.0 conformant. The main reason for not publishing more posts is that we saw the 1.0 checkpoint as a good moment to hold off on adding big new features, and to focus on improving the codebase (refactoring, clean-ups, etc.) and the already existing features. For the latter we did a lot of work on performance. That alone would deserve a specific blog post, so in this one I will summarize the other stuff we did.

New features

Even if we didn’t focus on adding new features, we were still able to add some:

  • The following optional 1.0 features were enabled: logicOp, alphaToOne, independentBlend, drawIndirectFirstInstance, and shaderStorageImageExtendedFormats.
  • Added support for timestamp queries.
  • Added implementation for VK_KHR_maintenance1, VK_EXT_private_data, and VK_KHR_display extensions
  • Added support for Wayland WSI.

Here I would like to highlight that we started to get feature contributions from outside the initial core of developers that created the driver. VK_KHR_display was submitted by Steven Houston, and Wayland WSI support was submitted by Ella-0. Thanks a lot for that, really appreciated! We hope this begins a trend of having more things implemented by the rpi/mesa community as a whole.

Bugfixing and vulkan tools

Even though the driver is conformant, we were still testing it with several demos and applications, and provided fixes. As an example, we got Sascha Willems’ oit (Order Independent Transparency) demo working:

Sascha Willems’ oit demo on the rpi4

Among the applications that we were testing, we can highlight RenderDoc and gfxreconstruct. The former is a frame-capture based graphics debugger and the latter is a tool that allows capturing and replaying several frames. Both tools are heavily used when debugging and testing Vulkan applications. We verified that they work on the rpi4 (fixing some bugs along the way), and also started to use them to help guide the performance work we are doing.

Fosdem 2021

If you are interested in an overview of the development of the driver during the last year, we are going to present “Overview of the Open Source Vulkan Driver for Raspberry Pi 4” at FOSDEM this weekend (presentation details here).

Previous updates

Just in case you missed any of the updates of the vulkan driver so far:

Vulkan raspberry pi first triangle
Vulkan update now with added source code
v3dv status update 2020-07-01
V3DV Vulkan driver update: VkQuake1-3 now working
v3dv status update 2020-07-31
v3dv status update 2020-09-07
Vulkan update: we’re conformant!

February 01, 2021

NlNet grant for Fractal

Some people already know, but now I’m officially announcing that for the next months I’ll be working full-time on Fractal thanks to a grant from NLnet. My main objective is to integrate end-to-end encryption into the GNOME Matrix client. Since user experience is crucial for getting E2EE right I’ll be working closely with Tobias Bernard from the design team throughout this project.

To give a rough roadmap, these are the main things I’m planning on working towards in the coming months:

  1. Fully use and integrate matrix-rust-sdk
  2. Device (Session) Management
    • List of active sessions
    • Logout (delete) from active sessions
    • View Session ID, Public Name, Last Seen
  3. Conversation Encryption and Decryption: Allow sending and receiving messages. People need to be informed whether a conversation is encrypted or not, and have the possibility to enable encryption (disabling isn’t allowed by design).
  4. User Verification: People can verify the identity of other Matrix users to set the trust level via emoji verification and QR code scanning.
  5. Session Verification: People can verify their own sessions (cross signing) or choose not to trust cross-signed sessions and manually sign other users’ individual sessions.
  6. Key Backups (Secure Backup and Export Keys): Bundle all encryption keys and store them encrypted for backup.
  7. Encrypted Room Search: Needs configuration options, and a local cache of encrypted messages for search.

I still need to figure out what the best approach regarding the planned GTK4 port (and other ongoing structural changes) is going to be, but I’ll be working with the rest of the Fractal team to find a solution, and hopefully get E2EE support into Fractal as soon as possible.

GNOME Data Access 6.0 Released

After years in development, the major 6.0 version of the GNOME Data Access library, known as libgda, has been released!

Vivien Malerba is the champion in number of commits, with 2071, when we look at the history of libgda. The long-stable 5.2 series has been useful for many people out there to create database-oriented applications. I’ve found libgda scripts written in Python very useful when I need to import data from CSV format. Programs written in other languages also take advantage of its features.

The new 6.0 series is a modernization of the heart of libgda, powered by the Meson build system, which pushes its development ahead, along with a lot of fixes to support modern database providers like PostgreSQL, MySQL and SQLite. New features include a new API for data definition, which hides some of the complexity of creating database objects like tables and columns in GDA.

More providers would be better, but GDA now has fewer. Over time interested developers may want to fix the current providers, but adding new ones is not recommended for now.

GDA has an old implementation of the GInterface concept for providers, which makes it hard to implement new ones. A new set of interfaces is required to make implementing new providers easier.

GDA has now fixed a lot of multi-threading issues, which will help in some situations, thanks to modernizing its internals to the latest GObject conventions, like private structs.

GDA is written in C, making it portable, and bindings to other languages are possible, among them the ones supported through GObject Introspection, like Vala, Python and JavaScript. Writing scripts to access database data in Python, for example, is possible.

As for the future, we need to port providers to use GInterface so anyone can write providers easily. VDA has started prototyping these interfaces, so it will be interesting to see how both projects can provide better tools for developers.

There are a lot of things we will need to port to recent technologies, like GDA’s GTK widgets: they need to be ported to GTK 4.0 to provide a set of useful tools to access database data in widgets, leaving behind a lot of bugs.

I hope GDA 6.0 will be as useful for developers as the old 5.2 series is today, for many years to come. I’m a believer in GNOME technologies like GObject, because they are highly portable and thus reusable in many ways. GDA needs an overhaul in how it is developed, so any suggestion is welcome.

Many thanks to all contributors, translators and users; they have kept this software alive for a long time by requesting new features, reporting issues and providing patches. That they have found GDA useful is a big incentive for us to fix and prepare GDA for the future.

January 31, 2021

Rift CV1 – Pose rejection

I spent some time this weekend implementing a couple of my ideas for improving the way the tracking code in OpenHMD filters and rejects (or accepts) possible poses when trying to match visible LEDs to the 3D models for each device.

In general, the tracking proceeds in several steps (in parallel for each of the 3 devices being tracked):

  1. Do a brute-force search to match LEDs to 3D models, then (if matched)
    1. Assign labels to each LED blob in the video frame saying what LED they are.
    2. Send an update to the fusion filter about the position / orientation of the device
  2. Then, as each video frame arrives:
    1. Use motion flow between video frames to track the movement of each visible LED
    2. Use the IMU + vision fusion filter to predict the position/orientation (pose) of each device, and calculate which LEDs are expected to be visible and where.
  3. Try and match up and refine the poses using the predicted pose prior and labelled LEDs. In the best case, the LEDs are exactly where the fusion predicts they’ll be. More often, the orientation is mostly correct, but the position has drifted and needs correcting. In the worst case, we send the frame back to step 1 and do a brute-force search to reacquire an object.

The goal is to always assign the correct LEDs to the correct device (so you don’t end up with the right controller in your left hand), and to avoid going back to the expensive brute-force search to re-acquire devices as much as possible.

What I’ve been working on this week is steps 1 and 3 – initial acquisition of correct poses, and fast validation / refinement of the pose in each video frame, and I’ve implemented two new strategies for that.

Gravity Vector matching

The first new strategy is to reject candidate poses that don’t closely match the known direction of gravity for each device. I had a previous implementation of that idea which turned out to be wrong, so I’ve re-worked it and it helps a lot with device acquisition.

The IMU accelerometer and gyro can usually tell us which way up the device is (roll and pitch) but not which way they are facing (yaw). The measure for ‘known gravity’ comes from the fusion Kalman filter covariance matrix – how certain the filter is about the orientation of the device. If that variance is small this new strategy is used to reject possible poses that don’t have the same idea of gravity (while permitting rotations around the Y axis), with the filter variance as a tolerance.
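
As a rough sketch of the idea (not the actual OpenHMD code, which is written in C; the names and the exact tolerance handling here are made up): compare the "down" direction implied by a candidate pose with the filter's estimate, and reject the pose when the angle between them exceeds a tolerance derived from the filter's orientation variance:

fn angle_between(a: [f32; 3], b: [f32; 3]) -> f32 {
    // Angle between two vectors via the normalised dot product.
    let dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    let na = (a[0] * a[0] + a[1] * a[1] + a[2] * a[2]).sqrt();
    let nb = (b[0] * b[0] + b[1] * b[1] + b[2] * b[2]).sqrt();
    (dot / (na * nb)).clamp(-1.0, 1.0).acos()
}

// Accept a candidate pose only if its idea of "down" agrees with the
// filter's, within a tolerance that grows with the filter's orientation
// uncertainty. Rotations around the Y (gravity) axis are unaffected.
fn gravity_ok(candidate_down: [f32; 3], filter_down: [f32; 3], tolerance_rad: f32) -> bool {
    angle_between(candidate_down, filter_down) <= tolerance_rad
}

fn main() {
    let filter_down = [0.0, -1.0, 0.0];
    let candidate_down = [0.05, -0.99, 0.02];
    println!("{}", gravity_ok(candidate_down, filter_down, 0.1));
}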

Partial tracking matches

The 2nd strategy is based around tracking with fewer LED correspondences once a tracking lock is acquired. Initial acquisition of the device pose relies on some heuristics for how many LEDs must match the 3D model. The general heuristic threshold I settled on for now is that 2/3rds of the expected LEDs must be visible to acquire a cold lock.

With the new strategy, if the pose prior has a good idea where the device is and which way it’s facing, it allows matching on far fewer LED correspondences. The idea is to keep tracking a device even down to just a couple of LEDs, and hope that more become visible soon.

While this definitely seems to help, I think the approach can use more work.

Status

With these two new approaches, tracking is improved but still quite erratic. Tracking of the headset itself is quite good now and for me rarely loses tracking lock. The controllers are better, but have a tendency to “fly off my hands” unexpectedly, especially after fast motions.

I have ideas for more tracking heuristics to implement, and I expect a continuous cycle of refinement on the existing strategies and new ones for some time to come.

For now, here’s a video of me playing Beat Saber using tonight’s code. The video shows the debug stream that OpenHMD can generate via Pipewire, showing the camera feed plus overlays of device predictions, LED device assignments and tracked device positions. Red is the headset, Green is the right controller, Blue is the left controller.

Initial tracking is completely wrong – I see some things to fix there. When the controllers go offline due to inactivity, the code keeps trying to match LEDs to them for example, and then there are some things wrong with how it’s relabelling LEDs when they get incorrect assignments.

After that, there are periods of good tracking with random tracking losses on the controllers – those show the problem cases to concentrate on.

January 29, 2021

Every Contribution Matters

GNOME is lucky to have a healthy mix of paid and volunteer contributors. Today’s post looks at how we can keep it that way.

I had some time free last summer and worked on something that crossed a number of project boundaries. It was a fun experience. I also experienced how it feels to volunteer time on a merge request which gets ignored. That’s not a fun experience; it’s rather demotivating, and it got me thinking: how many people have had the same experience and not come back?

I wrote a script with the Gitlab API to find open merge requests with no feedback, and I found a lot of them. I started to think we might have a problem.
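Here’s a minimal sketch of the idea in JavaScript (Node 18 or later, for the built-in fetch). It’s not my exact script, just an illustration: it lists open merge requests in the GNOME group and keeps the ones whose user_notes_count is still zero, checking only the first page of results.

// Minimal sketch: list open merge requests in the GNOME GitLab group that
// have received no comments yet. A real script would paginate past the
// first 100 results and could send a PRIVATE-TOKEN header for
// authenticated access.
const API = 'https://gitlab.gnome.org/api/v4';

async function mergeRequestsWithoutFeedback(groupPath) {
  const url = `${API}/groups/${encodeURIComponent(groupPath)}/merge_requests` +
    '?state=opened&scope=all&per_page=100';
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`GitLab API returned ${response.status}`);
  }
  const mergeRequests = await response.json();
  // user_notes_count is the number of user comments on the merge request.
  return mergeRequests
    .filter((mr) => mr.user_notes_count === 0)
    .map((mr) => `${mr.web_url}: ${mr.title}`);
}

mergeRequestsWithoutFeedback('GNOME')
  .then((lines) => lines.forEach((line) => console.log(line)))
  .catch(console.error);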

(Gangster cat meme: “Do we have a problem?”)

Code Reviews are Whose Job?

I’ve never seen a clear breakdown within GNOME of who is responsible for what. That’s understandable: we’re an open, community-powered project and things change fast. Even so, we have too much tribal knowledge and newcomers may arrive with expectations that don’t match reality.

Each component of GNOME lists one or more maintainers, and in principle the maintainers review new contributions. Many GNOME maintainers volunteer their time, though. If they aren’t keeping up with reviews, nobody can force them to abandon their lives and spend more time reviewing patches, nor should they; so the solution to this problem can’t be “force maintainers to do X”.

Can we crowdsource a solution instead? Back in 2020 I proposed posting a weekly list of merge requests that need attention. There was a lot of positive feedback so I’ve continued doing this, and now mostly automated the process.

So far a handful of MRs have been merged as a result. The list is limited to MRs marked as “first contribution”, which happens when the submitter doesn’t have anything merged in the relevant project yet. So each success may have a high impact, and hopefully sends a signal that GNOME values your contributions!

Who can merge things, though?

Back to tribal knowledge, because now we have a new problem. If I’m not the maintainer of package X, can I review and merge patches? Should I?

If you are granted a GNOME account, you get ‘developer’ permission to the gitlab.gnome.org/GNOME group. This means you can commit and merge in every component, and this is deliberate:

The reason why we have a shared GNOME group, with the ability to review/merge changes in every GNOME project, is to encourage drive by reviews and contributions. It allows projects to continue improving without blocking on a single person.

— Emmanuele Bassi on GNOME Discourse

Those listed as module maintainers have extra permissions (you can see a comparison between Gitlab’s ‘developer’ and ‘maintainer’ roles here).

On many active projects the culture is that only a few people, usually the maintainers, actually review and merge changes. There are very good reasons for this. Those who regularly dedicate time to keeping the project going should have the final say on how it works, or their role becomes impossible.

Is this documented anywhere? It depends on the project. GTK is a good example, with a clear CONTRIBUTING.md file and list of CODEOWNERS too. GTK isn’t my focus here, though: it does have a (small) team of active maintainers, and patches from newcomers do get reviewed.

I’m more interested in smaller projects which may not have an active maintainer, nor a documented procedure for contributors. How do we stop patches being lost? How do you become a maintainer of an inactive project? More tribal knowledge, unfortunately.

Where do we go from here?

To recap, my goal is that new contributors feel welcome in GNOME by getting a timely response to their contributions. This may be as simple as a comment on the merge request saying “Thanks, I don’t quite know what to do with this.” It’s not ideal, but it’s a big step forward for a newcomer who was, until now, being ignored completely. In some cases the request isn’t even in the right place (translation fixes go to a separate Gitlab project, for example), and it’s easy to help there. That’s more or less where we’re at with the weekly review-request posts.

We still need to figure out what to do with merge requests that look correct, but where it’s not immediately obvious whether they can be merged.

As a first step, I’ve created a table of project maintainers. The idea is to make it a little easier to find who to ask about a project:

Searchable table of project maintainers, at https://gnome-metrics.afuera.me.uk/maintainers.html

I have some more ideas for this initiative:

  • Require each project to add a CONTRIBUTING.md.
  • Agree a GNOME-wide process for when a project is considered “abandoned” — without alienating our many part-time, volunteer maintainers.
  • Show the world that it’s easy and fun to join the global diaspora of GNOME maintainers.

Can you think of anything else we could do to make sure that every contribution matters?

Disable Submit button if Form fields have not changed in a Nuxt/Vue app

Forms are one of the most important aspects of any application. It is considered good UX practice to keep the Save/Submit button disabled as long as the form contents have not changed. In this blog, we will take a look at how we can accomplish this behaviour in a Nuxt/Vue app.


1. Creating a Form Template

Let us create a simple form that will help us understand computed and watch properties. In index.vue in the pages directory, add the following form template:

<template>
  <form>
    <label>Name</label>
    <input v-model='form.name' />

    <label>Age</label>
    <input v-model='form.age' />

    <button :disabled="!changed">Save</button>
  </form>
</template>

Let us go through the template above. We bind the form data to the inputs using v-model. The Save button is disabled whenever changed is false, i.e. when the form fields have not changed.

2. Writing Vuex Code

In this example, we will use the Vuex store’s state, actions and mutations to store and fetch our form data. In a Nuxt app this goes in the store directory (for example, store/index.js):

// initialize the state variables
export const state = () => ({
  form: {}
})

export const actions = {
  // action to setup form data
  // this could also be an API call
  init({ commit }) {
    const data = {
      name: 'Ravgeet',
      age: '21',
    }

    // commit mutation to change the state
    commit('setFormData', data)
  }
}

export const mutations = {
  // mutation to change the state
  setFormData(state, data) {
    state.form = data
  }
}

3. Writing Computed and Watch properties

Our template and Vuex Store are set. Now is the time to implement our application logic in our template’s script. In our pages/index.vue, let us add the following code:

<script>
import _ from 'lodash'

export default {
  data() {
    return {
      changed: false, // useful for storing form change state
      form: {}, // data variable to store current form data binding
    }
  },

  computed: {
    // store the original form data
    originalForm() {
      return this.$store.state.form
    }
  },
  
  watch: {
    // by watching the original form data
    // create a clone of original form data
    // and assign it to the form data variable
    originalForm() {
      this.form = _.cloneDeep(this.originalForm)
    },

    // watch the changes on the form data variable
    form: {
      handler() {
        // using lodash to compare original form data and current form data
        this.changed = !_.isEqual(this.form, this.originalForm)
      },
      // useful to watch deeply nested properties on a data variable
      deep: true,
    },
  },

  created() {
    // dispatch an action to fill the store with data
    this.$store.dispatch('init')
  }
}
</script>

In our computed and watch properties, we need to clone and compare JS objects. Lodash is a great library for working with JS objects, and we can install it with:

$ npm install --save lodash

Now that we have written our code, let us walk through what is happening.

  • When the component is created, the init action is dispatched from the created hook. This action commits a mutation that fills the form state variable in the store.

  • The computed property originalForm is recalculated, since it depends on the form state variable.

  • Because originalForm is being watched, its watcher runs: it makes a deep clone of originalForm and assigns it to the form data variable.

  • Since form is also being watched (with a handler and deep: true), any edit triggers our check: we compare form and originalForm with Lodash’s isEqual and store the result in changed.

At first it looks like something very complex is going on, but once we break things down it makes sense.

Results

Let us open the browser and check whether we have achieved our goal of keeping the Save button disabled until the form fields are changed.


Voila! We have successfully implemented our workflow. This adds to the UX of our application and saves the user from frustration, especially in long forms. If you have any doubts or appreciation for our team, let us know in the comments below. We would be happy to assist you.

January 27, 2021

Call for Project ideas for Google Summer of Code 2021

It is that time of the year again when we start gathering ideas for Google Summer of Code.

This time around we will be posting and discussing proposals in GNOME’s GitLab instance. Therefore, if you have a project idea that fits Google Summer of Code, please file an issue at https://gitlab.gnome.org/Teams/Engagement/gsoc-2021/-/issues/new using the “Proposal” template.

Everybody is welcome to add ideas, but please verify that the ideas are realistic and that mentorship will be available for them. We encourage you to discuss your ideas with designers in #gnome-design to get their input and plan collaboration, especially if your idea is related to one of the core GNOME modules.

Keep in mind that there are a few changes in GSoC this year:

  1. Smaller project size – all students participating in the 2021 program will be working on a 175 hour project (instead of a 350 hour project). This change will also result in a few other changes, including the student stipend being cut in half.
  2. Shortened coding period – the coding period will be 10 weeks with a lot more flexibility for the mentor and student to decide together how they want to spread the work out over the summer. Some folks may choose to stick to a 17-18 hour a week schedule with their students, others may factor in a couple of breaks during the program (for student and mentor) and some may have students focus 30 hours a week on their project so they wrap up in 6 weeks. This also makes it a lot easier for students with finals or other commitments (weddings, etc.) to adjust their schedules.
  3. 2 evaluations (instead of 3) – There will be an evaluation after 5 weeks and the final evaluation will take place after the 10th week. We are also no longer requiring students complete their first evaluation (though we encourage them to do so), so if a student doesn’t complete the first evaluation they will not automatically be removed from the program. They are still required to complete the final evaluation.
  4. Eligibility requirements – there are many ways students are learning in 2020 and we want to acknowledge that, so students who are 18 years old AND currently enrolled in (or accepted into) a post-secondary academic program as of May 17, 2021, or who have graduated from a post-secondary academic program between December 1, 2020 and May 17, 2021, will be allowed to apply to the GSoC program.

If you have any doubts, please don’t hesitate to contact the GNOME GSoC Admins on Discourse or https://chat.gnome.org/channel/outreach

** This is a repost from https://discourse.gnome.org/t/call-for-project-ideas-for-google-summer-of-code-2021/5454 to reach a broader audience. Please share! **

January 24, 2021

Outreachy Progress Report

I’m halfway through my Outreachy internship at the GNOME Foundation. Time flies so fast, right? I’m a little emotional because I don’t want this fun adventure to end so soon. Just roughly five weeks to go!!
Oh well, let’s find out what I’ve been able to achieve over the past eight weeks and what my next steps are…

My internship project is to complete the integration between the GNOME Translation Editor (previously known as Gtranslator) and Damned Lies (DL). This integration involves enabling users to reserve a file for translation directly from the Translation Editor and permitting them to upload po files to DL.

In case you don’t understand any of the terminology, or haven’t heard about these projects before, kindly read this blog post and it will clear up all your doubts.

Let’s move forward!

So far, here are the things I’ve been able to accomplish:

  1. Set up a button and the required functions for the “reserve for translation” feature.
  2. Set up a dialog and the associated functions for the “upload file” feature.
  3. Added the module state to the user interface, which comes in very handy to let users know what state a po file is in within the vertimus workflow, and thus what actions can be performed.
  4. In addition, users can now see custom DL headers on po files downloaded from DL or opened locally, and can even edit those headers as required from the edit header dialog.
  5. Added some endpoints to the DL REST API to authenticate (Token Authentication) a user who wants to perform the “reserve for translation” or “upload file” operation.

What’s left?

  1. The endpoints I added to the DL REST API need to be approved and merged, so that I can write real queries and complete the functions I’ve created for the two features mentioned above. I got some feedback on the merge request I submitted, which I’m currently addressing.
  2. Ensure that memory has been managed properly and handle security vulnerabilities.
  3. Extensive testing and documentation.
  4. Final evaluation and wrapping up.

The big question is: am I on track? Oh well, let’s find out.

I’ve completed 8 out of the 13 weeks of my internship. According to my timeline, I’m supposed to have version one of both features working properly, and I’m about 90% of the way there.

Since my project depends on another project, it’s a little challenging to go as fast as I’d like. This is because I need to communicate with the community members managing the DL project, and getting feedback or convincing them of why my changes are necessary takes quite some time.

Next Steps:

  • I need to push hard for the DL team to merge my changes in time.
  • Stay focused and complete the tasks I have left.

I’m glad all is well, and with some determination and push, I should deliver my project on time.