June 20, 2021

An Update on my GSoC project

This is a short description of what I've been working on.


On 7th June, I started working on the first task of my project (Redesigning Health’s MainView). The objective was to create a popup window that contains an AdwViewSwitcherTitle in the header bar which lets the user switch between tabs (Add Activity Data and Add Weight Data). We might add another tab (Water Intake Data).




I am enjoying it so far!


I will post further updates in the coming weeks.

GSoC 21: Contributing to Gnome libsecret

I’m one of the Google Summer of Code (GSoC 2021) interns working with the GNOME Foundation, and I'm contributing to the libsecret project.

libsecret is a library for storing and retrieving passwords and other secrets. It communicates with the "Secret Service" using DBus - gnome.org

To put it simply, libsecret is a credential/secret/password manager. One of its features lets you store secrets in a file database, essentially a single encrypted file. The key used to encrypt this file is derived from the user's login password. This is not ideal, because the entire security of the file database then rests on the user's login password. The situation can be improved if the keys are protected by hardware. This is where a TPM comes into play.
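
To get a feel for what libsecret does today, here is a quick sketch using secret-tool, the command-line client that ships with libsecret (the attribute names are only illustrative):

# store a secret under a pair of lookup attributes (the secret itself is read from stdin)
secret-tool store --label='Example password' service example-service user alice
# look the secret up again using the same attributes
secret-tool lookup service example-service user alice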

Trusted Platform Module (TPM, also known as ISO/IEC 11889) is an international standard for a secure cryptoprocessor, a dedicated microcontroller designed to secure hardware through integrated cryptographic keys - Wikipedia.

If you don't know anything about TPMs, I'd recommend watching Using the TPM - It's Not Rocket Science (Anymore), a talk by Johannes Holland and Peter Huewe on YouTube. A TPM is a fantastic tool for everyday cryptographic scenarios, and it's not that hard to use thanks to tpm2-tools. However, talking to a TPM via an API (TPM programming, in other words) is not that simple. Actually, it's very similar to rocket science :) There are no books or good developer resources on TPM programming, and that lack of resources makes for a frustrating experience. The TPM developer community, however, is fantastic; they have been helping me since day one.
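
As a taste of what tpm2-tools makes possible, here is a rough sketch of sealing a small secret to the TPM and unsealing it again (flags differ a little between tpm2-tools versions, so treat this as an outline rather than a recipe):

# create a primary key in the TPM
tpm2_createprimary -c primary.ctx
# seal the contents of secret.txt under that primary key
tpm2_create -C primary.ctx -i secret.txt -u seal.pub -r seal.priv
# load the sealed object and read the secret back out
tpm2_load -C primary.ctx -u seal.pub -r seal.priv -c seal.ctx
tpm2_unseal -c seal.ctx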

In simple terms, my goal is to extend libsecret's current file-database encryption/decryption functionality to work with a TPM, so that the TPM handles key generation, wrapping/unwrapping of keys, and key storage. This is very exciting work! Honestly, that was not how it felt in my early days of contributing to libsecret: I knew nothing about libsecret, computer security, cryptography or TPMs. Thanks to both my mentors and the upstream TSS (TPM Software Stack) developers, I'm now confidently finding my way around the project. So, thank you Daiki Ueno for guiding me through every step of the way, from my initial contribution to making my final project proposal for GSoC. And thank you Anderson Sasaki for helping me out with my questions every single day. I would also very much like to thank the upstream TSS developers Peter Huewe, Philip Tricca and Andreas Fuchs for helping me out with all things related to TPMs.

Stay tuned for my next blog post, "Hello World TPM!".

June 17, 2021

GNOME Internet Radio Locator version 11.10 with GeoClue Location support

The latest release of GNOME Internet Radio Locator, 11.10, finally features GeoClue location support (GNOME GitLab commit d0206f2f095b37e1ad1b3f7ead951ad2ea9f5828), since most people don’t live in Boston. Wait a few seconds for your computer's location to be displayed on the map via GeoClue, then click Zoom In/Zoom Out and drag the map to see and listen to radio stations in the location map view. Click on the map marker labels to listen at your location, or search with location text (for example “Cambridge, United Kingdom”) in the blank text input box to switch between radio stations.

GNOME Internet Radio Locator 11 for GNOME 40 is a Free Software program that allows you to easily locate Free Internet Radio stations by broadcasters on the Internet with the help of map and text search.

GNOME Internet Radio Locator 11 for GNOME 40 is developed on the GNOME 40 desktop platform with GNOME Maps, GeoClue, libchamplain and geocode-lib and it requires at least GTK+ 3.0 and GStreamer 1.0 for audio playback.

GNOME Internet Radio Locator 11 for GNOME 40 is available with map marker popups for Internet radio stations in 110 world cities as well as text-based location search for 187 Internet Radio stations in 102 world cities.

You can either zoom/click on the map marker popups to listen to a station or enter city names in the GUI search input field in order to locate radio stations in the city using the text search with auto-completion.

Wait a few seconds to see your current location on the map in the GNOME Internet Radio Locator application.

You can download it from www.gnomeradio.org and the Fedora 34 RPM packages of version 11.10 of GNOME Internet Radio Locator are now also available for free:

gnome-internet-radio-locator-11.10.tar.xz

gnome-internet-radio-locator.spec

gnome-internet-radio-locator-11.10-1.fc34.src.rpm

gnome-internet-radio-locator-11.10-1.fc34.x86_64.rpm

To install version 11.10 of GNOME Internet Radio Locator on Fedora 34 from GNOME Terminal, run the following installation command, which resolves all dependencies:

sudo dnf install http://www.gnomeradio.org/~ole/fedora/RPMS/x86_64/gnome-internet-radio-locator-11.10-1.fc34.x86_64.rpm

To run GNOME Internet Radio Locator 11 for GNOME 40 from GNOME Terminal, run the command

/usr/bin/gnome-internet-radio-locator

To inspect the source code and build the version 11.10 source tree, run

sudo dnf install gnome-common
sudo dnf install intltool libtool gtk-doc geoclue2-devel yelp-tools
sudo dnf install gstreamer1-plugins-bad-free-devel geocode-glib-devel
sudo dnf install libchamplain-devel libchamplain-gtk libchamplain geoclue2
git clone http://gitlab.gnome.org/GNOME/gnome-internet-radio-locator
cd gnome-internet-radio-locator/
./autogen.sh
sudo make install

Script as a Task using VS Code IDE

VS Code comes with a great feature for specifying tasks and running them through the Command Palette. There can be a variety of scripts that we need to run while developing our applications. For example, before releasing a new build, there are a lot of things that need to be done by the release team: bumping the release version, creating release notes, generating the changelog, and the list goes on.

In this tutorial, we will learn how to use VS Code Tasks by taking the example of pre-release commands, ensuring that no step is missed along the way.

Prerequisites

  • A Local Git Repository
  • VS Code Editor
  • Linux Environment

1. Writing Pre Release Script

The first thing we need to do is to create a script - in this case, a bash script. In this script, we will define what steps we need to perform as a part of our pre-release operation.

Let us assume that before releasing, we do two operations. First, we create a .version file and add today’s date to it. Then we create an empty commit with a message - do-production-release.

With the steps determined, let us create a pre-release.sh in the .vscode directory and add the following code:

#!/bin/sh

date > .version
git commit --allow-empty -m "do-production-release"

We can test run the above script by doing:

bash .vscode/pre-release.sh

Make sure to give proper permissions to the script before running it.
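
For example, to make the script executable (not strictly required when invoking it through bash, but good practice):

chmod +x .vscode/pre-release.sh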

2. Setting Tasks

Now comes the most interesting part of the tutorial. VS Code allows us to specify tasks in tasks.json. The beauty of VS Code tasks is that we can run them directly from the Command Palette, which is especially helpful for non-technical members of our team.

Let us create a tasks.json file in the .vscode directory and add the following contents to the file:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Pre-Release Setup",
            "type": "shell",
            "command": "bash",
            "args": ["${workspaceFolder}/.vscode/pre-release.sh"]
        }
    ]
}

It is important to understand what we are doing so that we can customize the workflow according to our needs.

label is used to identify the script in the VS Code Command Palette.

"label": "Pre-Release Setup"

type is set to shell since we need to execute a shell script.

"type": "shell"

command is used to specify the base command to which the arguments are passed.

"command": "bash"

args is an array that provides arguments to the command. ${workspaceFolder} is an internal variable provided by VS Code; it is the absolute path to our project’s root directory.

"args": ["${workspaceFolder}/.vscode/pre-release.sh"]

3. Running Tasks

Let us open the VS Code Command Palette using Ctrl + Shift + P, type Tasks: Run Task and press Enter.

VS Code Command Palette to run tasks

We will be presented with a list of the tasks that we specified in tasks.json. We will select Pre-Release Setup and press Enter. We will see the task output in the VS Code integrated terminal.

VS Code Command Palette to select tasks

Conclusion

We now have a good overview of how we can use VS Code tasks to run our scripts as tasks in a better way. We can also add more tasks like running pre-staging release, running pre-dev release and more.

June 16, 2021

Integrating sandboxed Vala apps with the host system through xdg-desktop-portals

Portals are a mechanism through which applications can interact with the host environment from within a sandbox. They give the ability to interact with data, files, and services without the need to add sandbox permissions.

Examples of capabilities that can be accessed through portals include opening files through a file chooser dialog, or printing. More information about portals can be found in Sandbox Permissions.

Some portals, such as the FileChooser one, provide an almost seamless experience without much extra code on the app side. For other portals, you usually need some code to talk to the portal’s DBus interface or use libportal.

Vala was designed specifically for the development of GNOME apps, and it has some nice syntax-sugar that makes the communication with DBus pretty simple to implement.

GNOME Boxes is written in Vala and, for this reason, instead of consuming libportal, I introduced a small singleton Portal class that centralizes the whole portal communication logic for the app. This turned out to be quite convenient, so I am copy-pasting it in other Vala apps I work on, and sharing this here in case it can be useful to you too. 🙂

This works because in Vala you can define a namespace matching the desired DBus interface name and, with annotations, bind objects, properties, and methods to a DBus service. See the Vala DBus Client Samples for more examples.

With the Portal singleton, a call to the Background portal requesting permission for the app to run in the background gets as simple as:

var portals = Portals.get_default ();
yield portals.request_to_run_in_background ((response, results) => {
    if (response == 0) {
        // do something...
    }
});

Notice that this is an async call and you may pass a callback to handle its response.

Nothing written here is new, but I thought it was worth sharing this snippet to help others integrate their apps with xdg-desktop-portals and reduce unnecessary exposure of user data in sandboxed environments.

June 15, 2021

Community Power Part 2: The Process

In part 1 of this series we looked at some common misconceptions about how power works inside the GNOME project and went over the roles and responsibilities of various sub-groups.

With that in place, let’s look at how a feature (or app, redesign, or other product initiative) goes from idea to reality.

The Why

At the base of everything are the motivations for why we embark on new product initiatives. These are our shared values, beliefs, and goals, rooted in GNOME’s history and culture. They include goals like making the system more approachable or empowering third party developers, as well as non-goals, such as distracting people or introducing unnecessary complexity.

Since people across the project generally already agree on these it’s not something we talk about much day-to-day, but it informs everything we do.

This topic is important for understanding our development process, but big enough to warrant its own separate post in this series. I’ll go into a lot more detail there.

The What

At any given moment there are potentially hundreds of equally important things people working on GNOME could do to further the project’s goals. How do we choose what to work on when nobody is in charge?

This often depends on relatively hard to predict internal and external factors, such as

  • A volunteer taking a personal interest in solving a problem and getting others excited about it (e.g. Alexander Mikhaylenko’s multi-year quest for better 1-1 touchpad gestures)
  • A company giving their developers work time to focus on getting a specific feature done upstream (e.g. Endless with the customizable app grid)
  • The design team coming up with something and convincing developers to make it happen (e.g. the Shell dialog redesign in 3.36)
  • A technological shift presenting a rare opportunity to get a long-desired feature in (e.g. the Libadwaita stylesheet refresh enabling recoloring)

For larger efforts, momentum is key: If people see exciting developments in an area they’ll want to get involved and help make it even better, resulting in a virtuous cycle. A recent example of this was GNOME 40, where lots of contributors who don’t usually do much GNOME Shell UI work pitched in during the last few weeks of the cycle to get it over the line.

If something touches more than a handful of modules (e.g. the app menu migration), the typical approach is to start a formal “Initiative”: This is basically a Gitlab issue with a checklist of all affected modules and information on how people can help. Any contributor can start an initiative, but it’s of course not guaranteed that others will be interested in helping with it and there are plenty of stalled or slow-moving ones alongside the success stories.

The How

If a new app or feature is user-facing, the first step towards making it happen is to figure out the user experience we’re aiming for. This means that at some point before starting implementation the designers need to work through the problem, formulate goals, look at relevant art, and propose a way forward (often in the form of mockups). This usually involves a bunch of iterations, conversations with various stakeholders, and depending on the scale of the initiative, user research.

If the feature is not user-facing but has non-trivial technical implications (e.g. new dependencies) it’s good to check with some experienced developers or the release team whether it fits into the GNOME stack from a technical point of view.

Once there is a more or less agreed-upon design direction, the implementation can start. Depending on the size and scope of the feature there are likely additional design or implementation questions that require input from different people throughout the process.

When the feature starts getting to the point where it can be tested by others it gets more thorough design reviews (if it’s user facing), before finally being submitted for code review by the module’s maintainers. Once the maintainers are happy with the code, they merge it into the project’s main branch.


In the next installment we’ll look at what this power structure and development process mean for individual contributors wanting to work towards a specific goal, such as getting their pet bug fixed or feature implemented.

Until then, happy hacking!

Welcome Red Hat as a GUADEC Sponsor

Red Hat is a Gold Sponsor of GUADEC 2021! We’re pleased to welcome them back to GUADEC for another year. As a Gold Sponsor, they will be hosting office hours on Wednesday, July 21. This will provide an opportunity for attendees to talk directly with Red Hat, about a range of topics, including the many GNOME-related activities they have going on.

“As one of the many active contributors within the vibrant GNOME community, Red Hat is very pleased to also be among the sponsors of this year’s GUADEC event,” said a representative from Red Hat. “Community is about connections, and as we move into a world that is waking up from decreased social contact, those connections are more important than ever. GNOME remains an incredible part of the open source ecosystem, and the conversations made at GUADEC amongst users and contributors are a big reason why GNOME continues to be successful! We are thrilled to be a part of these conversations and look forward to participating in the GUADEC 2021 online event.”

Kristi Progri, lead organizer of GUADEC, says, “On behalf of everyone on the GUADEC organizing team, I would like to express our sincere gratitude for the generous sponsorship of GUADEC. We’re happy they’re joining us again at GUADEC to help build GNOME and show the community what they are working on.”

About Red Hat

Red Hat is the world’s leading provider of open source software solutions, using a community-powered approach to provide reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT.

About GUADEC

GUADEC is the GNOME community’s largest conference, bringing together hundreds of users, contributors, community members, and enthusiastic supporters together for a week of talks and workshops. It takes place July 21 – 25 and will be online. This year’s keynote speakers are Hong Phuc Dang and Shauna Gordon-McKeon. Registration for GUADEC 2021 is open, please visit guadec.org to sign up.

About GNOME

GNOME is a free and open-source software environment project supported by a non-profit foundation. Together, the community of contributors and the Foundation create a computing platform and software ecosystem, composed entirely of free software, that is designed to be elegant, efficient, and easy to use.

The GNOME Foundation is a non-profit organization that believes in a world where everyone is empowered by technology they can trust. We do this by building a diverse and sustainable free software personal computing ecosystem.

Sovereignty on a Federated System: problems we faced on GNOME’s Matrix instance

This post follows an introduction to Matrix with e-mails, where I explain that Matrix is a federated system. Federation can be either public or private. A public server can communicate with any other server, except the ones which are explicitly avoided. Meanwhile, a private server can only communicate with a selected list of other servers. Private federation is often deployed between entities that can trust each other, for example between universities.

June 14, 2021

2021-06-14 Monday

  • E's birthday - remote silent-monks happy-birthday action, presents at breakfast, dropped them into school.
  • Planning call; mail, lunch, performance team call; re-discovered Android-Studio's love of consuming all memory and burning us deeply into swap.

June 13, 2021

2021-06-13 Sunday

  • All Saints in the morning, back for a pizza lunch, some music. A cream tea with Mary & Nicky - been so long since we saw them; lovely.
  • Subscribed to Musescore, some quartet action. J. took H. & N. off to stay with B&A for a bit. David over for the end of the cream tea, caught up & watched Johnny English with the babes - relaxing.

June 11, 2021

Community Power Part 1: Misconceptions

People new to the GNOME community often have a hard time understanding how we set goals, make decisions, assume responsibility, prioritize tasks, and so on. In short: They wonder where the power is.

When you don’t know how something works it’s natural to come up with a plausible story based on the available information. For example, some people intuitively assume that since our product is similar in function and appearance to those made by the Apples and Microsofts of the world, we must also be organized in a similar way.

This leads them to think that GNOME is developed by a centralized company with a hierarchical structure, where developers are assigned tasks by their manager, based on a roadmap set by higher management, with a marketing department coordinating public-facing messaging, and so on. Basically, they think we’re a tech company.

This in turn leads to things like

  • People making customer service style complaints, like they would to a company whose product they bought
  • General confusion around how resources are allocated (“Why are they working on X when they don’t even have Y?”)
  • Blaming/praising the GNOME Foundation for specific things to do with the product

If you’ve been around the community for a while you know that this view of the project bears no resemblance to how things actually work. However, given how complex the reality is it’s not surprising that some people have these misconceptions.

To understand how things are really done we need to examine the various groups involved in making GNOME, and how they interact.

GNOME Foundation

The GNOME Foundation is a US-based non-profit that owns the GNOME trademark, hosts our Gitlab and other infrastructure, organizes conferences, and employs one full-time GTK developer. This means that beyond setting priorities for said GTK developer, it has little to no influence on development.

Update: As of June 14, the GNOME Foundation no longer employs any GTK developers.

Individual Developers

The people actually making the product are either volunteers (and thus answer to nobody), or work for one of about a dozen companies employing people to work on various parts of GNOME. All of these companies have different interests and areas of focus depending on how they use GNOME, and tend to contribute accordingly.

In practice the line between “employed” contributor and volunteer can be quite blurry, as many contributors are paid to work on some specific things but also additionally contribute to other parts of GNOME in their free time.

Maintainers

Each module (e.g. app, library, or system component) has one or more maintainers. They are responsible for reviewing proposed changes, making releases, and generally managing the project.

In theory the individual maintainers of each module have more or less absolute power over those modules. They can merge any changes to the code, add and remove features, change the user interface, etc.

However, in practice maintainers rarely make non-trivial changes without consulting/communicating with other stakeholders across the project, for example the design team on things related to the user experience, the maintainers of other modules affected by a change, or the release team if dependencies change.

Release Team

The release team is responsible for coordinating the release of the entire suite of GNOME software as a single coherent product.

In addition to getting out two major releases every year (plus various point releases) they also curate what is and isn’t part of the core set of GNOME software, take care of the GNOME Flatpak runtimes, manage dependencies, fix build failures, and other related tasks.

The Release Team has a lot of power in the sense that they literally decide what is and isn’t part of GNOME. They can add and remove apps from the core set, and set system-wide default settings. However, they do not actually develop or maintain most of the modules, so the degree to which they can concretely impact the product is limited.

Design Team

Perhaps somewhat unusually for a free software project GNOME has a very active and well-respected design team (If I do say so myself :P). Anything related to the user experience is their purview, and in theory they have final say.

This includes most major product initiatives, such as introducing new apps or features, redesigning existing ones, the visual design of apps and system, design patterns and guidelines, and more.

However: There is nothing forcing developers to follow design team guidance. The design team’s power lies primarily in people trusting them to make the right decisions, and working with them to implement their designs.

How do things get done then?

No one person or group ultimately has much power over the direction of the project by themselves. Any major initiative requires people from multiple groups to work together.

This collaboration requires, above all, mutual trust on a number of levels:

  • Trust in the abilities of people from other teams, especially when it’s not your area of expertise
  • Trust that other people also embody the project’s values
  • Trust that people care about GNOME first and foremost (as opposed to, say, their employer’s interests)
  • Trust that people are in it for the long run (rather than just trying to quickly land something and then disappear)

This atmosphere of trust across the project allows for surprisingly smooth and efficient collaboration across dozens of modules and hundreds of contributors, despite there being little direct communication between most participants.


This concludes the first part of the series. In part 2 we’ll look at the various stages of how a feature is developed from conception to shipping.

Until then, happy hacking!

Typesetting a full book part II, Scribus

Some time ago I wrote a blog post on what it's like to typeset an entire book using nothing but LibreOffice. One of the comments mentioned that LO does not do a great job of aligning text. This is again probably because it needs to copy MS Word's behaviour, which means greedy line splitting. Supposedly Scribus does this a lot better, but the only way to be really sure was to typeset the whole text with Scribus. So that's what I did (using the latest 1.5 release from Flathub).

Workflow for Scribus

Every program has the things it is good for and things it's not that good for. Scribus' strengths lie in producing output with fairly short pieces of text with precise layout requirements, especially if there are many images. A traditional "single flow of text" is not that, so there are some things you need to plan for.

First of all, a Scribus document should not be created until the text is almost completely finished. Doing big changes (like adding text to existing chapters, changing the physical page size, etc.) can become quite tedious. Scribus also does not handle long pieces of text particularly smoothly. I tried loading all 350-ish pages into a single linked frame sequence. It sort of worked, but things got quite laggy quite quickly. Eventually I converged on a layout where every chapter was its own set of linked frames. The text was imported directly from LO files that held one chapter each. The original had just one big LO file, so I had to split it up by hand for the import. If the original had been done with master documents, this would have been simpler.

The table of contents had to be done by hand again. Scribus has support for tables, but they could not be used, because tables drew outlines around each cell and I could not find a way to switch that off. Websearching found several pages with info, but none of them worked. It also turns out that you can not add page references to table cells, only to text frames. No, I don't know why either. The option was greyed out in the menus and trying to sneakily copypaste a page reference from a text frame to a table caused a segfault.

Issues discovered

While LO was surprisingly bug free, Scribus was less so and I encountered many bugs and baffling missing features, such as:
  • Scribus would sometimes create empty text frames far outside the document (i.e. to page 600 on a 300 page document)
  • Text frames got a strange empty character at their end which would cause text overflow warnings, deleting it did not help as the empty characters kept reappearing
  • Adding a page reference to an anchor point would always link to the page where the linked frame sequence started, not where the anchor was placed
  • Text is not hyphenated automatically, only by selecting a text frame and then selecting extras > hyphenate text in the main menu, one would imagine hyphenation being a paragraph style property instead
  • I managed to create an anchor point that does not exist anywhere except the mark list, but deleting it leads to an immediate segfault
None of these obstacles were insurmountable, but they made for a less than smooth experience. Eventually the work was done, and here is how the two compare (LO on the left, Scribus on the right).

As you can probably tell, Scribus creates more condensed output. The settings were the same for both programs (automatically translated from LO styles by Scribus, not verified by hand), and LO's output file was 339 pages compared to 326 for Scribus.

Which one should you use then?

Like most things in life, that depends. If your document has a notable amount of mathematics, then you most likely want to go with LaTeX. If the document is something like a magazine or you require the highest typographical quality possible, then Scribus is a good choice. For "plain old books" the question becomes more complicated.

If you need a fully color managed workflow, then Scribus is the only viable option. If the default output of LO is good enough for you, the document has few figures and you are fine with needing to have a great battle at the end to line the images up, LO provides a fairly smooth experience.  You have to use styles properly, though, or the whole thing will end up in tears. LO is especially suitable for documents with lots of levels, headings and cross references between the two. LaTeX is also very good with those, but its unfortunate downside is that defining new styles is really hard. So is changing fonts, so you'd better be happy with Computer Modern. If the document has lots of images, then LaTeX's automatic figure floats make a ton of manual work completely disappear.

Original data

The original source documents as well as the PDF output for both programs can be found in this GitHub repo.

June 10, 2021

The Wondrous World of Discoverable GPT Disk Images

TL;DR: Tag your GPT partitions with the right, descriptive partition types, and the world will become a better place.

A number of years ago we started the Discoverable Partitions Specification which defines GPT partition type UUIDs and partition flags for the various partitions Linux systems typically deal with. Before the specification all Linux partitions usually just used the same type, basically saying "Hey, I am a Linux partition" and not much else. With this specification the GPT partition type, flags and label system becomes a lot more expressive, as it can tell you:

  1. What kind of data a partition contains (i.e. is this swap data, a file system or Verity data?)
  2. What the purpose/mount point of a partition is (i.e. is this a /home/ partition or a root file system?)
  3. What CPU architecture a partition is intended for (i.e. is this a root partition for x86-64 or for aarch64?)
  4. Shall this partition be mounted automatically? (i.e. without being specifically configured via /etc/fstab)
  5. And if so, shall it be mounted read-only?
  6. And if so, shall the file system be grown to its enclosing partition size, if smaller?
  7. Which partition contains the newer version of the same data (i.e. multiple root file systems, with different versions)

By embedding all of this information inside the GPT partition table, disk images become self-descriptive: without requiring any other source of information (such as /etc/fstab), if you look at a compliant GPT disk image it is clear how the image is put together and how it should be used and mounted. This self-descriptiveness in particular breaks one philosophical weirdness of traditional Linux installations: the original source of information about which file system is the root file system is typically embedded in the root file system itself, in /etc/fstab. Thus, in a way, in order to know what the root file system is, you need to know what the root file system is. 🤯 🤯 🤯

(Of course, the way this recursion is traditionally broken up is by then copying the root file system information from /etc/fstab into the boot loader configuration, resulting in a situation where the primary source of information for this — i.e. /etc/fstab — is actually mostly irrelevant, and the secondary source — i.e. the copy in the boot loader — becomes the configuration that actually matters.)

Today, the GPT partition type UUIDs defined by the specification have been adopted quite widely, by distributions and their installers, as well as a variety of partitioning tools and other tools.

In this article I want to highlight how the various tools the systemd project provides make use of the concepts the specification introduces.

But before we start with that, let's underline why tagging partitions with these descriptive partition type UUIDs (and the associated partition flags) is a good thing, besides the philosophical points made above.

  1. Simplicity: in particular OS installers become simpler — adjusting /etc/fstab as part of the installation is no longer necessary, as the partitioning step already puts all the information into place for assembling the system properly at boot. In other words, installing no longer means getting both fdisk and /etc/fstab into place; the former suffices entirely.

  2. Robustness: since partition tables mostly remain static after installation the chance of corruption is much lower than if the data is stored in file systems (e.g. in /etc/fstab). Moreover by associating the metadata directly with the objects it describes the chance of things getting out of sync is reduced. (i.e. if you lose /etc/fstab, or forget to rerun your initrd builder you still know what a partition is supposed to be just by looking at it.)

  3. Programmability: if partitions are self-descriptive it's much easier to automatically process them with various tools. In fact, this blog story is mostly about that: various systemd tools can naturally process disk images prepared like this.

  4. Alternative entry points: on traditional disk images, the boot loader needs to be told which kernel command line option root= to use, which then provides access to the root file system, where /etc/fstab is then found which describes the rest of the file systems. Where precisely root= is configured for the boot loader highly depends on the boot loader and distribution used, and is typically encoded in a Turing complete programming language (Grub…). This makes it very hard to automatically determine the right root file system to use, to implement alternative entry points to the system. By alternative entry points I mean other ways to boot the disk image, specifically for running it as a systemd-nspawn container — but this extends to other mechanisms where the boot loader may be bypassed to boot up the system, for example qemu when configured without a boot loader.

  5. User friendliness: it's simply a lot nicer for the user looking at a partition table if the partition table explains what is what, instead of just saying "Hey, this is a Linux partition!" and nothing else.

Uses for the concept

Now that we cleared up the Why?, let's have a closer look at how this is currently used and exposed in systemd's various components.

Use #1: Running a disk image in a container

If a disk image follows the Discoverable Partition Specification then systemd-nspawn has all it needs to just boot it up. Specifically, if you have a GPT disk image in a file foobar.raw and you want to boot it up in a container, just run systemd-nspawn -i foobar.raw -b, and that's it (you can specify a block device like /dev/sdb too if you like). It becomes easy and natural to prepare disk images that can be booted either on a physical machine, inside a virtual machine manager or inside such a container manager: the necessary meta-information is included in the image, easily accessible before actually looking into its file systems.

Use #2: Booting an OS image on bare-metal without /etc/fstab or kernel command line root=

If a disk image follows the specification, in many cases you can remove /etc/fstab (or never even install it) — as the basic information needed is already included in the partition table. The systemd-gpt-auto-generator logic implements automatic discovery of the root file system as well as all auxiliary file systems. (Note that the former requires an initrd that uses systemd; some more conservative distributions do not support that yet, unfortunately.) Effectively this means you can boot up a kernel/initrd with an entirely empty kernel command line, and the initrd will automatically find the root file system (by looking for a suitably marked partition on the same drive the EFI System Partition was found on).

(Note, if /etc/fstab or root= exist and contain relevant information they always take precedence over the automatic logic. This is in particular useful to tweak things by specifying additional mount options and such.)

Use #3: Mounting a complex disk image for introspection or manipulation

The systemd-dissect tool may be used to introspect and manipulate OS disk images that implement the specification. If you pass the path to a disk image (or block device) it will extract various bits of useful information from the image (e.g. what OS is this? what partitions to mount?) and display it.

With the --mount switch a disk image (or block device) can be mounted to some location. This is useful for looking what is inside it, or changing its contents. This will dissect the image and then automatically mount all contained file systems matching their GPT partition description to the right places, so that you subsequently could chroot into it. (But why chroot if you can just use systemd-nspawn? 😎)
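
A minimal sketch of both uses, assuming a compliant image called foobar.raw and an existing mount point:

# inspect: what OS is inside, which partitions would be mounted where?
systemd-dissect foobar.raw

# mount all of the image's file systems in the right hierarchy under /mnt/image
systemd-dissect --mount foobar.raw /mnt/image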

Use #4: Copying files in and out of a disk image

The systemd-dissect tool also has two switches --copy-from and --copy-to which allow copying files out of or into a compliant disk image, taking all included file systems and the resulting mount hierarchy into account.
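
For example (the paths are illustrative):

# copy a file out of the image into the current directory
systemd-dissect --copy-from foobar.raw /etc/os-release ./os-release

# copy a local file into the image
systemd-dissect --copy-to foobar.raw ./my.conf /etc/my.conf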

Use #5: Running services directly off a disk image

The RootImage= setting in service unit files accepts paths to compliant disk images (or block device nodes), and can mount them automatically, running service binaries directly off them (in chroot() style). In fact, this is the base for the Portable Service concept of systemd.
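
A quick way to try this without writing a unit file is a transient service via systemd-run; the image path and the command run inside it are placeholders, and this is only a sketch:

# run a command from inside the image as a transient service
systemd-run --wait -p RootImage=/path/to/foobar.raw /usr/bin/cat /etc/os-release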

Use #6: Provisioning disk images

systemd provides various tools that can run operations provisioning disk images in an "offline" mode. Specifically:

systemd-tmpfiles

With the --image= switch systemd-tmpfiles can directly operate on a disk image, and for example create all directories and other inodes defined in its declarative configuration files included in the image. This can be useful for example to set up the /var/ or /etc/ tree according to such configuration before first boot.

systemd-sysusers

Similarly, the --image= switch of systemd-sysusers tells the tool to read the declarative system user specifications included in the image and synthesize system users from them, writing them to the /etc/passwd (and related) files in the image. This is useful for provisioning these users before the first boot, for example to ensure UID/GID numbers are pre-allocated, rather than having those allocations delayed until first boot.

systemd-machine-id-setup

The --image= switch of systemd-machine-id-setup may be used to provision a fresh machine ID into /etc/machine-id of a disk image, before first boot.

systemd-firstboot

The --image= switch of systemd-firstboot may be used to set various basic system settings (such as root password, locale information, hostname, …) on the specified disk image, before booting it up.
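
Put together, an offline provisioning pass over a fresh image might look roughly like this (the image name and all values are placeholders):

# create directories/inodes declared in the image's tmpfiles.d/ snippets
systemd-tmpfiles --image=foobar.raw --create
# synthesize system users from the image's sysusers.d/ snippets
systemd-sysusers --image=foobar.raw
# provision a fresh machine ID
systemd-machine-id-setup --image=foobar.raw
# set locale and hostname before first boot
systemd-firstboot --image=foobar.raw --locale=en_US.UTF-8 --hostname=appliance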

Use #7: Extracting log information

The journalctl switch --image= may be used to show the journal log data included in a disk image (or, as usual, the specified block device). This is very useful for analyzing failed systems offline, as it gives direct access to the logs without any further manual analysis.
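
For instance, pulling only the error-level messages out of an image of a failed system could look like this:

journalctl --image=foobar.raw -p err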

Use #8: Automatic repartitioning/growing of file systems

The systemd-repart tool may be used to repartition a disk or image in a declarative and additive way. One primary use-case for it is to run during boot on physical or VM systems to grow the root file system to the disk size, or to add in, format, encrypt, and populate additional partitions at boot.

With its --image= switch the tool may operate on compliant disk images in an offline mode of operation: it will read the partition definitions that shall be grown or created from the image itself, and then apply them to the image. This is particularly useful in combination with --size=, which allows growing disk images to the specified size.

Specifically, consider the following workflow: you download a minimized disk image foobar.raw that contains only the minimized root file system (and maybe an ESP, if you want to boot it on bare metal, too). You then run systemd-repart --image=foobar.raw --size=15G to enlarge the image to 15G, based on the declarative rules defined in the repart.d/ drop-in files included in the image (this means it can grow the root partition, and/or add in more partitions, for example for /srv or so, maybe encrypted with a locally generated key). Then, you proceed to boot it up with systemd-nspawn --image=foobar.raw -b, making use of the full 15G.

Versioning + Multi-Arch

Disk images implementing this specifications can carry OS executables in one of three ways:

  1. Only a root file system

  2. Only a /usr/ file system (in which case the root file system is automatically picked as tmpfs).

  3. Both a root and a /usr/ file system (in which case the two are combined, the /usr/ file system mounted into the root file system, and the former possibly in read-only fashion)

They may also contain OS executables for different architectures, permitting "multi-arch" disk images that can safely boot up on multiple CPU architectures. As the root and /usr/ partition type UUIDs are specific to architectures, this is easily done by including one such partition for x86-64 and another for aarch64. If the image is then used on an x86-64 system the former partition is automatically used, and on aarch64 the latter.

Moreover, these OS executables may be contained in different versions, to implement a simple versioning scheme: when tools such as systemd-nspawn or systemd-gpt-auto-generator dissect a disk image, and they find two or more root or /usr/ partitions of the same type UUID, they will automatically pick the one whose GPT partition label (a 36 character free-form string every GPT partition may have) is the newest according to strverscmp() (OK, truth be told, we don't use strverscmp() as-is, but a modified version with some more modern syntax and semantics, but conceptually identical).

This logic makes it possible to implement a very simple and natural A/B update scheme: an updater can drop multiple versions of the OS into separate root or /usr/ partitions, always updating the partition label to the version included therein once the download is complete. All of the tools described here will then honour this, and always automatically pick the newest version of the OS.

Verity

When building modern OS appliances, security is highly relevant. Specifically, offline security matters: an attacker with physical access should have a difficult time modifying the OS in a way that isn't noticed. Think of a car or a cell network base station: these appliances are usually parked/deployed in environments attackers can get physical access to, so it's essential that in this case the OS itself is sufficiently protected, such that the attacker cannot just mount the OS file system image, make modifications (inserting a backdoor, spying software or similar) and have the system otherwise continue to run without this being immediately detected.

A great way to implement offline security is via Linux's dm-verity subsystem: it allows securely binding immutable disk I/O to a single, short trusted hash value. If an attacker manages to modify the disk image offline, the modified image won't match the trusted hash anymore and will no longer be trusted (depending on policy this then just results in I/O errors being generated, or an automatic reboot/power-off).

The Discoverable Partitions Specification declares how to include Verity validation data in disk images, and how to relate them to the file systems they protect, thus making it very easy to deploy and work with such protected images. For example, systemd-nspawn supports a --root-hash= switch, which accepts the Verity root hash and then automatically assembles dm-verity with this, matching up the payload and Verity partitions. (Alternatively, just place a .roothash file next to the image file.)
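
For example, booting a Verity-protected image in a container might look like this (the hash shown is just a placeholder):

systemd-nspawn --image=foobar.raw --root-hash=6ae2a38f... -b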

Future

The above already is a powerful tool set for working with disk images. However, there are some more areas I'd like to extend this logic to:

bootctl

Similar to the other tools mentioned above, bootctl (which is a tool to interface with the boot loader, and install/update systemd's own EFI boot loader sd-boot) should learn a --image= switch, to make installation of the boot loader on disk images easy and natural. It would automatically find the ESP and other relevant partitions in the image, and copy the boot loader binaries into them (or update them).

coredumpctl

Similar to the existing journalctl --image= logic the coredumpctl tool should also gain an --image= switch for extracting coredumps from compliant disk images. The combination of journalctl --image= and coredumpctl --image= would make it exceptionally easy to work with OS disk images of appliances and extracting logging and debugging information from them after failures.

And that's all for now. Please refer to the specification and the man pages for further details. If your distribution's installer does not yet tag the GPT partition it creates with the right GPT type UUIDs, consider asking them to do so.

Thank you for your time.

Faster image transfer across the network with zsync

Those of us involved in building operating system images using tools such as OpenEmbedded/Yocto Project or Buildroot don't always have a powerful build machine under our desk, or even in the same building on a gigabit link. Our build machine may be in the cloud, or in another office over a VPN running across a slow residential ADSL connection. In these scenarios, repeatedly downloading gigabyte-sized images for local testing can get very tedious.

There are some interesting solutions if you use Yocto: you could expose the shared state over the network and recreate the image, which if the configurations are the same will result in no local compilation. However this isn't feasible if your local machine isn't running Linux or you just want to download the image without any other complications. This is where zsync is useful.

zsync is a tool similar to rsync but optimised for transferring single large files across the network. The server generates metadata containing the chunk information, and then shares both the image and the metadata over HTTP. The client can then use any existing local file as a seed file to speed up downloading the remote file.

On the server, run zsyncmake on the file to be transferred to generate the .zsync metadata. You can also pass -z to tell it to compress the file first if it isn't already compressed.

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic

$ zsyncmake -z core-image-minimal-*.wic

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 4.7K Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.manifest
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic
-rw-r--r-- 1 ross ross  53M Jun 10 13:45 core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz

Here we have ~420MB of disk image, which compressed down to a slight 53MB, and just ~5KB of metadata. This image compressed very well as the raw image is largely empty space, but for the purposes of this example we can ignore that.

The zsync client downloads over HTTP but has some non-trivial requirements of the server, so you can't just use any HTTP server; specifically, my go-to dumb server (Python's integrated http.server) isn't sufficient. If you want a hassle-free server then the Node.js package http-server works nicely, or any other proper server will work. However you choose to do it, share both the .zsync and .wic.gz files.

$ npm install -g http-server
$ http-server -p 8080 /path/to/images

Now you can use the zsync client to download the images. Sadly zsync isn't actually magical, so the first download will still need to download the full file:

$ zsync http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.zsync
No relevent local data found - I will be downloading the whole file.
downloading from http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz:
#################### 100.0% 7359.7 kBps DONE

verifying download...checksum matches OK
used 0 local, fetched 55208393

However, subsequent downloads will be a lot faster as only the differences will be fetched. Say I decide that core-image-minimal is too, well, minimal, and build core-image-sato, which is a full X.org stack instead of just busybox. After building the image and metadata we now have a ~730MB image:

-rw-r--r-- 1 ross ross 729M Jun 10 14:17 core-image-sato-fvp-base-20210610125939.rootfs.wic
-rw-r--r-- 1 ross ross 118M Jun 10 14:18 core-image-sato-fvp-base-20210610125939.rootfs.wic.gz
-rw-r--r-- 1 ross ross 2.2M Jun 10 14:19 core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync

Normally we'd have to download the full 730MB, but with zsync we can just fetch the differences. By telling the client to use the existing core-image-minimal as a seed file, we can fetch the new core-image-sato:

$ zsync -i core-image-minimal-fvp-base-20210610124230.rootfs.wic  http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync
reading seed file core-image-minimal-fvp-base-20210610124230.rootfs.wic
core-image-minimal-fvp-base-20210610124230.rootfs.wic. Target 70.5% complete.
downloading from http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.gz:
#################### 100.0% 10071.8 kBps DONE     

verifying download...checksum matches OK
used 538800128 local, fetched 70972961

By using the seed file, zsync determined that it already has 70% of the file on disk, and downloaded just the remaining chunks.

For incremental builds the differences can be very small when using the Yocto Project, as thanks to the reproducible builds effort there are no spurious changes (such as embedded timestamps or non-deterministic compilation) on recompiles.

Now, obviously I don't recommend doing all of this by hand. For Yocto Project users, as of right now there is a patch queued for meta-openembedded adding a recipe for zsync-curl, and a patch queued for openembedded-core to add zsync and gzsync image conversion types (for IMAGE_FSTYPES, for example wic.gzsync) to generate the metadata automatically. Bring your own HTTP server and you can fetch without further effort.

June 08, 2021

An overhaul of Meson's WrapDB dependency management/package manager service

For several years already Meson has had a web service called WrapDB for obtaining and building dependencies automatically. The basic idea is that it takes unaltered upstream tarballs, adds Meson build definitions (if needed) as a patch on top and builds the whole thing as a Meson subproject. While it has done its job and provided many packages, the UX for adding new versions has been a bit cumbersome.

Well no more! With a lot of work from people (mostly Xavier Claessens) all of WrapDB has been overhauled to be simpler. Instead of separate repos, all wraps are now stored in a single repo, making things easier.  Adding new packages or releases now looks like this:

  • Fork the repo
  • Add the necessary files
  • Submit a PR
  • Await results of automatic CI and (non-automatic :) reviewer comments
  • Fix issues until the PR is merged
The documentation for the new system is still being written, but submissions are already open. You do need the current trunk of Meson to use the v2 WrapDB. Version 1 will remain frozen for now so old projects will keep on building. All packages and releases from v1 WrapDB have been moved to v2, except some old ones that have been replaced by something better (e.g. libjpeg has been replaced by libjpeg-turbo) so starting to use the new version should be transparent for most people.
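
As a quick illustration of the consumer side (the dependency name here is just an example), pulling a dependency from the v2 WrapDB into an existing Meson project looks roughly like this:

# fetch the wrap file from WrapDB into subprojects/
meson wrap install zlib
# a dependency('zlib') lookup in meson.build can then fall back to
# building the subproject whenever the system does not provide zlib
meson setup builddir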

Submitting new dependencies

Anyone can submit any dependency project that they need (assuming they are open source, of course). All you need to do is to convert the project's build definition to Meson and then submit a pull request as described above. You don't need permission from upstream to submit the project. The build files are MIT licensed so projects that want to provide native build definitions should be able to integrate WrapDB's build definitions painlessly.

Submitting your own libraries

Have you written a library that already builds with Meson, and would you like to make it available to all Meson users with a single command?

meson wrap install yourproject

The procedure is even simpler than the one above: you just need to file a pull request with the upstream info. It only takes a few minutes.

June 07, 2021

The Beginning

Hello, I’m Kai. I’m a computer science student at the KIT in Germany. This year I am participating in my second Google Summer of Code at the GNOME foundation to work on Fractal. My mentor is Julian Sparber, who works towards end-to-end encryption in Fractal and already gave me a warm welcome. I created this blog to keep everyone interested updated on my progress over the course of the summer.

Fractal Logo
Fractal Logo

For those who don’t already know, Fractal is a messaging client for the GNOME desktop powered by the Matrix protocol. In the last year the ecosystem on which Fractal is built changed dramatically (GTK 4, matrix-rust-sdk) and the current code base shows its weaknesses. With those considerations, Fractal’s developers came to the conclusion that a rewrite from scratch is the best way forward. This undertaking is called Fractal NEXT.

My goal for the Summer of Code 2021 is to bring Fractal NEXT to feature parity with the current Fractal code base. There has already been a lot of work on the architecture and groundwork for the new Fractal over the last few months, but the current implementation is still bare-bones.

The main features remaining to be implemented are room management and account management, as well as support for more message types. I will start by implementing the elements required to work with rooms and their members. This will mainly manifest in a room settings panel that allows editing the room avatar, title and description, inviting and kicking members, and managing their power levels. Account management will resemble the account page of the current Fractal, with options to edit the user’s avatar, name, third-party identifiers and the device list, as well as the possibility to deactivate one’s Matrix account. Another small but important thing will be the addition of shortcuts for all actions, so Fractal is more accessible and power users will feel at home.

I am very glad to be able to work on Fractal for this Summer of Code and hope it will be a lot of fun.

Also check out the blog of Alejandro, who is also contributing to Fractal this summer by implementing multi-account support.

Another year in GSoC and Fractal(-next)

This year I applied for Google Summer of Code again and chose Fractal to work on multi-account support. I got accepted (that’s why I’m writing this), so today I start the coding (and design) period to achieve that.

Any of you who followed my internship in 2020 and what came afterwards might remember that I had the same goal back then. The problem was that the way the app was structured internally made it incredibly difficult to do so without all hell breaking loose. You can revisit (or read) the details in my final report last year and what I learned in the process.

After that, I kept integrating matrix-rust-sdk into Fractal and completely removing the old backend. That was completed in January this year. After fixing all the shortcomings in the code that dealt with the server (and introducing new bugs, related both to Synapse not fully conforming to the official specification and to a few bugs in matrix-rust-sdk), I tried to go further and enable the encryption machinery in the backend, but the undertaking of doing so while keeping the UI workable at all was much greater than we had expected. My #1 concern when I started on this project was maintainability, so I began to think that the only sane way out was probably to rewrite Fractal, something I had suspected since August, when I was still in the middle of GSoC 2020. The fact that GTK 4.0 had just been released, that its binding crates had better support for XML templates and subclassing, and that we already had proof that matrix-rust-sdk could work for our needs made it a very compelling alternative, “just” needing the UI to be rewritten from scratch.

This happened more or less when Julian Sparber started working on encryption support, so whatever the choice was, it had to allow him to focus on his task as soon as possible. It was deemed that sticking with incremental refactors would be much slower for everyone, so Fractal-next got started with proper support from third-party libraries. Julian started working on it, getting a basic chat feature set going in a few weeks, which allowed him to get part of his goal done in parallel. He posted about what he did and an overview of the architecture of Fractal-next on his blog. I looked around occasionally just to give some advice at the beginning, mostly concerned with code organization and modules.

One big milestone on the way to making Fractal-next simply become Fractal is reaching feature parity. That’s something that Kai will work on as part of his GSoC internship, as he explains in his blog. In the meantime, I will add support for logging in with multiple accounts. I think we won’t clash with each other, so we can do our own thing independently.

Hopefully, by autumn we will have a really nice release that brings new features the community has been looking forward to for a long time and makes the project much more future-proof.

HTTP/2 in libsoup3, WebKitGTK, and Epiphany

The latest development release of libsoup 3, 2.99.8, now enables HTTP/2 by default. So let’s look at what that means and how you can try it out.

Performance

In simple terms, HTTP/2 improves performance through more efficient network usage when requesting multiple files from a single host. It does this by avoiding new connections whenever possible and by allowing multiple requests to happen at the same time over that single connection.

It is easy to imagine many workloads this would improve, such as flatpak downloading a lot of objects from a single server.

Here are some examples in Epiphany:

gophertiles

This is a benchmark made to directly test the best case for HTTP/2. In the inspector (which has been improved to show more network information) you can see that HTTP/2 creates a single connection and completes in 229ms, while HTTP/1 creates 6 connections and takes 1.5 seconds. This all happens on a network that is a best case for HTTP/1: a low-latency, wired gigabit connection. As network latency increases, HTTP/2’s lead grows dramatically.

browser screenshot using http2

browser screenshot using http1

Youtube

For a more real-world example, YouTube is a great demo. It loads a lot of files for a web page, but it isn’t a perfect benchmark, as it still involves multiple hosts that don’t share connections. HTTP/2 still has a slight lead, again versus HTTP/1’s best case.

inspector screenshot using http2

inspector screenshot using http1

Testing

This work is all new and we would really like some testing and feedback. The easiest way to run this yourself is with this custom Epiphany Flatpak (sorry for the slow download server, and it will not work with NVidia drivers).

You can get useful debug output both through the WebKit inspector (ctrl+shift+i) and by running with --env='G_MESSAGES_DEBUG=libsoup-http2;nghttp2'.
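If you are using libsoup directly, you can also check programmatically which HTTP version a request ended up using. The following is a minimal sketch against the libsoup 3 API, assuming the SoupHTTPVersion enum exposes SOUP_HTTP_2_0 as in the 2.99.x development series; the URL is just a placeholder.

```c
#include <libsoup/soup.h>

int
main (void)
{
  SoupSession *session = soup_session_new ();
  SoupMessage *msg = soup_message_new ("GET", "https://nghttp2.org/");
  GError *error = NULL;
  GBytes *body = soup_session_send_and_read (session, msg, NULL, &error);

  if (error != NULL) {
    g_printerr ("Request failed: %s\n", error->message);
    g_error_free (error);
    return 1;
  }

  /* Report the protocol version that was actually negotiated. */
  if (soup_message_get_http_version (msg) == SOUP_HTTP_2_0)
    g_print ("Negotiated HTTP/2\n");
  else
    g_print ("Fell back to HTTP/1.x\n");

  g_bytes_unref (body);
  g_object_unref (msg);
  g_object_unref (session);
  return 0;
}
```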

Please report any bugs you find to https://gitlab.gnome.org/gnome/libsoup/issues.

June 06, 2021

Troubled Minister

DJ Strobe

Another weekend arrangement made mostly in Polyend Tracker. Video footage assembled from my FPV flights around Liberec over the course of the past few weeks.

Listen right here:

Watch a video:

Previously, Previously, Previously, Previously.

Getting ready for GSoC 2021

Hi, I’m Manuel Genovés, a Spanish physicist, somewhat of an artist, and a programmer by accident. Some of you may know me for Apostrophe, a nice little app I’ve maintained for quite some time now. This year I decided to further step up my involvement with the GNOME project, so I signed up for the GSoC program.

I’ll be working on an animation framework for libadwaita, the new GNOME library. It’ll allow apps to add semantic movement to their components, and hopefully it’ll make the current code more maintainable.

Mentoring me is Alexander Mikhaylenko, one of the creators of said library. I hope to learn a lot from them and from the work I’ll be able to do on this project.

On the Sustainability of the GNOME Foundation

This blog post was originally a question and answer on GNOME’s Discourse discussing how candidates for the board would be able to help make the GNOME Foundation sustainable. Following a blog post by the GNOME Foundation’s president Robert McQueen about The Next Steps for the GNOME Foundation, GNOME designer and Foundation board member Allan Day opened a discussion for the board to issue recommendations to the GNOME Foundation members when voting for a candidate.

June 05, 2021


Turning over a new leaf with GNOME

Hello, GNOME! 👋

This will be the first post of many that document my journey of being a Google Summer of Code 2021 student at the GNOME Foundation.

I’m really excited about this opportunity and plan on learning a lot during this period.

To introduce myself to the community I’ll be answering the 5Ws and 1H.


Who am I? 🧑

I’m a sophomore undergrad student pursuing a Computer Science degree but more importantly,

I’m a person who loves getting stuck on a problem and then falling down the rabbit hole and diving deeper and deeper until I find a solution

(so yeah, a project’s issues page is where you’ll find me lurking 😂)


How did I get involved with GNOME? ❤️

I’ve been a GNOME Desktop user in the past, and I have always loved how clean, consistent and well thought out it is, along with all its applications.

Contributing to such a project has always been something on my mind and GSoC finally presented me with an opportunity to do so.


So far, I’ve made minor contributions to the settings daemon, the control centre, GLib and the GNOME shell.

None of which would have been possible without help from the amazing maintainers!


What project will I be working on? 💻

I will be working on implementing active resource management in GNOME.

It was one of the [existing project ideas](https://gitlab.gnome.org/Teams/Engagement/gsoc-2021/-/issues/21).

I’ll be working under the guidance of Benjamin Berg and Florian Müllner.


What we are trying to achieve is a fair distribution of resources among applications.

This has been made possible by using cgroups and building upon the previous work done in systemd and uresourced.
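To give a feel for the mechanism involved, here is a small, hypothetical sketch (not the project's actual code) of how per-application resource properties can be adjusted through the systemd user manager's D-Bus API, the layer that uresourced builds on. The unit name and the CPUWeight value are made up for illustration.

```c
#include <gio/gio.h>

int
main (void)
{
  GError *error = NULL;
  GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, &error);
  GVariantBuilder props;

  if (bus == NULL) {
    g_printerr ("Could not connect to the session bus: %s\n", error->message);
    return 1;
  }

  /* Lower the CPU weight of a (hypothetical) application scope unit. */
  g_variant_builder_init (&props, G_VARIANT_TYPE ("a(sv)"));
  g_variant_builder_add (&props, "(sv)", "CPUWeight", g_variant_new_uint64 (50));

  g_dbus_connection_call_sync (bus,
                               "org.freedesktop.systemd1",
                               "/org/freedesktop/systemd1",
                               "org.freedesktop.systemd1.Manager",
                               "SetUnitProperties",
                               g_variant_new ("(sba(sv))",
                                              "app-org.example.Demo.scope",
                                              TRUE, /* runtime-only change */
                                              &props),
                               NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);

  if (error != NULL)
    g_printerr ("SetUnitProperties failed: %s\n", error->message);

  g_object_unref (bus);
  return 0;
}
```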


Why am I working on this project? 🤔

It sounds cool, doesn’t it?


A more serious answer would be that I wanted to work on something on the backend side and contribute to GNOME using the basic skills I have.

I have just started exploring the idea of resource management and it fascinates me! The guidance from my mentors helps me stay on track and keep learning new stuff.


When and where will all of this take place? 📅

I have started working on things a bit early; for now, all of the work is taking place in the temporary repositories I have created on GNOME’s GitLab instance.

Later on, I’ll move this code to official repositories as a part of the settings daemon or uresourced.

After the basic structure is in place we’ll be experimenting with more ideas and potentially contributing to more projects!

June 04, 2021

GNOME LATAM 2021 was a real blast!

This year, motivated by the success of virtual events like GNOME Asia and GNOME Onboard Africa, we decided to organize a GNOME LATAM (virtual) conference. The event was a success, with a nice mix of Spanish and Portuguese-speaking presenters. The recordings are now available and (if you understand Spanish or Portuguese) I highly encourage you to check what the Latin American GNOMies are up to. 🙂

  • Juan Pablo Ugarte, from Argentina, whom most of you GNOME people know from his work on Glade, gave an interesting talk showing his new project, “Cambalache UI Maker”: a modern Glade replacement for GTK4. Juan hasn’t open sourced it yet, but you’ll see it when he pops up in Planet GNOME.
  • Claudio Wunder, from Germany, whom you may know from the GNOME engagement team, did a presentation about the engagement team’s work in GNOME and discussed the challenges of managing online communities with all their cultural differences. Claudio studied in Brazil and speaks Portuguese fluently.
  • Daniel Garcia Moreno, from Spain, whom you may know from Endless and Fractal, gave a talk sharing his experiences mentoring in GSoC and Outreachy. This was also a good opportunity to introduce the programs to the Latin American community, which is underrepresented in FOSS.
  • Me, from Brazil :D. I presented a “Developing native apps with GTK” talk, in which I build a simple web browser in Python, with GTK and WebKitGtk, while commenting on the app development practices we use in GNOME and presenting our tooling, such as DevHelp, GtkInspector, Icon Browser, GNOME Builder, Flatpak, etc…
  • Martín Abente Lahaye, from Paraguay, whom you may know from GNOME, Sugar Labs, Endless, and Flatseal, gave a presentation about GNOME on phones. He commented on the UX of GNOME applications and Phosh on phones, and highlighted areas where things can be improved.
  • Cesar Fabian Orccon Chipana, from Perú, a former GSoC intern for GNOME and GStreamer, did an extensive demo of GStreamer pipelines, explaining GStreamer concepts along the way. He had super cool live demos!
  • Rafael Fontenelle, from Brazil, has been a coordinator of the pt_BR translation team for many years and translates a huge portion of GNOME himself. He did a walk-through of the GNOME translation processes, sharing tips and tricks.
  • Daniel Galleguillos + Fernanda Morales, from Chile, from the GNOME Engagement team, presented the design work of the GNOME engagement team, showing the tools and patterns they use for event banners, swag, social media posts, and more. Daniel was also responsible for editing the event recordings. Thanks a lot, Daniel!
  • Fabio Duran Verdugo, a long-time GNOME member, and Matías Rojas-Tapia, from Chile, presented Handibox, an accessibility tool they are working on at their university to help users with motor impairments use desktop computers. Inspiring!
  • Georges Basile Stavracas Neto, from Brazil, whom you may know from Endless and GNOME Shell, presented a very nice summary of the GNOME design philosophy, the changes in GNOME Shell 40, and the plans for the future.
  • The event was opened and closed by Julita Inca Chiroque, from Peru, a long-time GNOME Foundation member. Thanks a lot, Julita!

I hope we can make this a tradition and have a GNOME LATAM edition yearly! Thanks a lot to all attendees!

Mike Lindell's Cyber "Evidence"

Mike Lindell, notable for absolutely nothing relevant in this field, today filed a lawsuit against a couple of voting machine manufacturers in response to them suing him for defamation after he claimed that they were covering up hacks that had altered the course of the US election. Paragraph 104 of his suit asserts that he has evidence of at least 20 documented hacks, including the number of votes that were changed. The citation is just a link to a video called Absolute 9-0, which claims to present sufficient evidence that the US supreme court will come to a 9-0 decision that the election was tampered with.

The claim is that Lindell was provided with a set of files on the 9th of January, and gave these to some cyber experts to verify. These experts identified them as packet captures. The video contains scrolling hex, and we are told that this is the raw encrypted data from the files. In reality, the hex values correspond very clearly to printable ASCII, and appear to just be the Pennsylvania voter roll. They're not encrypted, and they're not packet captures (they contain no packet headers).

20 of these packet captures were then selected and analysed, giving us the tables contained within Exhibit 12. The alleged source IPs appear to correspond to the networks the tables claim, and the latitude and longitude presumably just come from a geoip lookup of some sort (although clearly those values are far too precise to be accurate). But if we look at the target IPs, we find something interesting. Most of them resolve to the website for the county that was the nominal target (eg, 198.108.253.104 is www.deltacountymi.org). So, we're supposed to believe that in many cases, the county voting infrastructure was hosted on the county website.

Unfortunately we're not given the destination port, but 198.108.253.104 isn't listening on anything other than 80 and 443. We're told that the packet data is encrypted, so presumably it's over HTTPS. So, uh, how did they decrypt this to figure out how many votes were switched? If Mike's hackers have broken TLS, they really don't need to be dealing with this.

We're also given some background information on how it's impossible to reconstruct packet captures after the fact (untrue), or that modifying them would change their hashes (true, but in the absence of known good hash values that tells us nothing), but it's pretty clear that nothing we're shown actually demonstrates what we're told it does.

In summary: yes, any supreme court decision on this would be 9-0, just not the way he's hoping for.

Update: It was pointed out that this data appears to be part of a larger dataset. This one is even more dubious - it somehow has MAC addresses for both the source and destination (which is impossible), and almost none of these addresses are in actual issued ranges.


Bzip2's experimental repository is changing maintainership

Bzip2's stable repository is maintained at Sourceware by Mark Wielaard. In 2019 I started maintaining an experimental repository in GitLab, with the intention of updating the build system and starting a Rust port of bzip2. Unfortunately I have let this project slip.

The new maintainer of the experimental repository for Bzip2 is Micah Snyder. Thanks, Micah, for picking it up!

June 02, 2021

The first steps in GSoC

I am starting a new blog series to cover my GSoC’21 journey with the GNOME Foundation. It’s already been two weeks since I received the GSoC acceptance email. My project focuses on improving tracker support for custom ontologies. In this blog I’m going to talk about how I applied for GSoC and introduce the project on which I’ll be working this summer.

First, let me introduce myself. I’m Abanoub Ghadban, a fourth-year computer engineering student from Egypt. I started my journey in GSoC in December 2020, when one of my friends who participated in GSoC last year told me about the experience he gained while working on his project with GNOME. I got started with GNOME apps easily thanks to the GNOME newcomers guide. I started by looking at the basics of GLib and GObject, and I found the GLib/GTK book very useful. The concepts I learned from the book and the documentation became much clearer after looking at how they are used in GNOME apps. I started exploring the gnome-photos app, then searched for a “newcomers” issue and solved it in this merge request. The maintainer of gnome-photos was very helpful in sorting out the problems he found in my code. Also, I investigated some issues in nautilus, GLib and tracker. I decided to apply for a project related to tracker. The mentors were very helpful in guiding me to choose the project and write the proposal.

Currently, we are in the community bonding period. During this time, participants should start communicating with their mentors and plan ahead for how they should start when the time comes. The first thing I did after celebrating :D was getting in touch with my mentors. We talked about the resources that can help me get started with the project, how I can prepare my development environment and how we can communicate with each other.

The goal of my project is to improve tracker support for custom ontologies. That will be done by:

  • Fixing crashes that happen when tracker tries to parse an invalid ontology.
  • Adding support in the ontology parser for out-of-order definitions in the ontology file.
  • TrackerNamespaceManager should support custom ontologies more easily (details); a small usage sketch follows below.
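To show what TrackerNamespaceManager is about, here is a minimal sketch of registering and expanding a custom prefix; the “ex” prefix and its URI are made-up examples, and the calls assume the libtracker-sparql 3 API.

```c
#include <libtracker-sparql/tracker-sparql.h>

int
main (void)
{
  TrackerNamespaceManager *namespaces;
  char *expanded;

  namespaces = tracker_namespace_manager_new ();

  /* Register a (hypothetical) custom ontology prefix. */
  tracker_namespace_manager_add_prefix (namespaces, "ex",
                                        "http://example.org/ontology#");

  /* Expand a compact "prefix:name" URI to its full form. */
  expanded = tracker_namespace_manager_expand_compact_uri (namespaces, "ex:Photo");
  g_print ("%s\n", expanded); /* prints http://example.org/ontology#Photo */

  g_free (expanded);
  g_object_unref (namespaces);
  return 0;
}
```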

So, here are the things I’ve done so far:

  • Cloned the tracker repository and built it using both GNOME Builder and Meson.
  • Installed the dev dependencies and configured VS Code to open the tracker project in it. Honestly, I found it more useful than GNOME Builder :).
  • Looked at the architecture of tracker and tracker-miners and how the architecture changed from tracker2 to tracker3.
  • Read the tracker documentation about how to create new ontologies.
  • Debugged tracker using gdb, which I used to find out how the ontology files are parsed.
  • Read the tracker documentation about TrackerNamespaceManager and TrackerResource.

I guess this is a good start, but there is still much to do in the upcoming days. I hope everything works out during this internship. GSoC, here we GO!

Producing a trustworthy x86-based Linux appliance

Let's say you're building some form of appliance on top of general purpose x86 hardware. You want to be able to verify the software it's running hasn't been tampered with. What's the best approach with existing technology?

Let's split this into two separate problems. The first is to do as much as we can to ensure that the software can't be modified without our consent[1]. This requires that each component in the boot chain verify that the next component is legitimate. We call the first component in this chain the root of trust, and in the x86 world this is the system firmware[2]. This firmware is responsible for verifying the bootloader, and the easiest way to do this on x86 is to use UEFI Secure Boot. In this setup the firmware contains a set of trusted signing certificates and will only boot executables with a chain of trust to one of these certificates. Switching the system into setup mode from the firmware menu will allow you to remove the existing keys and install new ones.

(Note: You shouldn't use the trusted certificate directly for signing bootloaders - instead, the trusted certificate should be used to sign another certificate and the key for that certificate used to sign your bootloader. This way, if you ever need to revoke the signing certificate, you can simply sign a new one with the trusted parent and push out a revocation update instead of having to provision new keys)

But what do you want to sign? In the general purpose Linux world, we use an intermediate bootloader called Shim to bridge from the Microsoft signing authority to a distribution one. Shim then verifies the signature on grub, and grub in turn verifies the signature on the kernel. This is a large body of code that exists because of the use cases that general purpose distributions need to support - primarily, booting on arbitrary off the shelf hardware, and allowing arbitrary and complicated boot setups. This is unnecessary in the appliance case, where the hardware target can be well defined, where there's no need for interoperability with the Microsoft signing authority, and where the boot configuration can be extremely static.

We can skip all of this complexity using systemd-boot's unified Linux image support. This has the format described here, but the short version is that it's simply a kernel and initramfs linked into a small EFI executable that will run them. Instructions for generating such an image are here, and if you follow them you'll end up with a single static image that can be directly executed by the firmware. Signing this avoids dealing with a whole host of problems associated with relying on shim and grub, but note that you'll be embedding the initramfs as well. Again, this should be fine for appliance use-cases, but you'll need your build system to support building the initramfs at image creation time rather than relying on it being generated on the host.

At this point we have a single image that can be verified by the firmware and will get us to the point of a running kernel and initramfs. Unless you've got enough RAM that you can put your entire workload in the initramfs, you're going to want a filesystem as well, and you're going to want to verify that that filesystem hasn't been tampered with. The easiest approach to this is to use dm-verity, a device-mapper layer that uses a hash tree to verify that the filesystem contents haven't been modified. The kernel needs to know what the root hash is, so this can either be embedded into your initramfs image or into the kernel command line. Either way, it'll end up in the signed boot image, so nobody will be able to tamper with it.

It's important to note that a dm-verity partition is read-only - the kernel doesn't have the cryptographic secret that would be required to generate a new hash tree if the partition is modified. So if you require the ability to write data or logs anywhere, you'll need to add a new partition for that. If this partition is unencrypted, an attacker with access to the device will be able to put whatever they want on there. You should treat any data you read from there as untrusted, and ensure that it's validated before use (ie, don't just feed it to a random parser written in C and expect that everything's going to be ok). On the other hand, if it's encrypted, remember that you can't just put the encryption key in the boot image - an attacker with access to the device is going to be able to dump that and extract it. You'll probably want to use a TPM-sealed encryption secret, which will be discussed later on.

At this point everything in the boot process is cryptographically verified, and so should be difficult to tamper with. Unfortunately this isn't really sufficient - on x86 systems there's typically no verification of the integrity of the secure boot database. An attacker with physical access to the system could attach a programmer directly to the firmware flash and rewrite the secure boot database to include keys they control. They could then replace the boot image with one that they've signed, and the machine would happily boot code that the attacker controlled. We need to be able to demonstrate that the system booted using the correct secure boot keys, and the only way we can do that is to use the TPM.

I wrote an introduction to TPMs a while back. The important thing to know here is that the TPM contains a set of Platform Configuration Registers that are large enough to contain a cryptographic hash. During boot, each component of the boot process will generate a "measurement" of other security critical components, including the next component to be booted. These measurements are a representation of the data in question - they may simply be a hash of the object being measured, or the hash of a structure containing various pieces of metadata. Each measurement is passed to the TPM, along with the PCR it should be measured into. The TPM takes the new measurement, appends it to the existing value, and then stores the hash of this concatenated data in the PCR. This means that the final PCR value depends not only on the measurement, but also on every previous measurement. Without breaking the hash algorithm, there's no way to set the PCR to an arbitrary value. The hash values and some associated data are stored in a log that's kept in system RAM, which we'll come back to later.
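As a rough illustration of the extend operation just described, here is a minimal sketch (using GLib's checksum API purely for illustration) of how a SHA-256 PCR value is updated with a new measurement; replaying a TPM event log is essentially running this over every event and comparing the result with the PCR values the TPM reports.

```c
#include <glib.h>

/* new_pcr = SHA256(old_pcr || measurement) */
static void
pcr_extend_sha256 (guint8 pcr[32], const guint8 *measurement, gsize length)
{
  GChecksum *checksum = g_checksum_new (G_CHECKSUM_SHA256);
  gsize digest_len = 32;

  g_checksum_update (checksum, pcr, 32);
  g_checksum_update (checksum, measurement, length);
  g_checksum_get_digest (checksum, pcr, &digest_len);
  g_checksum_free (checksum);
}
```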

Different PCRs store different pieces of information, but the one that's most interesting to us is PCR 7. Its use is documented in the TCG PC Client Platform Firmware Profile (section 3.3.4.8), but the short version is that the firmware will measure the secure boot keys that are used to boot the system. If the secure boot keys are altered (such as by an attacker flashing new ones), the PCR 7 value will change.
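For the curious, reading the current PCR 7 value locally is straightforward with the tpm2-tss Enhanced System API. The following is a minimal sketch with error handling reduced to the essentials; it assumes the standard tss2-esys headers and the default TCTI configuration.

```c
#include <stdio.h>
#include <tss2/tss2_esys.h>

int
main (void)
{
  ESYS_CONTEXT *ctx = NULL;
  TPML_PCR_SELECTION selection = {
    .count = 1,
    .pcrSelections = {{
      .hash = TPM2_ALG_SHA256,
      .sizeofSelect = 3,
      .pcrSelect = { 1 << 7, 0, 0 },   /* select PCR 7 */
    }},
  };
  UINT32 update_counter;
  TPML_PCR_SELECTION *selection_out = NULL;
  TPML_DIGEST *values = NULL;

  /* NULL TCTI means the default TCTI configuration is used. */
  if (Esys_Initialize (&ctx, NULL, NULL) != TSS2_RC_SUCCESS)
    return 1;

  if (Esys_PCR_Read (ctx, ESYS_TR_NONE, ESYS_TR_NONE, ESYS_TR_NONE,
                     &selection, &update_counter,
                     &selection_out, &values) == TSS2_RC_SUCCESS
      && values->count > 0) {
    for (UINT16 i = 0; i < values->digests[0].size; i++)
      printf ("%02x", values->digests[0].buffer[i]);
    printf ("\n");
    Esys_Free (selection_out);
    Esys_Free (values);
  }

  Esys_Finalize (&ctx);
  return 0;
}
```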

What can we do with this? There's a couple of choices. For devices that are online, we can perform remote attestation, a process where the device can provide a signed copy of the PCR values to another system. If the system also provides a copy of the TPM event log, the individual events in the log can be replayed in the same way that the TPM would use to calculate the PCR values, and then compared to the actual PCR values. If they match, that implies that the log values are correct, and we can then analyse individual log entries to make assumptions about system state. If a device has been tampered with, the PCR 7 values and associated log entries won't match the expected values, and we can detect the tampering.

If a device is offline, or if there's a need to permit local verification of the device state, we still have options. First, we can perform remote attestation to a local device. I demonstrated doing this over Bluetooth at LCA back in 2020. Alternatively, we can take advantage of other TPM features. TPMs can be configured to store secrets or keys in a way that renders them inaccessible unless a chosen set of PCRs have specific values. This is used in tpm2-totp, which uses a secret stored in the TPM to generate a TOTP value. If the same secret is enrolled in any standard TOTP app, the value generated by the machine can be compared to the value in the app. If they match, the PCR values the secret was sealed to are unmodified. If they don't, or if no numbers are generated at all, that demonstrates that PCR 7 is no longer the same value, and that the system has been tampered with.

Unfortunately, TOTP requires that both sides have possession of the same secret. This is fine when a user is making that association themselves, but works less well if you need some way to ship the secret on a machine and then separately ship the secret to a user. If the user can simply download the secret via some API, so can an attacker. If an attacker has the secret, they can modify the secure boot database and re-seal the secret to the new PCR 7 value. That means having to add some form of authentication, along with a strong binding of machine serial number to a user (in order to avoid someone with valid credentials simply downloading all the secrets).

Instead, we probably want some mechanism that uses asymmetric cryptography. A keypair can be generated on the TPM, which will refuse to release an unencrypted copy of the private key. The public key, however, can be exported and stored. If it's acceptable for a verification app to connect to the internet then the public key can simply be obtained that way - if not, a certificate can be issued to the key, and this exposed to the verifier via a QR code. The app then verifies that the certificate is signed by the vendor, and if so extracts the public key from that. The private key can have an associated policy that only permits its use when PCR 7 has an appropriate value, so the app then generates a nonce and asks the user to type that into the device. The device generates a signature over that nonce and displays that as a QR code. The app verifies the signature matches, and can then assert that PCR 7 has the expected value.

Once we can assert that PCR 7 has the expected value, we can assert that the system booted something signed by us and thus infer that the rest of the boot chain is also secure. But this is still dependent on the TPM obtaining trustworthy information, and unfortunately the bus that the TPM sits on isn't really terribly secure (TPM Genie is an example of an interposer for i2c-connected TPMs, but there's no reason an LPC one can't be constructed to attack the sort usually used on PCs). TPMs do support encrypted communication channels, but bootstrapping those isn't straightforward without firmware support. The easiest way around this is to make use of a firmware-based TPM, where the TPM is implemented in software running on an ancillary controller. Intel's solution is part of their Platform Trust Technology and runs on the Management Engine; AMD runs it on the Platform Security Processor. In both cases it's not terribly feasible to intercept the communications, so we avoid this attack. The downside is that we're then placing more trust in components that are running much more code than a TPM would and which have a correspondingly larger attack surface. Which is preferable is going to depend on your threat model.

Most of this should be achievable using Yocto, which now has support for dm-verity built in. It's almost certainly going to be easier using this than trying to base on top of a general purpose distribution. I'd love to see this become a largely push-button "receive secure image" process, so I might take a go at that if I have some free time in the near future.

[1] Obviously technologies that can be used to ensure nobody other than me is able to modify the software on devices I own can also be used to ensure that nobody other than the manufacturer is able to modify the software on devices that they sell to third parties. There's no real technological solution to this problem, but we shouldn't allow the fact that a technology can be used in ways that are hostile to user freedom to cause us to reject that technology outright.
[2] This is slightly complicated due to the interactions with the Management Engine (on Intel) or the Platform Security Processor (on AMD). Here's a good writeup on the Intel side of things.


June 01, 2021

Next steps for the GNOME Foundation

As the President of the GNOME Foundation Board of Directors, I’m really pleased to see the number and breadth of candidates we have for this year’s election. Thank you to everyone who has submitted their candidacy and volunteered their time to support the Foundation. Allan has recently blogged about how the board has been evolving, and I wanted to follow that post by talking about where the GNOME Foundation is in terms of its strategy. This may be helpful as people consider which candidates might bring the best skills to shape the Foundation’s next steps.

Around three years ago, the Foundation received a number of generous donations, and Rosanna (Director of Operations) gave a presentation at GUADEC about her and Neil’s (Executive Director, essentially the CEO of the Foundation) plans to use these funds to transform the Foundation. We would grow our activities, increasing the pace of events, outreach, development and infrastructure that supported the GNOME project and the wider desktop ecosystem – and, crucially, would grow our funding to match this increased level of activity.

I think it’s fair to say that half of this has been a great success – we’ve got a larger staff team than GNOME has ever had before. We’ve widened the GNOME software ecosystem to include related apps and projects under the GNOME Circle banner, we’ve helped get GTK 4 out of the door, run a wider-reaching program in the Community Engagement Challenge, and consistently supported better infrastructure for both GNOME and the Linux app community in Flathub.

Aside from another grant from Endless (note: my employer), our fundraising hasn’t caught up with this pace of activities. As a result, the Board recently approved a budget for this financial year which will spend more funds from our reserves than we expect to raise in income. Due to our reserves policy, this is essentially the last time we can do this: over the next 6-12 months we need to either raise more money, or start spending less.

For clarity – the Foundation is fit and well from a financial perspective – we have a very healthy bank balance, and a very conservative “12 month run rate” reserve policy to handle fluctuations in income. If we do have to slow down some of our activities, we will return to a “steady state” where our regular individual donations and corporate contributions can support a smaller staff team that supports the events and infrastructure we’ve come to rely on.

However, this isn’t what the Board wants to do – the previous and current boards were unanimous in their support of the idea that we should be ambitious: try to do more in the world and bring the benefits of GNOME to more people. We want to take our message of trusted, affordable and accessible computing to the wider world.

Typically, a lot of the activities of the Foundation have been very inwards-facing – supporting and engaging with either the existing GNOME or Open Source communities. This is a very restricted audience in terms of fundraising – many corporate actors in our community already support GNOME hugely in terms of both financial and in-kind contributions, and many OSS users are already supporters either through volunteer contributions or donating to those nonprofits that they feel are most relevant and important to them.

To raise funds from new sources, the Foundation needs to take the message and ideals of GNOME and Open Source software to new, wider audiences that we can help. We’ve been developing themes such as affordability, privacy/trust and education as promising areas for new programs that broaden our impact. The goal is to find projects and funding that allow us to both invest in the GNOME community and find new ways for FOSS to benefit people who aren’t already in our community.

Bringing it back to the election, I’d like to make clear that I see this – reaching the outside world, and finding funding to support that – as the main priority and responsibility of the Board for the next term. GNOME Foundation elections are a slightly unusual process that “filters” our board nominees by requiring them to be existing Foundation members, which means that candidates already work inside our community when they stand for election. If you’re a candidate and are already active in the community – THANK YOU – you’re doing great work, keep doing it! That said, you don’t need to be a Director to achieve things within our community or gain the support of the Foundation: being a community leader is already a fantastic and important role.

The Foundation really needs support from the Board to make a success of the next 12-18 months. We need to understand our financial situation and the trade-offs we have to make, and help to define the strategy with the Executive Director so that we can launch some new programs that will broaden our impact – and funding – for the future. As people cast their votes, I’d like people to think about what kind of skills – building partnerships, commercial background, familiarity with finances, experience in nonprofit / impact spaces, etc – will help the Board make the Foundation as successful as it can be during the next term.

May 31, 2021

Beginning my GSoC Journey

This was my reaction after finding out that I was selected for GSoC :)

source: Giphy

I am starting a new blog series to cover my GSoC’21 journey with the GNOME Foundation. This is going to be an introductory blog where I will talk about the project I’ll be working on this summer. Before we get started, let me introduce myself to the folks reading from Planet GNOME. I am Nishit Patel, an undergraduate Computer Engineering student from India.

I began my pre-GSoC journey back in November 2020 when I opened my first MR in the tracker project. It was a small bug fix in the README.md file which I came across while setting up my local environment. Later, I began keeping an eye on the #tracker IRC channel and used to ask the maintainers for help whenever I was stuck on something. The maintainers were very helpful and polite with their prompt replies, even if I was asking some stupid question that was already addressed somewhere in the documentation. One thing I noticed is that it is better to first google and check the docs before asking a question, as it saves the maintainers’ precious time and you also get to learn something new in the process.

Project details

I will be working on the tracker-miners project, which is an indexer and is also used for extracting metadata from different file formats. Tracker currently doesn’t store the creation time in the database, as it was historically not tracked on UNIX file systems and it’s not part of the POSIX specification. However, since kernel version 4.11 the new statx system call provides the file creation timestamp. Based on this, the project aims to add support for storing the file creation time in the tracker database and, later, to provide the ability to search by creation time in Nautilus.
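For reference, here is a minimal sketch of how a program can ask for the creation (birth) time via statx; it needs a Linux 4.11+ kernel, glibc 2.28+ for the wrapper, and a file system that actually records birth times.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int
main (int argc, char **argv)
{
  struct statx stx;

  if (argc < 2) {
    fprintf (stderr, "usage: %s FILE\n", argv[0]);
    return 1;
  }

  /* Ask only for the birth time (creation time). */
  if (statx (AT_FDCWD, argv[1], 0, STATX_BTIME, &stx) != 0) {
    perror ("statx");
    return 1;
  }

  if (stx.stx_mask & STATX_BTIME)
    printf ("creation time (seconds since epoch): %lld\n",
            (long long) stx.stx_btime.tv_sec);
  else
    printf ("this file system does not report a creation time\n");

  return 0;
}
```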

I will be writing short posts as and when I get time in between so

source: Giphy

May 29, 2021

Record Live Audio immediately with GNOME Gingerblue 0.4.1

GNOME Gingerblue 0.4.1 is available and builds/runs on GNOME 40 systems such as Fedora Core 34.

It supports immediate, live audio recording from the microphone/input line on a computer, or from remote audio cards connected through USB, into compressed Ogg-encoded audio files stored in the private $HOME/Music/ directory, through PipeWire (www.pipewire.org) with GStreamer (gstreamer.freedesktop.org) on Fedora Core 34 (getfedora.org).

See the GNOME Gingerblue project (www.gingerblue.org) for screenshots, Fedora Core 34 x86_64 RPM package and GNU autoconf installation package (https://download.gnome.org/sources/gingerblue/0.4/gingerblue-0.4.1.tar.xz) for GNOME 40 systems and https://gitlab.gnome.org/ole/gingerblue.git for the GPLv3 source code in my GNOME Git repository.

Gingerblue music recording session screen. Click “Next” to begin session.

The default name of the musician is extracted from g_get_real_name(). You can edit the name of the musician and then click “Next” to continue (or “Back” to start all over again, or “Finish” to skip the details).

Type the name of the song. Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip any of the details).

Type the name of the musical instrument. The default instrument is “Guitar”. Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip any of the details).

Type the name of the audio line input. The default audio line input is “Mic” (gst_pipeline_new("record_pipe") in GStreamer). Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip the details).

Enter the recording label. The default recording label is “GNOME” (Free label). Click “Next” to continue (or “Back” to start all over again, or “Finish” to skip the details).

Notice the immediate, live recording file. The default immediate, live recording file name falls back to the result of g_strconcat(g_get_user_special_dir(G_USER_DIRECTORY_MUSIC), "/", g_get_real_name(), " - ", gtk_entry_get_text(GTK_ENTRY(song_entry)), ".ogg", NULL) in gingerblue/src/gingerblue-main.c
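For readers curious about what such a recording pipeline looks like, here is a minimal GStreamer sketch (not Gingerblue's actual code) that records from the default audio source into an Ogg/Vorbis file; the output filename is a placeholder.

```c
#include <gst/gst.h>
#include <stdio.h>

int
main (int argc, char **argv)
{
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* Default audio source -> Vorbis encoder -> Ogg muxer -> file. */
  pipeline = gst_parse_launch ("autoaudiosrc ! audioconvert ! vorbisenc ! "
                               "oggmux ! filesink location=recording.ogg",
                               &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_print ("Recording… press Enter to stop.\n");
  getchar ();

  /* Send EOS so the Ogg muxer can finalize the file before shutdown. */
  gst_element_send_event (pipeline, gst_event_new_eos ());
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
                                    GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```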

Click on “Cancel” once in GNOME Gingerblue to stop immediate recording and click on “Cancel” once again to exit the application (or Ctrl-c in the terminal).

You’ll find the recorded audio files in g_get_user_special_dir(G_USER_DIRECTORY_MUSIC) (usually $HOME/Music/) on GNOME 40 systems configured in the American English language.

Redesigning Health's MainView

 

A Brief about my GSoC project


We will redesign the Health app and add a few more features. The motivation for this redesign is to display important information on the main view while keeping the other data easily accessible to the user.

Data can be step-count, calories burnt, weight measurements etc.


This project is written in Rust and it uses gtk-rs.

This was one of the existing project ideas.

The following sections describe the subparts of the project.


Creating a new home page for the application

We will display the current day’s data along with buttons that take us to the other tabs.

Another change we will make is to the way data is added. It will now be a popup window with tabs that let the user choose the type of data to add (in our case the type will either be "Activity" or "Weight") and enter the required data. We will most likely showcase this feature for GUADEC.


Activity view and Weight view

We should display the activities and weights and let the user control the duration of history, i.e. only view activities this week/month etc.

Step-view graph

This view currently shows a line graph of steps-taken vs days. We will implement a feature that allows the user to filter activities so that they can analyze how each activity/group of activities contributes to their step count.


Calories burned view

We are introducing a new tab: calorie view. We plan to use a split bar graph to show the calories burnt per day. The split bar graph idea will let the user see the breakdown of calories burnt by activity. 


Notifications

This can be thought of as a sub-project. Here we send the user two types of notifications: one reminds the user to drink water (based on their weight, workout duration, etc.), and the other reminds the user to complete the step goal for that day.

People involved

I will be guided by my mentor @Cogitri. We will also take inputs from the design team, since our project has a lot of UI design tasks.


May 28, 2021

Friends of GNOME Update – May 2021

Welcome to the May 2021 Friends of GNOME Update

Cherry blossoms with a grey sky in the background
“Cherry Blossom” by shioshvili is licensed under CC BY-SA 2.0

LAS

The Linux App Summit took place May 13 – 15. Taking advantage of its virtual nature, the event had a long break in the middle of the proceedings in order to better accommodate attendees across time zones. Congratulations and thanks to the whole LAS team!

GUADEC

The call for GUADEC birds of a feather sessions, lightning talks, and workshops is now open. These will take place July 23 – 24, after the talks.

Birds of a Feather (BoF) sessions are up to two hours. These provide a time for people with shared interests to get together to talk about them. These can be working sessions and/or discussion sessions.

Lightning talks are ten minute talks. If you’re an inexperienced speaker or nervous on a stage, lightning talks are a great opportunity to try out speaking in a more relaxed setting. If you have an idea you want to try out, a narrower topic to explore, or you want to start a conversation, consider giving a lightning talk!

A workshop is a hands on session where people will be learning and working together.

You can submit an idea today!

Community Engagement Challenge Feedback

Did you follow, participate in, or otherwise engage with the CEC? Please give your feedback and fill out this survey!

Seeking University Outreach Ambassadors

The University Outreach program serves two purposes: helping universities adopt GNOME technologies and helping students get involved with GNOME. The GNOME Africa community is currently recruiting for university outreach across Africa. Fill out this form to learn more!

Interns!

We can never adequately express how excited we are about GNOME interns. This year we have two Outreachy interns and 12 Google Summer of Code interns.

Our Outreachy interns are Veena Nagar and Madds H. GSoC interns include Abanoub Ghadban, Maximiliano Sandoval, Manuel Genovés, Kai A. Hiller, Nishal Kulkarni, Alejandro Domínguez, Nishit Patel, zbrown, Ivan Molodetskikh, visvesh subramanian, Arijit Kundu, and Dhanuka Warusadura.

GNOME Gifts

We’ve had a few changes at the GNOME Shop, including a new water bottle.

Misc Updates We Like

Thank you!

Thank you for your support! Whether you’re a Friend of GNOME, a contributor, a user, or casually interested in our work, we appreciate your time and interest in building a great GNOME community!

May 27, 2021

Hello there!

My name is Veena Nagar. I’m a Computer Science and Engineering graduate from the National Institute of Technology Karnataka, India.

Currently, I am working as an Outreachy intern for the May ’21 cohort with the GNOME community, under the mentorship of Philip Chimento, on the project “Make GNOME asynchronous!”. The next three months of the internship are going to be a great learning experience, and I’m really looking forward to it! My mentor at GNOME has been very welcoming, and I’m so glad to be selected as an intern here for the summer cohort.

What motivated me to apply to Outreachy

You should fill out an initial application, regardless of your experience level.

Outreachy is a program that provides internship opportunities to work in Free and Open Source Software (FOSS). Outreachy internships are open to applicants around the world. Internships focus on programming, design, documentation, marketing, or other kinds of contributions. Interns work remotely and are not required to relocate. Interns are paid a comfortable stipend. Outreachy is open to women (both cis and trans) and to people of other gender identities that are minorities in open source (trans men and genderqueer people). The internship is offered twice a year, and you do not have to be a student to apply for it. However, you must be available full-time, 40 hours a week, during the internship period.

In the month of February, a good friend of mine from college mentioned the Outreachy internship opportunity to me. I checked out the Outreachy internship web page, read a couple of past interns’ posts, and then applied. But due to academic work I couldn’t contribute to any organisation before the contribution period.

So, when I got shortlisted for the second round, I was very excited and immediately started searching for an organisation to contribute to. I was completely new to open source and struggling to choose a project and organisation, so, preferring not to waste time, I decided to ask a friend who had contributed to open source before. She told me about GNOME, which was offering a project related to Git, Python and JavaScript. Luckily, I had sufficient knowledge of the required tech stack for that project. So, I introduced myself in the forum and started contributing with “Newcomers” issues, then jumped into the actual (coding-related) issues while continuing my research on the project I chose to work on. I kept myself busy with open source contributions until the day the final applicants were announced. I have listed all my contributions in the next blog.

May 25, 2021

Reminder: SoupSessionSync and SoupSessionAsync default to no TLS certificate verification

This is a public service announcement! The modern SoupSession class is secure by default, but the older, deprecated SoupSessionSync and SoupSessionAsync subclasses of SoupSession are not. If your code uses SoupSessionSync or SoupSessionAsync and does not set SoupSession:tls-database, SoupSession:ssl-use-system-ca-file, or SoupSession:ssl-ca-file, then you get no TLS certificate verification. This is almost always worth requesting a CVE.
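If you cannot port to the modern SoupSession right away, you can at least opt in to verification on the deprecated classes. A minimal sketch, assuming the libsoup 2 construct-time properties:

```c
#include <libsoup/soup.h>

/* Legacy code path: keep using the deprecated SoupSessionSync for now,
 * but make it verify TLS certificates against the system CA store. */
static SoupSession *
create_legacy_session (void)
{
  return soup_session_sync_new_with_options (
      SOUP_SESSION_SSL_USE_SYSTEM_CA_FILE, TRUE, /* use the system CA file */
      SOUP_SESSION_SSL_STRICT, TRUE,             /* fail on invalid certificates */
      NULL);
}
```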

SoupSessionSync and SoupSessionAsync have both been deprecated since libsoup 2.42 was released back in 2013, so surely they are no longer commonly used, right? Right? Of course not, so please check your applications and make sure they’re actually doing TLS certificate verification. Thanks!


Introduction to Shell Scripting

VIT - Linux Users' Group - Session 1 and 2 [An Introduction to Shell Scripting]

Recently I made two presentations on getting started with shell scripting and its applications, answering questions like: What is shell scripting? How will it be useful? How does one get started with shell scripting?


All the code can be found in this Github Repository -

May 24, 2021

GNOME Foundation Board Elections 2021

The election process for the GNOME Foundation Board of Directors is currently underway. Three positions are up for election this year, and Foundation members are able to nominate themselves until May 31st.

I’ve been sitting on the board since 2015, and have been acting as chair for the past two years. In this post, I’m going to talk a bit about how the board has evolved in the past year, and what sitting on the board involves.

Board Evolution

As I’ve talked about previously, the GNOME Foundation Board has been evolving over recent years. When I first joined the board in 2015 we had a staff of one and a half, and the board was busy keeping the organisation running. We approved conference proposals, helped to organise events, dealt with legal issues when they arose, and attempted to manage our staff.

Nowadays we thankfully don’t have to do these things, because we have an Executive Director and their staff to do them for us. This has allowed the board to increasingly move into the role that a board is supposed to have: that is, governance and oversight.

This is of vital importance since, to have a successful Foundation, we need a group which is responsible for taking a hard look at its strategy, plans and organisational health. This isn’t something that a board focused on day-to-day operations is able to do.

Recent Changes

Over the past year, the evolution of the Foundation Board has continued, and the board has undergone a number of organisational changes.

First, we switched from having weekly one-hour meetings to monthly two-hour meetings. This has given the board the capacity to have more in-depth discussions. It has also allowed us to structure our year around key board functions. Each meeting now has a particular focus, and there’s a schedule for the year, with quarterly sessions on strategy and finance.

The second major change is that we have created three new board committees, which are primarily made up of directors. These are:

Executive Committee: this monitors operations and meets more frequently than the board. It has executive power and can therefore act on behalf of the board at short notice, should it be required. This committee is traditionally staffed by the Executive Director (or CEO) and board president. It is currently made up of Neil (Executive Director), Rob (President) and myself (Chair).

Governance Committee: this committee exists to monitor and develop the board itself. It does things like checking that the board is living up to its legal and regulatory commitments, running assessment exercises, and developing the board through training and recruitment. The governance committee is currently made up of Kat, Philip, Felipe and myself.

Finance Committee: as you’d expect, the finance committee is responsible for monitoring the Foundation’s finances, proposing budgets and financial policies, and preparing finance reports. The finance committee is currently staffed by Shaun (Treasurer), Rob (President), Neil (Executive Director), and Rosanna (Director of Operations).

These committees are fairly standard for non-profit boards, and allow the board to delegate specific functions to sub-groups, which then report back to the board as a whole. They make sure that important tasks get done, and they also free up full board meetings to have higher-level conversations.

Being a Director

Due to the board’s ongoing development, the role of directors has changed a bit recently.

In many respects, sitting on the board is less involved than it was in the past. It’s no longer the case that you have a weekly meeting, and might have a long list of “operational” tasks to carry out. Instead, it’s a monthly two-hour meeting, with the potential for another one-hour committee meeting every month. You might need to pick up tasks, but they are fewer in number than previously.

On the flip side, being on the Foundation Board nowadays involves living up to the standard expectations for any director. As a director, you have personal legal obligations to ensure that the Foundation is being run effectively and in accordance with the relevant laws and regulations. To do this, you need to do things like review our paperwork and ensure that we have the correct policies and procedures in place.

Recruits Needed

Sitting on the Board of Directors is an important role, and is a really valuable way to contribute to both open source and the GNOME project. The time requirements are relatively modest, and it’s a fantastic way to gain new skills and experience. Personally, I’ve found being on the board to be hugely rewarding and educational, and I’ve really enjoyed my time as a director!

A good director is someone who can reliably attend meetings and take on tasks, has an eye for detail, and is able to effectively articulate themselves.

Nowadays we are also looking to our directors to bring additional experience and knowledge. So, if you have management experience, or non-profit experience, then your nomination would be especially welcome.

If this sounds like something that you can help with, we’d love to see your nomination for the coming election! You should also feel free to reach out to any of the current directors: we’d all be very happy to talk more about what being a director involves.

May 23, 2021

What am I?

Polyend Tracker

Having a hardware tracker may seem like a slightly odd choice, but a small Polish company, Polyend, has really won me over.

Beautiful things work better, and the machine is up there with Elektron boxes. There are more compact machines on the horizon, but over the weekend I was able to sit down outside and lay down the basics of a track. I later finalized it in a DAW, based on the track stems you can export. Being able to arrange electronic music without sitting behind a computer is a great luxury.

So without further ado, here’s my weekend track:

Previously, Previously, Previously.

May 21, 2021

Deploying Prometheus/Grafana, learning metrics

In the Cockpit team we recently started to generate and export metrics about our CI, and to collect and graph them with a Red Hat-internal Prometheus and Grafana instance. But I am not happy with this yet, as it does not answer all the questions that we have for it. Also, it is not accessible outside of Red Hat. On today’s Red Hat Day of Learning I wanted to learn how to deploy these components myself and to learn more about the PromQL language.

May 20, 2021

New HIG

In recent weeks, I’ve been working on a major update to the GNOME Human Interface Guidelines (HIG). The motivations for this work were varied. The HIG is fairly out of date, both in relation to contemporary design practice, as well as GNOME platform capabilities, and so needed updating. But we also wanted to improve the quality of the design guidance that we offer, and do a much better job at integrating our design docs with the rest of the developer platform.

As part of these changes, large parts of HIG have been restructured, and the vast majority of the content has been rewritten to some extent. It has also been expanded to cover a greater range of topics, including app naming, app icons, UI styling, accessibility and tooltips.

We’ve also changed how the HIG is hosted. It now has its own project on Gitlab, uses a lightweight markup language, and has its own website which is automatically updated to reflect the latest changes. This makes it a much more attractive proposition for contributors, and we’re already seeing the number of contributors increase.

A New Era

The new HIG comes at an exciting time for the GNOME platform — there have been some truly amazing developments recently!

First, GNOME design conventions have matured significantly and, I think, are in a really strong place. In the past we maybe didn’t have all the pieces in place, but now it feels like we do, and have a much more comprehensive, integrated and consistent design system.

Second, the platform itself has seen some major improvements, with the release of GTK 4 and the development of libadwaita and libhandy. These include a big set of features that designers and developers can take advantage of, like new widgets for tabs and drop-down lists, a new widget for placeholders, and GTK 4’s high-performance list and grid views.

Third, thanks to Bilal and Zander, we now have a whole collection of fantastic design apps, including App Icon Preview, Symbolic Preview, Icon Library, Color Palette and Typography.

Finally, we also have new tooling for API documentation, thanks to Emmanuele and gi-docgen.

If you put all of this together, I think you’ve got a recipe for a really great developer experience. The new HIG aims to bring all these different pieces together: it documents the current state of the design system, references all the latest widgets, tells you when to use the design apps, and links to the correct places in the new developer docs.

Design >< Code

The old HIG documents design conventions which require developers to either use cut and paste widgets, or to write their own custom UI. Often it isn’t clear how this can be done. Not a great developer experience.

The new version does away with this, and has a closer relationship with the developer platform. Every design pattern has a corresponding widget which is included in the developer platform, which the HIG explicitly references. This means that, if you’re using the HIG, you can be confident that everything it describes can be easily implemented.

As part of this effort to bring the HIG closer to the platform, there have been a few terminology changes. For example, “grids” are now “flow boxes”, and “presentation dialogs” have become “secondary windows”. This will hopefully help to ensure that designers and developers are talking the same language.

There’s also been a lot of back and forth with platform developers, to ensure that the design guidelines are in tune with technical reality (which hasn’t always been the case, historically).

Notable Changes

Most of the changes in the new HIG shouldn’t be disruptive: they are largely additions or a restructuring of existing advice. However, if you are responsible for an app, there are some changes which it’s worth being aware of.

Widget Updates

As mentioned, the HIG contains advice on using new widgets, and there are a number of places where it recommends using a new widget in place of an old one:

Design Pattern     Old Widget                        New Widget
View switchers     Linked toggle buttons             AdwViewSwitcherBar / HdyViewSwitcherBar
Drop-down lists    GtkComboBox, or a custom widget   GtkDropDown
Tabs               GtkNotebook                       AdwTabBar / HdyTabBar
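
To give a rough idea of the drop-down change, here is a minimal GTK 4 sketch in Python/PyGObject; the application id and the option strings are made up for the example.

    import gi
    gi.require_version("Gtk", "4.0")
    from gi.repository import Gtk

    def on_activate(app):
        win = Gtk.ApplicationWindow(application=app, title="Drop-down example")
        # GtkDropDown has a convenience constructor for a simple list of strings
        dropdown = Gtk.DropDown.new_from_strings(["Hourly", "Daily", "Weekly"])
        win.set_child(dropdown)
        win.present()

    app = Gtk.Application(application_id="org.example.DropDownDemo")
    app.connect("activate", on_activate)
    app.run(None)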

There are also a number of new widgets that we recommend where none previously existed. These include:

Utility Panes

Utility pane is a new term for an old concept: a sidebar which includes navigation, information or settings for the current view. It has been introduced to distinguish sidebars which contain supplementary content for the view from sidebars that act as the top-level navigation of a window (the latter being what we actually call a sidebar).

This distinction becomes particularly relevant when it comes to responsive behaviour: when the window becomes narrow, a sidebar needs to collapse and become the top-level view you go back to, whereas a utility pane needs to remain secondary to the main view.
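
To sketch what that behaviour looks like in code, libadwaita’s AdwFlap is one widget that provides this kind of secondary, collapsible pane. A rough Python/PyGObject sketch, assuming the Adw 1 bindings are available (libadwaita is still unstable at the time of writing):

    import gi
    gi.require_version("Gtk", "4.0")
    gi.require_version("Adw", "1")
    from gi.repository import Gtk, Adw

    def on_activate(app):
        Adw.init()  # initialise libadwaita before creating its widgets
        win = Gtk.ApplicationWindow(application=app, title="Utility pane example")

        flap = Adw.Flap()
        flap.set_fold_policy(Adw.FlapFoldPolicy.AUTO)    # fold only when the window is narrow
        flap.set_flap(Gtk.Label(label="Utility pane"))   # settings/info for the current view
        flap.set_content(Gtk.Label(label="Main view"))   # stays primary when the pane folds

        win.set_child(flap)
        win.present()

    app = Gtk.Application(application_id="org.example.UtilityPaneDemo")
    app.connect("activate", on_activate)
    app.run(None)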

Tooltips

The previous HIG was silent on the subject of tooltips, which led to a fair amount of inconsistency. This has been corrected in the new version. The advice boils down to: show tooltips for all the controls in header bars; otherwise, use them with restraint, but do so consistently.

Tooltip labels should be similar to button labels (for example, "Fullscreen") as opposed to being written as descriptions (for example, "Press to switch to fullscreen").
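
For example, a hypothetical fullscreen button in a header bar could be set up like this (GTK 4 / Python sketch):

    import gi
    gi.require_version("Gtk", "4.0")
    from gi.repository import Gtk

    def add_fullscreen_button(win: Gtk.ApplicationWindow) -> None:
        """Add a header bar button whose tooltip reads like a label."""
        header = Gtk.HeaderBar()
        button = Gtk.Button(icon_name="view-fullscreen-symbolic")
        button.set_tooltip_text("Fullscreen")  # not "Press to switch to fullscreen"
        header.pack_end(button)
        win.set_titlebar(header)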

Placeholders

The old HIG documents two types of placeholders: initial state placeholders, and empty state placeholders. The former were colourful and engaging, and intended to support onboarding, while the latter were muted and non-distracting, and for cases when a view was, well, empty.

The previous advice was a bit on the complicated side: it said to use both types of placeholder in the same view, depending on circumstances. A user would see the initial state placeholder on first run, but if they populated the app and then removed all the content, they’d get the empty placeholder.

For the new HIG, we’ve simplified the advice. We still have two types of placeholder, but we advise using just one for each view. So, if you have a main view which uses a more engaging placeholder, then that’s what should be shown whenever that view is empty.
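
In widget terms, libadwaita’s AdwStatusPage is one way to implement such a placeholder. A minimal Python sketch, where the icon name, title and description are purely illustrative:

    import gi
    gi.require_version("Gtk", "4.0")
    gi.require_version("Adw", "1")
    from gi.repository import Adw

    def make_placeholder() -> Adw.StatusPage:
        """Build the single placeholder shown whenever the main view is empty.

        Call this after GTK/libadwaita have been initialised.
        """
        page = Adw.StatusPage()
        page.set_icon_name("folder-symbolic")  # illustrative icon name
        page.set_title("No Projects")
        page.set_description("Use the + button to create your first project.")
        return page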

Accessibility

As part of the rewrite, I’ve reviewed GNOME’s old accessibility UI design guide, and incorporated any relevant material directly into the HIG. This means that we’ll be able to retire the UI section of the accessibility guide, and accessibility will be integrated into the standard design guidelines, as it should be. If you still use the old accessibility guide, it’s recommended that you switch to the HIG instead.

Next Steps

The new version of the HIG is yet to officially replace the old one, and is being hosted in a temporary location. Some of us are going to be working on an updated developer.gnome.org site in the near future; once that’s done we’ll publish the new HIG alongside it.

There are a few critical pieces that the HIG will need in order to shine. First, it will need a stable libadwaita release, since the HIG is primarily targeting GTK 4. Second, it needs a demo app, with examples of the key design patterns. Neither of these is likely to be ready for the initial publication of the new HIG, but they are both in progress and hopefully won’t be too far behind.