May 23, 2020

My experience applying to Outreachy (back in september, 2019)!

May 23, 2020

I always saw this incomplete, half-written draft about my experience applying to the Outreachy internship, lying in one corner of my folder. At first I thought I should just delete it because I’m already done with the internship & I’ve already written my wrap-up post describing a little about my experience. But then I just couldn’t bring myself to do it.

So, today, I gave it another reading, finally added the leftovers, and decided to post it (more for my own self). Because now I know Outreachy was a once-in-a-lifetime experience for me. And I’m never going to feel the same way again like how I felt during the first week of my Outreachy internship.

Here is what I wrote some 5 months back, with a little more of what I added today. Now that I have also added experience from the time after I finished the internship, it has some more reliable answers, I guess.


December 09, 2019

I should’ve ideally written this post on the first day of my Outreachy internship. But I felt that having a little more experience as an intern might make it a little more insightful.

So, here I am (after completing a whole first week into the Outreachy program), finally writing about my experience of “applying to Outreachy”.

Before I start with the Outreachy-centric experience itself, I want to first lay down certain things.

how I stepped into this open-source culture!

It starts way before I had even heard of the Outreachy program. I was in the 3rd year of my graduation (back in 2018, I am a graduate now) when a classmate of mine introduced me to the DGPLUG summer training program. I joined the summer training (in June), and thus joined my first ever community IRC channel, #dgplug on freenode. It was also the first time I got introduced to this extremely vast culture of Free and Open Source software (& even hardware now). Thereafter, I met a lot of people (mostly virtually & some of them in person just recently at PyCon India’19) working on a wide range of huge upstream open-source projects.

That’s how Open Source started in my life.


Now having stated that, let’s move to what I am actually supposed to share here. (I’ll mostly write in terms of answering the questions below.)

What motivated me to apply to Outreachy?

I remember the day when kushal gifted me the raspberry pi 4 board. And I asked back if I could gift something to him as a token of gratitude. What he answered that day was my biggest motivation behind applying to Outreachy. He replied, “the only gift I’ll receive from you will be the number of patches you submit to any upstream open-source project.” From there on, I was constantly hunting for opportunities to start contributing to some open-source project. My first few contributions were to the lymworkbook itself, then I moved to the Debian project for a few months. And finally, when I had some experience of how things work in upstream open-source projects, I decided I’d apply for the Dec–March ’19 round of Outreachy.

Thanks jasonbraganza. You helped me throughout. :)


What made me passionate about the project I applied to?

GNOME is one of the most reputed open-source projects. When I was looking at the list of projects available for the internship, I was quite overwhelmed by the names of the huge organisations listed there. So, the only filters I kept for myself were to look for a project that invited total beginners, a project with a technical stack that might help me in the long term, and, one thing that I still remember, a project that might help me learn something I was always scared of even trying. The project gtranslator ticked all of those boxes. It invited beginners who could learn during the internship, the technical stack was mostly GTK in pure C (something very new for me then), and finally, C was the language that had haunted me all throughout my life (I mean, ok, 5 years out of 22).

And very soon I realised, I chose the best project for me. :)


How did I find information?

Internet!

I got all my information from the internet. I read a lot of blogs from the Outreachy organisers and lots of tweets on the Outreachy internships Twitter handle. I scanned through past Outreachy alums’ blogs. I did a lot of personal research & by then I had my aim quite clear. I knew what I wanted to achieve out of this program. I knew how big & valuable it is to get to be a part of a huge open-source project, and I knew how enriching it is to be a part of a community of people who’re excellent in whatever they’re doing, on & off.

Although at that time, the idea was to just learn & gather as much as I could so that later I could put all that learning into finding a full-time position.

But trust me. I got a lot more than that.

I got really awesome mentors, a really awesome community, and got to work on a really awesome project. I built up this whole lot of self-confidence in me. I inculcated this habit of asking wherever & whenever I’m stuck. I just got everything from there, everything which once seemed simply impossible to me.

And yes, later I got a full-time job too. (Although don’t get me wrong here, the job had nothing directly to do with the Outreachy program. But the soft skills which I developed during the Outreachy period seamlessly helped me throughout.) :)


Who did I ask questions from when I was stuck?

I asked my mentors all my questions. For other non-project related stuff, I approached the Outreachy organisers for help. I remember I reached out to a couple of alums too, when I had questions regarding some forms that we needed to fill in during the first 2 weeks of the internship (payment & tax related stuff). Everyone was super nice & kind, and everyone helped me to the best of their ability.


How did my mentor help me during the application process?

My mentor Daniel Garcia Moreno (danigm) helped me thoroughly. From giving me pointers for getting acquainted with the codebase on the very first day of the initial contribution period, to later giving me simple issues to solve in order to actually get started, to giving really nice, thorough reviews on all my small contributions. He literally provided the best possible help that a mentor could give. I thank him profusely for the time he selflessly gave me throughout the entire Outreachy internship period.

Daniel Mustieles García (dmustieles), being the kindest person, has always been a source of inspiration & motivation, even after I finished my internship.

I just can’t thank them enough, in my little set of words. :)


What would I tell someone who is worried about applying to Outreachy?

It’s not as hard as it looks from the outside. If you really want to learn and are motivated enough to devote a good span of your time towards learning more, then don’t give it a second thought. Just apply!

Write exactly what you want to achieve out of the program in the application, be honest with your time and work commitments, and always remember there are really kind people sitting on the other side who are, by all means, willing to give you a chance to change your life for the better.

May 22, 2020

Silverblue: pretty good family OS

I’m the go-to IT guy in the family, so my relatives rely on me when it comes to computers and the software on them. In the past I also helped them with Windows and macOS computers, but at some point I just gave up. I don’t know those systems well enough to administer them effectively and I don’t even have much interest in them. So I asked them to decide: either you use Linux, which I know and can effectively help you with, or you ask someone else for help.

Long story short: I (mostly remotely) support quite a few Fedora (the Linux of my choice) users in my family now. It’s a fairly easy task. Usually after I set up the machine I don’t hear from the user very often. Just once every 6 months to a year, typically when I visit them, I upgrade the machine to the new release and check whether everything works. But Fedora upgrades have become so easy and reliable that recently I have usually just found out that they had already done it by themselves.

But there was still one recurring problem: even though they performed upgrades, probably because those were a big enough thing to catch their attention, they didn’t act on normal updates, and I often found them with outdated applications such as Firefox.

I could set up automated DNF updates running in the background, but it’s really not the safest option. And that’s where Fedora Silverblue comes to the rescue. Applications run as flatpaks, which can be, and in fact by default are, updated automatically in the background. And it’s pretty safe because the updates are atomic and the app is not affected until you restart it.

The same goes for system updates. rpm-ostree can prepare the update in the background and the user switches to it once the computer is restarted.
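rpm-ostree can also be told to stage those updates automatically. A minimal sketch of the relevant part of /etc/rpm-ostreed.conf, assuming a version of rpm-ostree that supports the "stage" policy and ships the rpm-ostreed-automatic.timer unit:

[Daemon]
# Download and stage OS updates in the background;
# they only take effect on the next boot.
AutomaticUpdatePolicy=stage

After changing it, restart the rpm-ostreed service and enable the timer (systemctl enable --now rpm-ostreed-automatic.timer).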

So I thought: the user base of Silverblue typically consists of developers and power users used to the container-based workflow, but hey, it could actually be a pretty good system for the users I support in my family.

I got an opportunity to try it out some time ago. I visited my mom and decided to upgrade her laptop to Fedora 32. Everything would have gone well if my son hadn’t pulled the power cord out during the process. The laptop is old and has a dead battery, so it resulted in an immediate shutdown. And that’s never good during a system upgrade. Instead of manually fixing broken packages, which is a lengthy process, I decided to install Silverblue.

The fruits of it came a week later. My mom called me to say that she was experiencing some graphical glitches and hangs with Fedora 32. Probably some regression in the drivers/Mesa. It’s a T400s from 2009 and neither Intel nor anyone else does thorough regression testing on such old models, I suppose. On standard Fedora Workstation my mom would have been screwed because there is no easy way back.

But it’s a different story on Silverblue. I just sent her one command over Telegram:

rpm-ostree rebase fedora/31/x86_64/silverblue

She copy-pasted it into Terminal, pressed Enter, and 5 minutes later she was booting into Fedora 31.

And the best thing about it is not that it’s so easy to upgrade and roll back in Silverblue, but that the apps are not affected by it. I know that if I let my mom roll back to Fedora 31, she will find her applications there just like she left them in Fedora 32. The same versions, same settings…

P.S. my mom’s laptop is from 2009, but Fedora Workstation/Silverblue flies on it. Who would have thought that GNOME Shell animations could be smooth on an 11-year-old laptop. Kudos to everyone who helped to land all the performance optimizations in the last several releases!

Handy 1 Alpha 1 and Migrating to GNOME

Handy 0.80.0

A few days ago we released the first alpha of Handy 1, known as Handy 0.80.0. It comes with tons of new features, such as:

  • HdyWindow and its companion widgets, a free-form unified window that Alexander presented in a blog post;
  • HdyDeck, that you can picture as a swipeable and spatialization-aware stack (see the quick sketch after this list);
  • HdyViewSwitcherTitle, a simpler way to implement a view switcher in a titlebar;
  • overhauled HdyActionRow and HdyExpanderRow;
  • many smaller widget refinements;
  • overhauled theming support, implemented with SASS and supporting dark variants and per-theme stylesheets;
  • vastly improved Glade support;
  • a cleaned-up API; see the migration documentation.
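
To give a rough idea of what the new widgets feel like in code, here is a minimal, hypothetical C sketch using HdyDeck (overview_page and details_page stand in for your own widgets, and the exact API may differ slightly in this alpha):

GtkWidget *deck = hdy_deck_new ();

/* Children are added like to any other container; the deck shows
 * one child at a time and lets the user swipe between them. */
gtk_container_add (GTK_CONTAINER (deck), overview_page);
gtk_container_add (GTK_CONTAINER (deck), details_page);

/* Allow swiping back to the previously visible child */
hdy_deck_set_can_swipe_back (HDY_DECK (deck), TRUE);
hdy_deck_set_visible_child (HDY_DECK (deck), details_page);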

Keep in mind this is an alpha release of a new major version: it breaks compatibility with previous versions, and more API- and ABI-breaking alphas will be released.

We plan to release betas starting from 0.90.0, following the GNOME 3.38 development schedule, and 1.0.0 alongside GNOME 3.38. This means that if your application follows the GNOME schedule and still targets GTK 3, you can safely start migrating to Handy 1 via this first alpha version.

We Just Moved

Handy just moved to GNOME’s GitLab; you will now find it at gitlab.gnome.org/GNOME/libhandy, and its documentation at gnome.pages.gitlab.gnome.org/libhandy. Remember to update the URLs in your projects!

The latest stable version is 0.0.13, and it can now be found here. You can follow the evolution of the 0.0.* versions on the libhandy-0-0 branch here, but we are clearly focusing on the upcoming 1.0 version.

I am sure you will love these changes! 😀

May 21, 2020

App Icon Preview 2.0.0 released 🎉

App Icon Preview is the very first utility made by the GNOME Design Tooling team, created originally by Zander Brown a year ago. Since then, we have been crafting other small yet useful utilities for the GNOME design team.

The original application was written in Vala and I wanted to port it to Rust for various reasons:

  • We needed librsvg::SvgHandle.set_stylesheet to hide a few elements that are part of the new icon template
  • After looking at the various SVG cleaning libraries/scripts out there, svgcleaner was already in Rust and using it would allow us to tweak the cleaning options to whatever fits the best for the design team usage
  • The rest of the design tools were already in Rust and we could have a shared library for the common icon rendering stuff someday
  • I love Rust
App Icon Preview
App Icon Preview previewing an exported icon

 

So I had been working on a Rust port for the last two months, whenever time allowed. Yesterday, I finally got the motivation to finish it!

Here's a quick list of the new features/bug fixes you can expect other than the Rust port:

  • We finally support previewing exported SVG icons, and as you might expect, creating nightly icons for them as well
  • The SVG icons are now optimized when exported
  • The baseplate/grid are now hidden from the template
  • A new fancy icon!
App Icon Preview
App Icon Preview's new shiny icon!

As the new release consisted of a complete rewrite along with bug fixes and some new improvements/features, we would appreciate some testing. You can grab it from Flathub and report your issues on this GitLab repository.

Enjoy!

May 20, 2020

Friends of GNOME Update May 2020

Welcome to the May 2020 Friends of GNOME Update!

A photo of Marina Zhurakhinskaya in front of a photo of a group of women from the GNOME Women's Dinner.
“Women party at guadec” by gonzalemario is licensed under CC BY 2.0

GNOME on the Road

We might be at home, but we still have a lot to say!

Community Engagement Challenge Update

Free software projects want to grow their contributor bases, and one of the most important ways to do that is to make it easy (and hopefully even fun) to get involved. We’ve teamed up with Endless to work on the Community Engagement Challenge, to get more people involved in GNOME. There is an opportunity to win over $20,000 USD in prizes. Entries for ideas for projects to engage new and potential community members are open until July 1.

GNOME Social Hours

Deb Nicholson joined us for the Social Hour this month. She is an expert in free software policy and communities. The conversation focused on how roadmapping and delegation can help grow a healthy community.

There will be another Social Hour in early June, which will be announced on Discourse.

What counts as “GNOME”?

The Board of Directors proposed a new system for how GNOME software is defined. It includes official GNOME software, which will be decided by the release team, and GNOME Circle, apps using the GNOME platform and libraries which extend it. The conversation is happening right now on Discourse, so please join in and share your thoughts.

GTK4 Updates

Core GTK Developer Emmanuele Bassi continues to work on GTK4, including on accessibility and documentation.

Infrastructure Updates

The Infrastructure Team has recently updated a lot of really important software for us, including Discourse, Indico, and Rocket.chat. On a personal note, I am extremely excited about their work updating etherpad.gnome.org, a collaboration resource I use every day.

Interns, Interns, Interns: GSoC and GSoD

We are so pleased to share that we have selected our Google Summer of Code interns! We are very excited to work with these 14 interns on their projects, which cover everything from refreshing gnome.org to UI improvements and the development of adaptive technology. Mentors come from across the project, and include Brand Manager Caroline Henriksen.

We will also be participating in Google Season of Docs. There are three mentors, including Emmanuele, ready to work with technical writers who want to help make GNOME a better project through better documentation. You can read about our projects.

Posts from the Community

Thank you!

Thank you for supporting us in the ways you do! If you’re not already a Friend of GNOME, please consider joining to support the activities of the Foundation and its staff.

We’re currently running a Spring fundraiser geared towards growing GNOME in Africa and WebKitGTK development. Tell your friends!

 

Media in GTK 4

Showing moving pictures is ever more important. GTK 4 will make it easier for GTK apps to show animations; be that a programmatic animation, a webm file or a live stream.

Everything is paintable

Before looking at animations, it is worth spending a little bit of time on the underlying abstractions that GTK uses for content that can be drawn. In GTK 2 and 3, that was mainly GdkPixbuf: you load a file, and you get a block of pixel data (more or less in a single format). If you wanted to animate it, there was GdkPixbufAnimation, but it is fair to say that it was not a very successful API.

GTK 4 brings a new API called GdkPaintable that was inspired by the CSS Houdini effort. It is very flexible—anything that you can plausibly draw can be a GdkPaintable. The content can be resizable (like svg), or change over time (like webm).

Widgets that typically show image content, like GtkImage or GtkPicture, know how to use paintables. And many things that in the past would have produced pixel data in some form can now be represented as paintables: textures, icons, and even widgets.

If you have more specialized needs, anything that can be captured in a GtkSnapshot can be turned into a paintable with gtk_snapshot_to_paintable(). If you make a custom widget that wants to draw a paintable, that is very straightforward. Just call gdk_paintable_snapshot().
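For instance, a custom widget’s snapshot vfunc can delegate its drawing to a paintable with a call roughly like this (a minimal sketch; MyWidget and its paintable field are assumptions, not GTK API):

static void
my_widget_snapshot (GtkWidget *widget, GtkSnapshot *snapshot)
{
  MyWidget *self = MY_WIDGET (widget);

  /* Let the paintable draw itself at the widget's current size */
  gdk_paintable_snapshot (self->paintable,
                          GDK_SNAPSHOT (snapshot),
                          gtk_widget_get_width (widget),
                          gtk_widget_get_height (widget));
}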

Getting animated

As I’ve said earlier, paintables can change their content over time. All it takes is for them to emit the ::contents-changed signal, and widgets like GtkPicture will do the right thing and update their display.

So, where do we get a GdkPaintable that changes its content? We can load it from a file, using GTK 4’s built-in GtkMediaFile API. This is a high-level API, comparable to GstPlayer: you stuff in a URI, and you get an object that has a play() function and a pause() function, and works as a paintable.
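In code, that can look roughly like this (a minimal sketch; "movie.webm" is just a placeholder file name):

GtkMediaStream *stream = gtk_media_file_new_for_filename ("movie.webm");

/* A GtkMediaFile is also a GdkPaintable, so it can be shown directly */
GtkWidget *picture = gtk_picture_new_for_paintable (GDK_PAINTABLE (stream));

gtk_media_stream_set_loop (stream, TRUE);
gtk_media_stream_play (stream);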

GTK ships with two implementations of GtkMediaFile, one using gstreamer and another one using ffmpeg. Since we don’t want to make either of these a hard dependency of GTK,  they are loadable modules.

You can open the GTK inspector to find out which one is in use:

Keeping control

The GtkMediaFile API is what gtk4-widget-factory demos with the animated GTK logo on its frontpage:

As you can see, it is not just a moving picture; there are media controls there too – you get these for free by using the GtkVideo widget.

Beyond the basics

Loading animations from files is maybe not that exciting, so here is another example that goes a little further. It is a little weekend project that combines GtkVideo, libportal and pipewire to demonstrate how to show a video stream in a GTK application.

The bad news is that we haven’t found a permanent home for the supporting glue code yet (a GstSink, a GdkPaintable and a GtkMediaStream). It doesn’t fit into GTK since, as mentioned, we don’t want to depend on gstreamer, and it doesn’t fit into gstreamer since GTK 4 isn’t released yet. We will certainly work that out before too long, since it is very convenient to turn a gstreamer pipeline into a paintable with a few lines of code.

The good news is that the core of the code is just a few lines:

fd = xdp_portal_open_pipewire_remote_for_camera (portal);
stream = gtk_gst_media_stream_new_for_pipewire_fd (fd, NULL);
gtk_video_set_media_stream (video, stream);

 

What I learned about Linux device development: Part I

I have been reading the first three chapters of the Linux Device Drivers book this week. I have been reading this book in parallel with another book of 600 pages (related to Peruvian economic history since 1889), so the progress I have made this week is productive enough for me. I have also put the implementation of the scull driver into practice.

The mentioned book is a little bit outdated, of course, because there are new ways to do things that were added to the kernel later. So the purpose of this post is to show these outdated parts, what the new way to do them is, and also to extend some things that I did not see covered in the book.

Do not use mknod

When registering a char region with register_chrdev_region you have to pass it a specific major number and minor number (in the first argument). The book suggests “your drivers should almost certainly be using alloc_chrdev_region rather than register_chrdev_region”.
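For reference, requesting a dynamically allocated region for the four scull devices looks roughly like this (a minimal sketch, reusing the book’s scull naming):

dev_t devno;
int res;

/* Let the kernel pick a free major number; reserve 4 minors starting at 0 */
res = alloc_chrdev_region (&devno, 0, 4, "scull");
if (res < 0)
    return res;

scull_major = MAJOR (devno);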

In fact, it’s better to let the kernel pick a free major number for you. The annoying part of it is that once you have created your driver and loaded it with the insmod command, the device is not automatically added to the /dev directory. The book says that when distributing a driver you will have to provide a script that creates the device files manually (usually called MAKEDEV); in this case your script should manually create /dev/scull0, /dev/scull1, /dev/scull2 and /dev/scull3. For that you do:

mknod /dev/${device}0 c $major 0
mknod /dev/${device}1 c $major 1
mknod /dev/${device}2 c $major 2
mknod /dev/${device}3 c $major 3

However, if you created your device dynamically as the book suggested, and you have to tell mknod the major number, how can you tell what major number to use? The book does it by looking at /proc/devices and parsing the major number from it: that’s annoying but understandable, since that book targeted the 2.6.10 kernel version.

Instead…

The new method to do this is through udev. What the kernel has is a pair of functions called class_create and device_create.

So do this:

static struct class *scull_class = NULL;

/* ... */

scull_class = class_create (THIS_MODULE, "scull");
if (IS_ERR (scull_class)) {
    res = PTR_ERR (scull_class);
    goto error;
}

This piece of code will create a folder /sys/class/scull/ in the sysfs virtual file system.

Then… for each minor, in this case 0, 1, 2, 3 (4 devices):

static struct scull_dev *scull_devices = NULL;

/* ... */

device = device_create (scull_class, NULL, devno, NULL, "scull%d", minor);
if (IS_ERR (device)) {
    res = PTR_ERR (device);
    goto error;
}

This will automatically create the folders scull0 to scull3 in /sys/class/scull/.

$ ls /sys/class/scull/
scull0  scull1  scull2  scull3

and when that happens the device nodes /dev/scull0, …, /dev/scull3 will be created.

And of course, do not forget to clean up your device and class:

/* For each i in 0, 1, 2, 3 */
device_destroy (scull_class, MKDEV (scull_major, scull_minor + i));
class_destroy (scull_class);

Device permissions

If you have followed the above recommendations you will realize that you need superuser permissions to open the files /dev/scull0, …, /dev/scull3. The script I mentioned also took care of assigning permissions manually. You can specify permissions programmatically, too. That is what the tty device does.

So once you have created the class scull_class, you can do the following:

scull_class->devnode = scull_devnode;

where scull_devnode is defined like:

static char *
scull_devnode (struct device * dev, umode_t *mode)
{
  if (!mode)
    return NULL;

  *mode = 0666;
  return NULL;
}


Adding support for O_APPEND

The scull device example, at least as far as I read, does not support appending text to the end of the file, which is an operation performed with echo foo >> /dev/scull0, for example.

In your .write virtual function (wrapped with a mutex), do the following:

static ssize_t
scull_write (struct file * filp, const char __user * buf, size_t count,
    loff_t * f_pos)
{
	/* ... */

	if (filp->f_flags & O_APPEND)
		*f_pos = dev->size;

Now that the file position is right at the end, writing will happen from there.


That is all for now. I am not an expert in this area yet. I plan to continue studying polling and interrupts, and after that I hope that within the next three weeks at most I can create a simple and stupid mouse driver from scratch.

EDIT: The book does mention the existence of udev, but it does not use it in the scull example. I have just realized it; it is on page 403.

Playing back arbitrary frames with appsrc

If you have used GStreamer you may have used source elements like filesrc or v4l2src. Both of them use an existing source to play back a video: the former takes a video file as input and the latter takes input from the camera. But imagine you want to create a video by hand. For example, videotestsrc, the element that displays a test (card) pattern, creates this pattern by filling a buffer by hand.

I will not talk about creating an element similar to videotestsrc. I will show an example program in which you programmatically tell GStreamer what to display at a given timestamp or frame.


The following example will display

appsrc-example-colors.gif

  • from second 0 to 1: a blue frame
  • from second 1 to 2: a green frame
  • from second 2 to 3: a red frame

and repeat it for the next frames.


First, in line 104, we use the appsrc element; this element will allow us to push buffers by hand. We need to specify the caps we will use: caps=video/x-raw,format=RGBx,width=640,height=460.

Now in line 108, we connect to the signal “need-data”, which will be emitted as soon as the appsrc internal queue starts running out of data. Because initially there is no data in the queue, it will be emitted as soon as possible. When this signal gets emitted, we will start pushing data (in this case the frames of solid blue, green and red colors), and this is what is done in the function push_data of line 24. But before that, note that if we had not connected to the “need-data” signal, we would have seen a black screen.

In line 109, we connect to the signal “enough-data”. This signal will be emitted when the appsrc internal queue size exceeds the maximum allowed queue size (given by the property “max-bytes”).

For this example, I should probably have increased the “max-bytes” property, since the queue’s maximum size will always be exceeded. I will leave that to you. The queue’s maximum size will be exceeded because we are pushing frames of 640 * 460 * 4 (width * height * 4 [RGBx] channels) = 1177600 bytes, and the default queue “max-bytes” property is 200000.

Ok, let’s go to the signal handler of “need-data”, line 24, the function push_data. Here is the interesting part. In line 35, a buffer is first allocated with the size of width * height * 4. Then, in lines 36-37, I timestamp the buffers. The first frame will have a timestamp of 0; the second frame, of 1 second; the third one, of 2 seconds… and so on. The duration assigned, in this example, is 1 second for all the frames. This makes each frame play back for 1 second before the next one is displayed. Try playing with timestamp and duration and you will change the rate at which these frames are displayed.

Now the part where we start to write data to the buffer begins. We have to map the buffer in GST_MAP_WRITE mode for that (line 40). Then we start to write data. I picked RGBx to make it easier to write data, so I can cast the data as a guint32 array and assign each item in the array to a color (lines 44-46). Note that guint32 has 4 bytes, but we are just using 3 colors: in RGBx, the x “channel” is ignored.

Once the buffer is filled, we need to emit the “push-buffer” signal so the buffer is taken by the appsrc element. If we do not do that, we will see a black screen.
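Since the code listing referenced by the line numbers above is not reproduced here, the following is a rough, self-contained sketch of what such a “need-data” handler can look like (names like push_data and num_frame are illustrative, not necessarily those of the original example):

static void
push_data (GstElement *appsrc, guint unused_size, gpointer user_data)
{
  /* Solid blue, green and red in RGBx memory order (little-endian guint32) */
  static const guint32 colors[] = { 0x00ff0000, 0x0000ff00, 0x000000ff };
  static guint64 num_frame = 0;
  gsize size = 640 * 460 * 4;   /* width * height * 4 bytes per pixel (RGBx) */
  GstBuffer *buffer = gst_buffer_new_and_alloc (size);
  GstMapInfo map;
  GstFlowReturn ret;
  guint32 *pixels;
  gsize i;

  /* Frame N starts at N seconds and is shown for 1 second */
  GST_BUFFER_PTS (buffer) = num_frame * GST_SECOND;
  GST_BUFFER_DURATION (buffer) = GST_SECOND;

  /* Map the buffer for writing and fill it with a single solid color */
  gst_buffer_map (buffer, &map, GST_MAP_WRITE);
  pixels = (guint32 *) map.data;
  for (i = 0; i < size / 4; i++)
    pixels[i] = colors[num_frame % 3];
  gst_buffer_unmap (buffer, &map);

  /* Hand the buffer over to appsrc; without this nothing is displayed */
  g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
  gst_buffer_unref (buffer);

  num_frame++;
}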


That is all for now. Sorry if there are things not used in the code; this is an old example and I did not have much time to change it.

Now, as a task, try this with RGB or other colorspaces from here.

I made this very quick and small example when working on an MR for gst-plugins-base to be able to apply an effect just in a region of a given frame (GstVideoRegionOfInterestMeta). I needed to see whether what I did worked on videos without GstVideoMeta, like in this example.

May 19, 2020

Help Grow WebKitGTK

We’re asking you to help us set our priorities when you donate to the GNOME Foundation this spring. You have the option of asking us to focus on building GNOME in Africa or WebKitGTK development.

A banner that reads "Growing Together" with photos of the GNOME community and GNOME logo.

WebKitGTK is not only an exciting project for GNOME, but a necessary step in preparing for our GTK4 release. We’ve been growing the project, with a new release just the other day! We have a lot more development to do, and it’s something we are hoping to prioritize. You can let us know if you think WebKitGTK should be a priority by donating today and marking your donation in support of WebKitGTK development.

WebKitGTK is a rendering engine for projects that need any kind of web integration. It can handle HTML/CSS applications and web browsers, and is useful for everything from desktop computers to mobile devices like phones and tablets. We believe the web is for everyone, and we support this belief by making accessibility one of the project’s core principles.

Right now, the main focus is cleaning up the project to make the port to GTK4 smoother. In addition to ensuring there are fast paths for efficient rendering, moving existing users, and incorporating user requirements, this will make it easier for future contributors to find new pathways to get involved.

In order to accomplish this, we could use funds in a number of ways:

  • paying for developer time;
  • hiring an intern to work on WebKitGTK;
  • supporting hackfests; or
  • purchasing equipment necessary for development.

We’re asking you to help us grow the WebKitGTK project. It’s a necessary step in the development of GTK4. WebKitGTK will help build a free web by helping more people create the tools they and others need. So please, donate today and vote for WebKitGTK.

g_assert_no_errno() and GLib 2.65.1

It’s the start of a new GLib release cycle, and so it’s time to share what people have been contributing so far. GLib 2.65.1 will be out soon, and it will contain a new test macro, g_assert_no_errno(). This checks that a POSIX-style function (like, say, rmdir()) succeeds when run. If the function fails (and indicates that by returning a negative integer) then g_assert_no_errno() will print out the error message corresponding to the current value of errno.

This should slightly simplify tests which have to use POSIX-style functions which don’t support GError.
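For example, a test that previously had to check the return value and print strerror(errno) by hand can now be written as something like this (a minimal sketch; test_dir is a placeholder):

static void
test_cleanup (void)
{
  /* Aborts with the message for the current errno if rmdir() fails */
  g_assert_no_errno (rmdir (test_dir));
}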

Thanks to Simon McVittie for his help in putting it together and getting it tested and merged.

xisxwayland checks for Xwayland ... or not

One of the more common issues we encounter debugging things is that users don't always know whether they're running on a Wayland or X11 session. Which I guess is a good advertisement for how far some of the compositors have come. The question "are you running on Xorg or Wayland" thus comes up a lot and suggestions previously included things like "run xeyes", "grep xinput list", "check xrandr" and so on and so forth. None of those are particularly scriptable, so there's a new tool around now: xisxwayland.

Run without arguments, it simply exits with exit code 0 if the X server is Xwayland, or 1 otherwise. Which means you can use it like this:


$ cat my-xorg-only-script.sh
#!/bin/bash

if xisxwayland; then
    echo "This is an Xwayland server!";
    exit 1
fi

...
Or, in the case where you have a human user (gasp!), you can ask them to run:

$ xisxwayland --verbose
Xwayland: YES
And even non-technical users should be able to interpret that.

Note that the script checks for Xwayland (hence the name) via the $DISPLAY environment variable, just like any X application. It does not check whether there's a Wayland compositor running but for most use-cases this doesn't matter anyway. For those where it matters you get to write your own script. Congratulations, I guess.

May 18, 2020

Patching Vendored Rust Dependencies

Recently I had a difficult time trying to patch a CVE in librsvg. The issue itself was simple to patch because Federico kindly backported the series of commits required to fix it to the branch we are using downstream. Problem was, one of the vendored deps in the old librsvg tarball did not build with our modern rustc, because the code contained a borrow error that was not caught by older versions of rustc. After finding the appropriate upstream fix, I tried naively patching the vendored dep, but that failed because cargo tries very hard to prevent you from patching its dependencies, and complains if the dependency does not match its checksum in Cargo.lock. I tried modifying the checksum in Cargo.lock, but then it complains that you modified the Cargo.lock. It seems cargo is designed to make patching dependencies as difficult as possible, and that not much thought was put into how cargo would be used from rpmbuild with no network access.

Anyway, it seems the kosher way to patch Rust dependencies is to add a [patch] section to librsvg’s Cargo.toml, but I could not figure out how to make that work. Eventually, I got some help: you can edit the .cargo-checksum.json of the vendored dependency and change “files” to an empty array, like so:

diff --git a/vendor/cssparser/.cargo-checksum.json b/vendor/cssparser/.cargo-checksum.json
index 246bb70..713372d 100644
--- a/vendor/cssparser/.cargo-checksum.json
+++ b/vendor/cssparser/.cargo-checksum.json
@@ -1 +1 @@
-{"files":{".cargo-ok":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",".travis.yml":"f1fb4b65964c81bc1240544267ea334f554ca38ae7a74d57066f4d47d2b5d568","Cargo.toml":"7807f16d417eb1a6ede56cd4ba2da6c5c63e4530289b3f0848f4b154e18eba02","LICENSE":"fab3dd6bdab226f1c08630b1dd917e11fcb4ec5e1e020e2c16f83a0a13863e85","README.md":"c5781e673335f37ed3d7acb119f8ed33efdf6eb75a7094b7da2abe0c3230adb8","build.rs":"b29fc57747f79914d1c2fb541e2bb15a003028bb62751dcb901081ccc174b119","build/match_byte.rs":"2c84b8ca5884347d2007f49aecbd85b4c7582085526e2704399817249996e19b","docs/.nojekyll":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","docs/404.html":"025861f76f8d1f6d67c20ab624c6e418f4f824385e2dd8ad8732c4ea563c6a2e","docs/index.html":"025861f76f8d1f6d67c20ab624c6e418f4f824385e2dd8ad8732c4ea563c6a2e","src/color.rs":"c60f1b0ab7a2a6213e434604ee33f78e7ef74347f325d86d0b9192d8225ae1cc","src/cow_rc_str.rs":"541216f8ef74ee3cc5cbbc1347e5f32ed66588c401851c9a7d68b867aede1de0","src/from_bytes.rs":"331fe63af2123ae3675b61928a69461b5ac77799fff3ce9978c55cf2c558f4ff","src/lib.rs":"46c377e0c9a75780d5cb0bcf4dfb960f0fb2a996a13e7349bb111b9082252233","src/macros.rs":"adb9773c157890381556ea83d7942dcc676f99eea71abbb6afeffee1e3f28960","src/nth.rs":"5c70fb542d1376cddab69922eeb4c05e4fcf8f413f27563a2af50f72a47c8f8c","src/parser.rs":"9ed4aec998221eb2d2ba99db2f9f82a02399fb0c3b8500627f68f5aab872adde","src/rules_and_declarations.rs":"be2c4f3f3bb673d866575b6cb6084f1879dff07356d583ca9a3595f63b7f916f","src/serializer.rs":"4ccfc9b4fe994aab3803662bbf31cc25052a6a39531073a867b14b224afe42dd","src/size_of_tests.rs":"e5f63c8c18721cc3ff7a5407e84f9889ffa10e66da96e8510a696c3e00ad72d5","src/tests.rs":"80b02c80ab0fd580dad9206615c918e0db7dff63dfed0feeedb66f317d24b24b","src/tokenizer.rs":"429b2cba419cf8b923fbcc32d3bd34c0b39284ebfcb9fc29b8eb8643d8d5f312","src/unicode_range.rs":"c1c4ed2493e09d248c526ce1ef8575a5f8258da3962b64ffc814ef3bdf9780d0"},"package":"8a807ac3ab7a217829c2a3b65732b926b2befe6a35f33b4bf8b503692430f223"}
\ No newline at end of file
+{"files":{},"package":"8a807ac3ab7a217829c2a3b65732b926b2befe6a35f33b4bf8b503692430f223"}

Then cargo will stop complaining and you can patch the dependency. Success!

May 17, 2020

Survival Analysis for Deep Learning Tutorial for TensorFlow 2

A while back, I posted the Survival Analysis for Deep Learning tutorial. This tutorial was written for TensorFlow 1 using the tf.estimators API. The changes between version 1 and the current TensorFlow 2 are quite significant, which is why the code does not run when using a recent TensorFlow version. Therefore, I created a new version of the tutorial that is compatible with TensorFlow 2. The text is basically identical, but the training and evaluation procedure changed.

The complete notebook is available on GitHub, or you can run it directly using Google Colaboratory.

Notes on porting to TensorFlow 2

A nice feature of TensorFlow 2 is that in order to write custom metrics (such as concordance index) for TensorBoard, you don’t need to create a Summary protocol buffer manually; instead, it suffices to call tf.summary.scalar and pass it a name and a float. So instead of

from sksurv.metrics import concordance_index_censored
from tensorflow.core.framework import summary_pb2
c_index_metric = concordance_index_censored(…)[0]
writer = tf.summary.FileWriterCache.get(output_dir)
buf = summary_pb2.Summary(value=[summary_pb2.Summary.Value(
tag="c-index", simple_value=c_index_metric)])
writer.add_summary(buf, global_step=global_step)

you can just do

from sksurv.metrics import concordance_index_censored

with tf.summary.create_file_writer(output_dir).as_default():
    c_index_metric = concordance_index_censored(…)[0]
    tf.summary.scalar("c-index", c_index_metric, step=step)

Another feature that I liked is that you can now iterate over an instance of tf.data.Dataset and directly access the tensors and their values. This is much more convenient than having to call make_one_shot_iterator first, which gives you an iterator, which you call get_next() on to get actual tensors.

Unfortunately, I also encountered some negatives when moving to TensorFlow 2. First of all, there’s currently no officially supported way to produce a view of the executed graph that is identical to what you get with TensorFlow 1, unless you use the Keras training loop with the TensorBoard callback. There’s tf.summary.trace_export, which, as described in this guide, sounds like it would produce the graph; however, using this approach you can only view individual operations in TensorBoard, but you can’t inspect the size of the input and output tensors of an operation. After searching for a while, I eventually found the answer in a Stack Overflow post, and, as it turns out, that is exactly what the TensorBoard callback is doing.

Another thing I found odd is that if you define your custom loss as a subclass of tf.keras.losses.Loss, it insists that there are only two inputs, y_true and y_pred. In the case of Cox’s proportional hazards loss, the true label comprises an event indicator and an indicator matrix specifying which pairs in a batch are comparable. Luckily, the contents of y_true don’t get checked, so you can just pass a list, but I would prefer to write something like

loss_fn(y_true_event=y_event, y_true_riskset=y_riskset, y_pred=pred_risk_score)

instead of

loss_fn(y_true=[y_event, y_riskset], y_pred=pred_risk_score)

Finally, although eager execution is now enabled by default, the code runs significantly faster in graph mode, i.e. when annotating your model’s call method with @tf.function. I guess you are only supposed to use eager execution for debugging purposes.

May 16, 2020

Now I am a member of the The Document Foundation

The Document Foundation logo

Well, it took me some months to mention it here, but The Document Foundation has honored me by accepting my membership request:

Dear Ismael Olea,

We are pleased to inform you that, as of 2020-01-01, your membership has been officially filed. You are now acknowledged as member of The Document Foundation according to its bylaws.

Kind Regards,

The Document Foundation Membership Committee

Little things that make you a bit happier. Thank you!

May 15, 2020

GSoC 2020 - Gnome

Gnome and Google Summer of Code

If you are wondering about how I ended up here —

Seeing my colleagues applying for GSoC this year, I started my search and made some contributions to Gnome’s projects. So I put in an application for one of Gnome’s projects (only one application, in case you were wondering). On the 4th of May at 23:45 IST, I received the acceptance email from the GSoC community (Yay!).

The project I applied to is libhandy (a library of GUI widgets for phones). There I’ll be working on implementing a new widget (more description here).

Thanks to Adrien Plazas (my mentor for the programme) and Alexander Mikhaylenko for all their support along the way.

Being new to open source, I’m looking forward to new experiences.

Congratulations to fellow GSoCers!

Update on GNOME documentation project and infra

As you may have noticed, GNOME was recently accepted as a participating organization in the Season of Docs 2020 program (thanks Kristi Progri, Shaun McCance, and Emmanuele Bassi for your help with this).

While we are eagerly awaiting potential participants to apply for the program and start contributing as documentation writers to GNOME user and developer documentation projects, I also wanted to summarize recent updates from the GNOME documentation infrastructure area.

Back in January this year when conferences were not solely virtual, Shaun McCance, David King and yours truly managed to meet right before FOSDEM 2020 in Brussels, Belgium for two days of working on a next-gen site generator for GNOME user documentation.

As the largely unmaintained library-web running behind help.gnome.org remains one of the biggest pain points in the GNOME project, we have a long-term plan to replace it with Pintail. This generator, written by Shaun, builds Ducktype or Mallard documentation directly from Git repos, removing the need to handle Autotools or Meson or any other build system in a tarball, as opposed to library-web, which, for historical reasons, depends on released tarballs generated with Autotools only, with no support for Meson.

With the help from the awesome GNOME Infrastructure team, we managed to get a test instance up and running at help-web.openshift.gnome.org for everybody to review. The sources are hosted at gitlab.gnome.org/Infrastructure/help.gnome.org. Please keep in mind that this all is very much a work in progress.

We summarized some of the top priority issues to resolve as follows:

  • Finalize the site design to be on par with what users can find on help.gnome.org currently.
  • Add translation support for the site, so users can access localized content.
  • Figure out the right GNOME stable branching scheme support to be used by the site.
    Initially, we plan to make the latest stable and unstable (master) versions of each GNOME module available. We want to have branching and linking configuration automated, without the need to manually reconfigure documentation modules for every new branch or release, as is currently required with library-web. However, adding that support to Pintail requires some non-trivial code to be written, and we had David looking into that particular area.

With the limited amount of time we had during the pre-FOSDEM days, we still managed to make considerable progress towards having a documentation site replacement ready for deployment. But as is common with volunteer-based projects, the pace of work often slows down once the intense hacking session is over.

I think we all realize that this area really needs more help from others in the community, be it Infrastructure, web design, localization, or simply community members submitting feedback. Please check out the help-web project on GNOME GitLab and leave comments, or, even better, submit merge requests.

Thank you Shaun and David for your time and help!

May 14, 2020

Converting to encrypted swap

I’m working on a firmware platform security specification which we will announce soon. Most of the things we test are firmware protections the user cannot actually change, but there are some runtime checks we do to make sure we can actually trust the results from the kernel. For instance, if you load unknown random modules into the kernel (which means it becomes “tainted”) you can’t actually trust the values reported. Another basic sanity check we do is checking for encrypted swap space.

My Lenovo P50 was installed with Fedora 29ish, a long time ago, with encrypted /home and unencrypted swap. It’s been upgraded quite a few times and I’m not super keen on re-installing it now. I wanted to upgrade to encrypted swap so I could pass the same requirements that I’m going to be asking people to meet.

Please don’t just copy and paste the below, as you will have a different swap partition to me. If you choose the wrong partition you will either overwrite your data or your root partition, so be careful. Caveat emptor, and all that.

So, let’s get started. Let’s turn off the existing swap partition:

[root@localhost ~]# cat /proc/swaps
Filename				Type		Size	Used	Priority
/dev/nvme0n1p4                          partition	5962748	0	-2
[root@localhost ~]# swapoff /dev/nvme0n1p4

Let’s overwrite the existing partition with zeros, as it might have data that we’d consider private:

dd if=/dev/zero of=/dev/nvme0n1p4 bs=102400

We then need to change /etc/fstab from

# Created by anaconda on Mon Dec  9 09:05:10 2019
...
UUID=97498951-0a49-4110-b838-dd90d02ea11f none                    swap    defaults        0 0
...

to

...
/dev/mapper/swap                          none                    swap    defaults        0 0    
...

We then need to append to /etc/crypttab:

swap /dev/nvme0n1p4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256

Reboot, and then cat /proc/swaps will show that swap is now on a dm device. Done!

The Web Platform Tests project

Web Browsers and Test Driven Development

Working on Web browser development is no easy feat, but if there’s something I’m personally very grateful for when it comes to collaborating with these kinds of software projects, it is their testing infrastructure and the peace of mind it provides me when making changes on a daily basis.

To help you understand the size of these projects: they involve tens of millions of lines of code (Chromium is ~25 million lines of code, followed closely by Firefox and WebKit) and around 200-300 new patches landing every day. Try to imagine, for one second, how we could make changes if we didn’t have such testing infrastructure. It would basically be utter and complete chaos and, especially, it would mean extremely buggy Web browsers, broken implementations of the Web Platform and tens (hundreds?) of new bugs and crashes piling up every day… not a good thing at all for Web browsers, which are these days some of the most widely used applications (and not just ‘the thing you use to browse the Web’).

The Chromium Trybots in action
The Chromium Trybots in action

Now, there are all different types of tests that Web engines run automatically on a regular basis: unit tests for checking that APIs work as expected, platform-specific tests to make sure that your software runs correctly in different environments, performance tests to help browsers stay fast without increasing their memory footprint too much… and then, of course, there are the tests to make sure that the Web engines at the core of these projects implement the Web Platform correctly according to the numerous standards and specifications available.

And it’s here that I would like to direct your attention with this post because, when it comes to this last kind of test (what we call “Web tests” or “layout tests”), each Web engine used to rely entirely on its own set of Web tests to make sure that it implemented the many different specifications correctly.

Clearly, there was some room for improvement here. It would be wonderful if we could have an engine-independent set of tests to check that a given implementation of the Web Platform works as expected, wouldn’t it? We could use that across different engines to make sure not only that they work as expected, but also that they behave in exactly the same way, and therefore give Web developers confidence that they can rely on the different specifications without having to implement engine-specific quirks.

Enter the Web Platform Tests project

Good news is that just such an ideal thing exists. It’s called the Web Platform Tests project. As it is concisely described on its official site:

“The web-platform-tests project is a cross-browser test suite for the Web-platform stack. Writing tests in a way that allows them to be run in all browsers gives browser projects confidence that they are shipping software which is compatible with other implementations, and that later implementations will be compatible with their implementations.”

I’d recommend visiting its website if you’re interested in the topic, watching the “Introduction to the web-platform-tests” video or even glancing at the git repository containing all the tests here. There, you can also find specific information such as how to run WPTs or how to write them. Also, you can have a look at the wpt.fyi dashboard to get a sense of what tests exist and how some of the main browsers are doing.

In short: I think it would be safe to say that this project is critical to the health of the whole Web Platform, and ultimately to Web developers. What’s very, very surprising is how long it took to get to where it is, since it came into being only about halfway into the history of the Web (there were earlier testing efforts at the W3C, but none that focused on automated & shared testing). But regardless of that, this is an interesting challenge: Filling in all of the missing unified tests, while new things are being added all the time!

Luckily, this was a challenge that did indeed take off, and all the major Web engines can now proudly say that they are regularly running about 36500 of these Web engine-independent tests (providing ~1.7 million sub-tests in total), and all the engines are showing a pass rate between 91% and 98%. See the numbers below, as extracted from today’s WPT data:

  • Chrome 84: 1680105 of 1714711 sub-tests passed (97.98% pass rate)
  • Edge 84: 1669977 of 1714195 sub-tests passed (97.42% pass rate)
  • Firefox 78: 1640985 of 1698418 sub-tests passed (96.62% pass rate)
  • Safari 105 preview: 1543625 of 1695743 sub-tests passed (91.03% pass rate)

And here at Igalia, we’ve recently had the opportunity to work on this for a little while and so I’d like to write a bit about that…

Upstreaming Chromium’s tests during the Coronavirus Outbreak

As you all know, we’re in the middle of an unprecedented world-wide crisis that is affecting everyone in one way or another. One particular consequence of it in the context of the Chromium project is that Chromium releases were paused for a while. On top of this, some constraints on what could be landed upstream were put in place to guarantee quality and stability of the Chromium platform during this strange period we’re going through these days.

These particular constraints impacted my team in that we couldn’t really keep working on the tasks we were working on up to that point, in the context of the Chromium project. Our involvement with the Blink Onion Soup 2.0 project usually requires the landing of relatively large refactors, and these kind of changes were forbidden for the time being.

Fortunately, we found an opportunity to collaborate in the meantime with the Web Platform Tests project by analyzing and trying to upstream many of the existing Chromium-specific tests that haven’t yet been unified. This is important because tests exist for widely used specifications, but if they aren’t in Web Platform Tests, their utility and benefits are limited to Chromium. If done well, this would mean that all of the tests that we managed to upstream would be immediately available for everyone else too. Firefox and WebKit-based browsers would not only be able to identify missing features and bugs, but also be provided with an extra set of tests to check that they were implementing these features correctly, and interoperably.

The WPT Dashboard
The WPT Dashboard

It was an interesting challenge considering that we had to switch very quickly from writing C++ code around the IPC layers of Chromium to analyzing, migrating and upstreaming Web tests from the huge pool of Chromium tests. We focused mainly on CSS Grid Layout, Flexbox, Masking and Filters related tests… but I think the results were quite good in the end:

As of today, I’m happy to report that, during the ~4 weeks we worked on this, my team migrated 240 Chromium-specific Web tests to the Web Platform Tests’ upstream repository, helping increase test coverage in other Web engines and thus helping improve interoperability among browsers:

  • CSS Flexbox: 89 tests migrated
  • CSS Filters: 44 tests migrated
  • CSS Masking: 13 tests migrated
  • CSS Grid Layout: 94 tests migrated

But there is more to this than just numbers. Ultimately, as I said before, these migrations should help identifying missing features and bugs in other Web engines, and that was precisely the case here. You can easily see this by checking the list of automatically created bugs in Firefox’s bugzilla, as well as some of the bugs filed in WebKit’s bugzilla during the time we worked on this.

…and note that this doesn’t even include the additional 96 Chromium-specific tests that we analyzed but determined were not yet eligible for migrating to WPT (normally because they relied on some internal Chromium API or non-standard behaviour), which would require further work to get them upstreamed. But that was a bit out of scope for those few weeks we could work on this, so we decided to focus on upstreaming the rest of tests instead.

Personally, I think this was a big win for the Web Platform and I’m very proud and happy to have had an opportunity to contribute to it during these dark times we’re living through, as part of my job at Igalia. Now I’m back to working on the Blink Onion Soup 2.0 project, which I think I should write about too, but that’s a topic for a different blog post.

Credit where credit is due

I wouldn’t want to finish off this blog post without acknowledging all the different contributors who tirelessly worked on this effort to help improve the Web Platform by providing the WPT project with this many more tests, so here it is:

From the Igalia side, my whole team was the one which took on this challenge, that is: Abhijeet, Antonio, Gyuyoung, Henrique, Julie, Shin and myself. Kudos everyone!

And from the reviewing side, many people chimed in but I’d like to thank in particular the following persons, who were deeply involved with the whole effort from beginning to end regardless of their affiliation: Christian Biesinger, David Grogan, Robert Ma, Stephen Chenney, Fredrik Söderquist, Manuel Rego Casasnovas and Javier Fernandez. Many thanks to all of you!

Take care and stay safe!

May 13, 2020

Putting container updates on a diet

For infrastructure reasons the Fedora flatpaks are delivered using the container registry, rather than OSTree (which is normally used for flatpaks). Container registries are quite inefficient for updates, so we have been getting complaints about large downloads.

I’ve been working on ways to make this more efficient. That would help not only the desktop use-case, as smaller downloads are also important in things like IoT devices. Also, less bandwidth used could lead to significant cost savings for large scale registries.

Containers already have some features designed to save download size. Let’s take a look at them in more detail to see why that often doesn’t work.

Consider a very simple Dockerfile:

FROM fedora:32
RUN dnf -y install httpd
COPY index.html /var/www/html/index.html
ENTRYPOINT /usr/sbin/httpd

This will produce a container image that looks like this:

The nice thing about this setup is that if you change the html layer and re-deploy you only need to download the last layer, because the other layers are unchanged.

However, as soon as one of the other layer changes you need to download the changed layer and all layers below it from scratch. For example, if there is a security issue and you need to update the base image all layers will change.

In practice, such updates actually change very little in the image. Most files are the same as the previous version, and the few that change are still similar to the previous version. If a client is doing an update from the previous version the old files are available, and if they could be reused that would save a lot of work.

One complexity of this is that we need to reconstruct the exact tar file that we would otherwise download, rather than the extracted files. This is because we need to checksum it and verify the checksum to protect against accidental or malicious modifications. For containers, the checksum that clients use is the checksum of the uncompressed tarball. Being uncompressed is fortunate for us, because reproducing identical compression is very painful.

To handle such reconstruction, I wrote a tool called tar-diff which does exactly what is needed here:

$ tar-diff old.tar.gz new.tar.gz delta.tardiff
$ tar xf old.tar.gz -C extracted/
$ tar-patch delta.tardiff extracted/ reconstructed.tar
$ gunzip new.tar.gz
$ cmp new.tar reconstructed.tar

I.e. it can use the extracted data from an old version, together with a small diff file to reconstruct the uncompressed tar file.

tar-diff uses knowledge of the tar format, as well as the bsdiff binary diff algorithm and zstd compression to create very small files for typical updates.

Here are some size comparisons for a few representative images. This shows the relative size of the deltas compared to the size of the changed layers:

  • Red Hat Universal Base Image 8.0 and 8.1
  • fluent/fluentd, a Ruby application on top of a small base image
  • OpenShift Enterprise prometheus releases
  • Fedora 30 flatpak runtime updates

These are some pretty impressive figures. It’s clear from this that some updates are really very small, yet we are all downloading massive files anyway. Some updates are larger, but even for those the deltas are in the realm of 10-15% of the original size. So, even in the worst case, deltas give around a 10x improvement.

For this to work we need to store the deltas on a container registry and have a way to find the deltas when pulling an image. Fortunately it turns out that the OCI specification is quite flexible, and there is a new project called OCI artifacts specifying how to store other types of binary data in a container registry.

So, I was able to add support for this in skopeo and podman, allowing them both to generate deltas and to use them to speed up downloads. Here is a short screen-cast of using this to generate and use deltas between two images stored on the Docker Hub:

All this is work in progress and the exact details of how to store deltas on the registry are still being discussed. However, I wanted to give a heads up about this because I think it is some really powerful technology that a lot of people might be interested in.

This Month in Mutter & GNOME Shell | April 2020

A bit later than usual, but nonetheless here are the changes that happened during April in GNOME Shell and Mutter.

GNOME Shell

The command-line extensions tool received a round of improvements, and now reports extension errors better. Switching the scale of a monitor should now update all interface elements properly in GNOME Shell. Another quality of life improvement that landed was the inclusion of ASCII alternatives in the search index, which for example allows “eteindre” to match “éteindre” (French for “power off”).

GNOME Shell now integrates with the parental controls technology being developed across the GNOME stack. If there are user restrictions in place, GNOME Shell now filters the applications that are not supposed to be used by the particular user.

One important improvement that landed during April is the rewrite of GNOME Shell’s calendar daemon. This updated version should prevent a lot of heavy background processing of events. Given the extent of the improvements, this is being considered for backporting to GNOME 3.36, but the size of the changes is also considerable. Testing and validation would be appreciated.

April then ended with the release of both GNOME Shell 3.36.2 as well as 3.37.1.

Mutter

On the Mutter side, we saw improvements to various parts of Clutter, such as ClutterOffscreenEffect, paint nodes, ClutterActorMeta, and various gestures. All these improvements are tiny steps towards a cleaner, more maintainable codebase, and thus are much welcomed.

The most prominent addition to Mutter during April was the introduction of Wayland fullscreen unredirect. This code has been under review for many months, and required coordination between different parts of the stack to work properly. Unfortunately, because it requires a very recent version of Xwayland (1.20.8) containing the fixes necessary for it to work properly, it is not suitable for backporting.

Improvements to the screencasting code landed in preparation for further improvements to GNOME Shell’s built-in screen recorder. We hope to be able to have a single code path for capturing the screen contents, regardless of whether the consumer is the Desktop portal, or the built-in recorder.

Also, an issue many users had run into, where the Super key did not work as it should when using multiple keyboard layouts in the X11 session, was fixed! A handful of other bug fixes and improvements made for the GNOME 3.36 stable branch were also included in the 3.36.2 release at the end of the month.

Like GNOME Shell, April ended with Mutter’s 3.37.1 release as well.

The need for some sort of a general trademark/logo license

As usual I am not a lawyer and this is not legal advice.

The problem of software licenses is fairly well understood and there are many established alternatives to choose from based on your needs. This is not the case for licenses governing assets such as images and logos and especially trademarks. Many organisations, such as Gnome and the Linux Foundation have their own trademark policy pages, but they seem to be tailored to those specific organizations. There does not seem to be a kind of a "General project trademark and logo license", for lack of a better term, that people could apply to their projects.

An example case: Meson's logo

The Meson build system's name is a registered trademark. In addition it has a logo which is not. The things we would want to do with it include:
  • Allow people to use the logo when referring to the project in an editorial fashion (this may already be a legal right regardless, in some jurisdictions at least, but IANAL and all that)
  • Allow people to use the logo in other applications that integrate with Meson, in particular IDEs should be able to use it in their GUIs
  • People should be allowed to change the image format to suit their needs, logos are typically provided as SVGs, but for icon use one might want to use PNG instead
  • People should not be allowed to use the logos and trademarks in a way that would imply they are endorsing any particular product or service
  • People should not be allowed to create and sell third party merchandising (shirts etc) using the logo
  • Achieve all this while maintaining compliance with DFSG, various corporate legal requirements, established best practices for trademark protection and all that.
Getting all of these at the same time is really hard. As an example, the Creative Commons licenses can be split in two groups based on whether they permit commercial use. All those that do permit it fail because they (seem to) permit the creation and sale of third party merchandise. Those that prohibit commercial use are problematic because they prevent companies from shipping a commercial IDE product that uses the logo to identify Meson integration (which is something we do want to support, that is what a logo is for after all). This could also be seen as discriminating against certain fields of endeavour, which is contrary to things like the GPL's freedom zero and DFSG guideline #6.

Due to this the current approach we have is that logo usage requires individual permission from me personally. This is an awful solution, but since I am just a random dude on the Internet with a lawyer budget of exactly zero, it's about the only thing I can do. What would be great is if the entities who do have the necessary resources and expertise would create such a license and would then publish it freely so FOSS projects could just use it just as easily as picking a license for their code.

May 12, 2020

Finally Landed on Planet GNOME

Hi to the people of planet GNOME!

Should I start with a deep introduction? Not sure! Okay, let me start from my first time with Linux. I installed my first Linux when I was around 17; it was openSUSE. I just burned the ISO and booted it, HAHAHA, it was the disc-burning era. After some years I got deeper into Linux. I think of Linux as ice cream: lots of flavors to eat, eat whatever you like, or make your own flavor. 4-5 years ago I was jumping between multiple distros. I tried many Linux distros, but now I'm settled on a custom-built Debian system. My first encounter with GNOME was on Fedora. I still love Fedora, but Debian is ultra-fast with only selected packages and it's easy to make my own flavor of it. This is my short Linux story.

This year I was selected for GSoC. Yay! Well, the fun part is that it's GNOME. I applied to only one org from the list, because it was GNOME or nothing.

I got in touch with one of my mentors a month before GSoC started. He told me to fix some newcomers' issues on GitLab. That was my first contribution to GNOME.

After some time I had made almost 10-12 commits. I forgot to explain what I did: I was contributing to the Sound Recorder app. It was coded long ago, with an old code base, and the whole UI part was hardcoded. I cleaned up lots of code and moved all the widgets into Glade UI files, which is essential for making future changes to the application UI. I also updated old implementations. All of this was done before I was selected for GSoC.

I'm ending the blog here; there's a lot more to explore on this planet.

I'm not sure what I'm going to write in the next blog post. Will it be a development blog or general talk? God knows.

Big thanks to my mentors Bilal Elmoussaoui and Felipe Borges, who are always available when I need them.

May 11, 2020

2020-05-11 Monday

  • Planning call, E-mail left & right; pleased to see the nice Demo Servers to make it much easier for people to try out an integration using any server, with more easy integrations coming.
  • Plugged away at admin & E-mail much of the day until late; Mondays!

Enforcing locking with C++ nonmovable types

Let's say you have a struct with some variable protected by a mutex like this:

struct UnsafeData {
  int x;
  std::mutex m;
};

You should only be able to change x when the mutex is being held. A typical solution is to make x private and then create a method like this:

void UnsafeData::set_x(int newx) {
  // WHOOPS, forgot to lock mutex here.
  x = newx;
}
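For comparison, here is a minimal sketch of the conventional hand-written fix (the struct name is illustrative); this is the kind of manual lock/unlock code the zero-overhead claim later in the post is measured against:

#include <mutex>

struct SafeData {
  void set_x(int newx) {
    std::lock_guard<std::mutex> guard(m); // lock is held until the end of this function
    x = newx;
  }

private:
  int x = 0;
  std::mutex m; // protects x
};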

It is a common mistake that when code is changed, someone, somewhere forgets to add a lock guard. The problem is even bigger if the variable is a full object or a handle that you would like to "pass out" to the caller so they can use it outside the body of the struct. This caller also needs to release the lock when it's done.

This brings up an interesting question: can we implement a scheme which only permits safe accesses to the variables, in a way that users can not circumvent [0], which has zero performance penalty compared to writing optimal lock/unlock function calls by hand, and which uses only standard C++?

Initial approaches

The first idea would be to do something like:

int& get_x(std::lock_guard<std::mutex> &lock);

This does not work because the lifetimes of the lock and the int reference are not enforced to be the same. It is entirely possible to drop the lock but keep the reference and then use x without the lock by accident.
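To make the failure mode concrete, here is a hedged sketch (the names are made up for illustration) of how the reference can outlive the lock with this approach:

#include <mutex>

struct Holder {
  int x = 0;
  std::mutex m;
  // First approach: the caller "proves" it holds the lock by passing the guard in.
  int& get_x(std::lock_guard<std::mutex>&) { return x; }
};

void misuse(Holder& h) {
  int* stale = nullptr;
  {
    std::lock_guard<std::mutex> lock(h.m);
    stale = &h.get_x(lock); // fine while the lock is held
  }                         // lock released here...
  *stale = 42;              // ...but the reference is still usable: unguarded access
}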

A second approach would be something like:

struct PointerAndLock {
  int *x;
  std::lock_guard<std::mutex> lock;
};

PointerAndLock get_x();

This is better, but does not work. Lock objects are special and they can't be copied or moved, so for this to work the lock object would have to be stored in the heap, meaning a call to new. You could pass it in as an out-param, but those are icky. That would also be problematic in that the caller creates the object uninitialised, meaning that x points to garbage values (or nullptr). Murphy's law states that sooner or later one of those gets used incorrectly. We'd want to make these cases impossible by construction.

The implementation

It turns out that this has not been possible to do until C++ added the concept of guaranteed copy elision. It means that it is possible to return objects from functions via neither a copy nor a move. It's as if they were automatically created in the scope of the calling function. If you are interested in how that works, googling for "return slot" should get you the information you need. With this the actual implementation is not particularly complex. First we have the data struct:

struct Proxy; // forward declaration so Data can name Proxy below

struct Data {
    friend struct Proxy;
    Proxy get_x();

private:
    int x;
    mutable std::mutex m;
};

This struct only holds the data. It does not manipulate it in any way. Every data member is private, so only the struct itself and its Proxy friend can poke at them directly. All accesses go via the Proxy struct, whose implementation is this:

struct Proxy {
    int &x;

    explicit Proxy(Data &input) : x(input.x), l(input.m) {}

    Proxy(const Proxy &) = delete;
    Proxy(Proxy &&) = delete;
    Proxy& operator=(const Proxy&) = delete;
    Proxy& operator=(Proxy &&) = delete;


private:
    std::lock_guard<std::mutex> l;
};

This struct is not copyable or movable. Once created the only things you can do with it are to access x and to destroy the entire object. Thanks to guaranteed copy elision, you can return it from a function, which is exactly what we need.

The creating function is simply:

Proxy Data::get_x() {
    return Proxy(*this);
}

Using the result feels nice and natural:

void set_x(Data &d) {
    // d.x = 3 does not compile
    auto p = d.get_x();
    p.x = 3;
}

This has all the requirements we need. Callers can only access data entities when they are holding the mutex [1]. They do not, and indeed can not, release the mutex accidentally, because it is marked private. The lifetime of the variable is tied to the lifetime of the lock; they both vanish at the exact same time. It is not possible to create half initialised or stale Proxy objects, they are always valid. Even better, the compiler produces assembly that is identical to the manual version, as can be seen via this handy godbolt link.

[0] Unless they manually reinterpret cast objects to char pointers and poke their internals by hand. There is no way to prevent this.

[1] Unless they manually create a pointer to the underlying int and stash it somewhere. There is no way to prevent this.

Test driving Flathub mirror for users in China

One of the reasons Flathub is relatively fast regardless of where it’s used is the CDN service provided by Fastly. This is not a good thing for users from China though, where Fastly, and thus Flathub, is blocked. Similar services operate in China, but as we are an open source project, it’s easy to guess our budget is close to zero.

Felix Yan, a fellow Arch developer, suggested some VPS providers that are considered “China-friendly”. In the end, I configured two new servers in Seoul using Oracle Cloud free tier.

As Flathub enforces the remote URL for historical reasons, switching to a different address requires performing some manual changes from the terminal:

  1. Check with flatpak remote-list whether Flathub remote is configured globally (system) or for the current user only (user). It may be both.
  2. Open /var/lib/flatpak/repo/config for system installations or ~/.local/share/flatpak/repo/config for user installations in a text editor.
  3. Find the [remote "flathub"] section. Change url to https://sel.flathub.org/repo/. Add a new line below with the following content: url-is-set=true

The entire section should look similar to:

[remote "flathub"]
url=https://sel.flathub.org/repo/
url-is-set=true
xa.title=Flathub
gpg-verify=true
gpg-verify-summary=true
xa.comment=Central repository of Flatpak applications
xa.description=Central repository of Flatpak applications
xa.icon=https://dl.flathub.org/repo/logo.svg
xa.homepage=https://flathub.org/

All subsequent Flatpak operations will go through one of the mirrors/proxies in Seoul now. I have also submitted a pull request which will make the process less manual.

If you are located in China, please give it a try and leave feedback in the related GitHub issue. We’ll keep it running and mention it in the setup instructions if it works well.

May 10, 2020

2020-05-10 Sunday

  • Up late, relaxed, sung & had a sermon later, Pizza lunch, out for a walk around a nearby golf-course; Parks & Rec.

GSoC 2020 with Epiphany

Fast-forwarding almost an entire school year since my last post, I am very happy to be writing this next one now as I’m participating again in Google Summer of Code 🙂

A student is allowed to participate in GSoC only twice in a lifetime. I’m thinking they chose this rule as an “only once in a lifetime” type of rule would have sounded too dramatic.

Humor aside, I have actually learned from a colleague over at university that Google implemented this rule because veteran students would keep returning every year taking most of the slots and thus leaving very little room for newcomers to make an entrance. Considering I’m a returning student myself, I can imagine that at the time I could have very easily been part of the problem 🙂

This year I will be working on Epiphany, also known as GNOME Web (or simply just Web), which is a GNOME-flavored browser.

Firstly, I’ll take a few sentences to congratulate the GNOME community on two fronts:

  1. The icons used in their applications are very nice 🙂 I’m no artist but one can imagine drawing them is no trivial feat so here’s a thanks to everyone who spent time on drawing these lovely pictures.
  2. The internal names used for applications are very interesting. I had heard about Epiphany on Matrix and had no idea it was a browser, but the name sounded quite intriguing and made me curiously wonder “what could an app named like that be?” 🙂

A few technical details

My last year’s GSoC project was focused on one large feature inside GNOME Games. In contrast to that, this year’s project consists of several smaller enhancements inside some of Epiphany’s components.

Specifically, I’ll be hacking inside the Preferences dialog, the History dialog and the Bookmarks popover.

I’ll also be working with a different programming language, as Games is written in Vala, whereas Epiphany is written in C. Working with C is definitely a bit more difficult than working with Vala. Thankfully, Builder’s code indexing combined with GNOME’s rich documentation does make it a learning experience rather than an impossible challenge.

Epiphany uses an external library called WebKit to actually render the web pages (this would be considered the heavy lifting core of any browser). The Epiphany project itself is responsible for all the other features of a browser, such as providing an interface for having multiple tabs, managing bookmarks, history, changing various settings and the list goes on.

Lastly, I’ll use this paragraph to send a thanks message to this year’s project mentors Michael Catanzaro and Jan-Michael Brummer for their guidance and help 🙂

Thanks for reading ! Have a great day and stay safe ! 🙂

May 09, 2020

6 months later: nautilus co-maintainership and GSoC mentorship

It’s been a little over six months since my last blog post. It’s not like nothing happened since; I’ve just not got used to this yet.

As Ondřej Holý has previously blogged, the (now old) news is that he has invited me to be co-maintainer of the Files app. I was hesitant at first. I’m not sure if it was what’s called impostor syndrome, but I did worry I was not qualified to be a maintainer, as I have no formal education in software engineering. I started to overcome my doubts while attending GUADEC 2019, thanks to everyone who encouraged me, and I finally cleared them thanks to Ondřej’s invitation and support. Now I’m happy to have accepted the challenge.

Later another challenge arose: becoming a mentor for a Google Summer of Code project. The first few times I was asked if I would be a mentor, I dismissed it as not having time, but the actual reason was I believed I would not come up with a good project idea. As it turned out, I had actually already written a project idea, I just had not realized it until Felipe Borges told me it was a valid idea. And this past week the project has been accepted. Today I had a great conversation with my student, Apoorv Sachan, in our first scheduled IRC meeting. Now I’m also happy to have accepted this challenge too.

Now, enough with boring personal experiences, right? Okay, okay, I hear you, I promise next post will have screenshots of new developments in the Files app!

My GSoC Proposal Got Accepted For GNOME

I’m very happy to announce that my GSoC proposal for the GNOME git client app “gitg” got accepted.

You can check out my project here.

I’ve been very passionate about the open source community since the very first day I used Linux and learnt about it. I always wanted to contribute to open source projects especially to software I use daily.

I’m in my senior year of college, pursuing a Bachelor’s degree in Computer Engineering and Software Systems at Ain Shams University. So I saw this as a great opportunity to get more experience with the help of a mentor and an awesome community, to enhance my skills, while also helping improve, extend and make an impact on software I really like, such as GNOME, whose workflow, simplicity, and elegance I appreciate.

So I began this year by studying and learning more about the technologies used to develop GNOME software, e.g. GObject, GLib, etc. Then around February I saw the ideas list on the GNOME website and I was really interested in the gitg ideas. So I joined the gitg IRC channel, talked with the project maintainer (Alberto Fanjul, “albfan” on IRC) and told him that I was interested; he presented me with a newcomers’ issue and I made an MR for it. Then I created a pet project just to play around more with the libgit2-glib library, which is a wrapper for the libgit2 APIs.

Between the application period and the announcement of the accepted students, I’ve been trying to read the gitg code base in more detail. I also tried to contribute a feature I really like, one that exists in most diff tools: highlighting the changes within the modified lines. I’ve made an MR for the progress I’ve made so far; it’s still a WIP though, so I still need to keep improving and testing it.

I’m really excited and looking forward to working with this awesome community, and for the upcoming weeks. I’m looking forward to being a part of GNOME and contributing to it.

Tracker 2.99.1 and miners released

TL;DR: $TITLE, and a call for distributors to make it easily available in stable distros, more about that at the bottom.

Sometime this week (or last, depending how you count), Tracker 2.99.1 was released. Sam has been doing a fantastic series of blog posts documenting the progress. With my blogging frequency I’m far from stealing his thunder :). I will still add some retrospective here to highlight how important a milestone this is.

First of all, let’s give an idea of the magnitude of the changes so far:


[carlos@irma tracker]$ git diff origin/tracker-2.3..origin/master --stat -- docs examples src tests utils |tail -n 1
788 files changed, 20475 insertions(+), 66384 deletions(-)

[carlos@irma tracker-miners]$ git diff origin/tracker-miners-2.3..origin/master --stat -- data docs src tests | tail -n 1
354 files changed, 39422 insertions(+), 6027 deletions(-)

What happened there? A little more than half of the insertions in tracker-miners (and corresponding deletions in tracker) can be attributed to code from libtracker-miner, libtracker-control and corresponding tests moving to tracker-miners. Those libraries are no longer public, but given those are either unused or easily replaceable, that’s not even the most notable change :).

The changes globally could be described as “things falling in place”, Tracker got more cohesive, versatile and tested than it ever was, we put a lot of care and attention to detail, and we hope you like the result. Let’s break down the highlights.

Understanding SPARQL

Sometime a couple years ago, I got fed up after several failed attempts at implementing support for property paths, this wound up into a rewrite of the SPARQL parser. This was part of Tracker 2.2.0 and brought its own benefits, ancient history.

Getting to the point, having the expression tree in the new parser closely modeled after the SPARQL 1.1 grammar definition helped get a perfect snapshot of what we don’t do, what we don’t do correctly and what we do extra. The parser was made to accept all correct SPARQL, and we had an `_unimplemented()` define in place to error out when interpreting the expression tree.

But that also gave me something to grep through and sigh at. This turned into many further reads of the SPARQL 1.1 specs, and a number of ideas about how to tackle them, or about whether we were restricted by compatibility concerns, as for some things we were limited by our own database structure.

Fast forward to today, the define is gone. Tracker covers the SPARQL 1.1 language in its entirety, warts and everything. The spec is from 2013, we just got there 7 years late :). Most notably, there’s:

  • Graphs: In a triple store, the aptly named triples consist of subject/predicate/object, and they belong within graphs. The object may point to elements in other graphs.

    In prior versions, we “supported graphs” in the language, but those were more a property of the triple’s object. This changes the semantics slightly in appearance but in fundamental ways, eg. no two graphs may have the same triple, and the ownership of the triple is backwards if subject and object are in different graphs.

    Now the implementation of graphs perfectly matches the description, and each graph becomes a good isolated unit to grant access to in the case of sandboxing.

    We also support the ADD/MOVE/CLEAR/LOAD/DROP operations on whole graphs, to ease their management.

  • Services: The SERVICE syntax allows federating portions of your query graph pattern to external services, and operating transparently on that data as if it were local. This is not exactly new in Tracker 2.99.x, but it now supports DBus services in addition to HTTP ones. More notes about why this is key further down.
  • New query forms, DESCRIBE/CONSTRUCT: This syntax sits alongside SELECT. DESCRIBE is a simple form to get RDF triples fully describing a resource, CONSTRUCT is a more powerful data extraction clause that allows serializing arbitrary portions of the triple set, even all of it, and even across RDF schemas.

Of all 11 documents from the SPARQL recommendations, we are essentially just missing support for HTTP endpoints in order to entirely pass for a SPARQL 1.1 store. We obviously don’t mean to compete with enterprise-level databases, but we are completionists and will get to implementing the full recommendations someday :).

There is no central store

The tracker-store service got stripped of everything that made it special. You were already able to create private stores; making those public via DBus is now one API call away. And its simple DBus API to perform/restore backups is now superseded by the CONSTRUCT and LOAD syntax.

We have essentially democratized triple stores, in this picture (and a sandboxed world) it does not make sense to have a singleton default one, so the tracker-store process itself is no more. Each miner (Filesystem, RSS) has its own store, made public on its DBus name. TrackerSparqlConnection constructors let you specifically create a local store, or connect to a specific DBus/HTTP service.

No central service? New paradigm!

Did you use to store/modify data in tracker-store? There’s some bad news: It’s no longer for you to do that, scram from our lawn!

You are still much welcome to create your own private store, there you can do as you please, even rolling something else than Nepomuk.

But wait, how can you keep your own store and still consume data indexed by the Tracker miners? Here the SERVICE syntax comes into play, allowing you to deal with miner data and your own altogether. A simple hypothetical example:

# Query favorite files
SELECT ?u {
  SERVICE <dbus:org.freedesktop.Tracker3.Miner.Files> {
    ?u a nfo:FileDataObject
  }
  ?u mylocaldata:isFavorite true
}

As per the grammar definition, the SERVICE syntax can only be used from Query forms, not Update ones. This is essentially the language conspiring to keep a clear ownership model, where other services are not yours to modify.

If you are only interested in accessing one service, you can use tracker_sparql_connection_bus_new and perform queries directly to the remote service.
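For illustration, here is a rough C-style sketch of what that could look like; the service name is taken from the example above, but treat the exact function signatures as assumptions and check the Tracker 3 API documentation:

#include <libtracker-sparql/tracker-sparql.h>

int main (void)
{
  GError *error = NULL;

  /* Connect to the SPARQL endpoint exposed by the filesystem miner on the
   * session bus; the signature used here is an assumption, check the docs. */
  TrackerSparqlConnection *conn =
    tracker_sparql_connection_bus_new ("org.freedesktop.Tracker3.Miner.Files",
                                       NULL, NULL, &error);
  if (!conn)
    g_error ("Could not connect: %s", error->message);

  TrackerSparqlCursor *cursor =
    tracker_sparql_connection_query (conn,
                                     "SELECT ?u { ?u a nfo:FileDataObject }",
                                     NULL, &error);

  /* Print the first column of every result row. */
  while (cursor && tracker_sparql_cursor_next (cursor, NULL, NULL))
    g_print ("%s\n", tracker_sparql_cursor_get_string (cursor, 0, NULL));

  g_clear_object (&cursor);
  g_object_unref (conn);
  return 0;
}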

A web presence

It’s all about appearance these days, that’s why newscasters don’t switch the half of the suit they wear. A long time ago, we used to have the tracker-project.org domain, the domain expired and eventually got squatted.

That normally sucks in itself; for us it was a bit of a pickle, as RDF (and our own ontologies) stands largely on URIs. That means live software producing links out of our control, and those links going to pastes/bugs/forums all over the internet. Luckily for us, tracker-project.org is a terrible choice of name for a porn site.

We couldn’t simply make the change either; in many regards those links were ABI. With 3.x on the way, ABI was no longer a problem. Sam did things properly, so we now have a site and a proper repository of ontologies.

Nepomuk is dead, long live Nepomuk

Nepomuk is a dead project. Despite its site being currently alive, it’s been dead for extended periods of time over the last 2 years. That’s 11.5M EUR of your european taxpayer money slowly fading away.

We no longer think we should consider it “an upstream”, so we have decided to go our own way. After some minor sanitization and URI rewriting, the Nepomuk ontology is preserved mostly as-is, under our own control.

But remember, Nepomuk is just our “reference” ontology, a swiss army knife for whatever might need to be stored in a desktop. You can always roll your own.

Tracker-miner-fs data layout

For sandboxing to be any useful, there must be some actual data separation. The tracker-miner-fs service now stores things in several graphs:

  • tracker:FileSystem
  • tracker:Audio
  • tracker:Video
  • tracker:Documents
  • tracker:Software

And it commits further to the separation between “Data Objects” (e.g. files) and “Information Elements” (e.g. what its content represents). Both aspects of a “file” still reference each other, but they simply used to be the same previously.

The tracker:FileSystem graph is the backbone of file system data; it contains all file Data Objects, and folders. All other graphs store the related Information Elements (e.g. a song in a FLAC file).

Resources are interconnected between graphs; depending on the graphs you have access to, you will get a partial (yet coherent) view of the data.

CLI improvements

We have been making some changes around our CLI tools. With Tracker shifting its scope to being a good SPARQL triple store, the base set of CLI tools revolves around that, and can be seen as an equivalent of the sqlite3 CLI command.

We also have some SPARQL specific sugar, like tracker endpoint that lets you create transient SPARQL services.

All miner-specific subcommands, or those that relied implicitly on their details, moved to the tracker-miners repo; the tracker3 command is extensible to allow this.

Documentation

In case this was not clear, we want to be a general purpose data storage solution. We did spend quite some time improving and extending the developer and ontology documentation, adding migration notes… there’s even an incipient SPARQL tutorial!

There is a sneak preview of the API documentation at our site. It’s nice being able to tell that again!

Better tests

Tracker additionally ships a small helper Python library to make it easy to write tests against the Tracker infrastructure. There are many new and deeper tests all over the place, e.g. around new syntax support.

Up next…

You’ve seen some talk about sandboxing, but nothing about sandboxing itself. That’s right, support for it is in a branch and will probably be part of 2.99.2. Now the path is paved for it to be transparent.

We currently are starting the race to update users. Sam got some nice progress on nautilus, and I just got started at shaving a yak on a cricket.

The porting is not completely straightforward. With a few nice exceptions, a good amount of the Tracker-using code around is stuck in some time-frozen, “as long as it works”, cargo-culted state. This sounds like a good opportunity to modernize queries, and introduce the usage of compiled statements. We are optimistic that we’ll get most major players ported in time, and we made 3.x able to install and run in parallel in case we miss the goal.

A call to application developers

We are no longer just “that indexer thingy”. If you need to store data with more depth than a table, if you missed your database design and relational algebra classes, or don’t miss them at all, we’ve got to talk :). Come visit us at #tracker.

A call to distributors

We made tracker and tracker-miners 3.x able to install and run in parallel to tracker 2.x, and we expect users to get updated to it over time.

Given that it will get reflected in nightly flatpaks, and Tracker miners are host services, we recommend that tracker3 development releases are made available or easy to install in current stable distribution releases. Early testers and ourselves will thank you.

Presenting Our Google Summer of Code Students!

Google has published the list of students accepted in the Google Summer of Code program. The accepted students work on open-source software. Pending monthly evaluations, the students receive a stipend from Google. Like last year, we’re mentoring three students!

Abhishek Kumar Singh will improve the Media Library. The current implementation will be initially simplified to deal with a single Gtk.FlowBox container. Asset tagging will provide the info to display in a new Folder View. The stretch goal is to display the assets by date.

Ayush Mittal will improve the Render experience. The current UI will be simplified, allowing the user to select a meaningful preset. Specifying the export quality for the officially supported encoders will be possible using a slider widget. The advanced options will still be available but not directly visible.

Vivek R will implement face/object tracking and blurring. A new GStreamer plugin will allow tracking a specified region in the video using OpenCV. The obtained tracking information is presented to the user to be reviewed and adjusted in a new UI Perspective. The user can apply the adjusted positions to a blur effect applied to the clip.

Please get in touch if you are interested in working with us. You can find us at https://riot.im/app/#/room/#pitivi:matrix.org

May 08, 2020

Wayland doesn't support $BLAH

Gather round children, it's analogy time! First, some definitions:

  • Wayland is a protocol that defines the communication between a display server (the "compositor") and a client, i.e. an application, though the actual communication is usually done by a toolkit like GTK or Qt.
  • A Wayland Compositor is an implementation of a display server that (usually but not necessarily) handles things like displaying stuff on screen and handling your input devices, among many other things. Common examples of Wayland Compositors are GNOME's mutter, KDE's KWin, weston, sway, etc.[1]

And now for the definitions we need for our analogy:

  • HTTP is a protocol to define the communication between a web server and a client (usually called the "browser")
  • A Web Browser is an implementation that (sometimes but not usually) handles things like displaying web sites correctly, among many other things. Common examples for Web Browsers are Firefox, Chrome, Edge, Safari, etc. [2]

And now for the analogy:

The following complaints are technically correct but otherwise rather pointless to make:

  • HTTP doesn't support CSS
  • HTTP doesn't support adblocking
  • HTTP doesn't render this or that website correctly

And much in the same style, the following complaints are technically correct but otherwise rather pointless to make:
  • Wayland doesn't support keyboard shortcuts
  • Wayland doesn't support screen sharing
  • Wayland doesn't render this or that application correctly

In most cases you may encounter (online or on your setup), saying "Wayland doesn't support $BLAH" is like saying "HTTP doesn't support $BLAH". The problem isn't with Wayland itself, it's a missing piece or a bug in the compositor you're using.

Likewise, saying "I don't like Wayland" is like saying "I don't like HTTP". The vast majority of users will have negative feelings towards the browser, not the transport protocol.

[1] Because they're implementations of a display server they can speak multiple protocols and some compositors can also be X11 window managers, much in the same way as you can switch between English and your native language(s).

[2] Because they're implementations of a web browser they can speak multiple protocols and some browsers can also be FTP file managers, much in the same way as... you get the point

May 07, 2020

Fractal: Google Summer of Code 2020

I'm glad to say that I'll participate again in GSoC, this time as a mentor. This summer we'll try to implement multi-account support in Fractal.

The selected student is Alejandro Dominguez (aledomu), who has been collaborating with Fractal since the Seville Hackfest in December 2018, doing great work in the backend. Alejandro is a young developer with a lot of energy and ideas, so it's great to have this kind of people working on GNOME.

I've not spent much time lately developing Fractal, as time and energy are limited, but I'll try to use this GSoC mentorship to get back to active Fractal development. My objective during this summer will be to stabilize, fix bugs, improve the performance and create a new release at the end of GSoC, because we have a lot of new functionality in master, but I haven't spent the time to do the release.

Google Summer of Code is a great opportunity for free software: it gives us a student working full time for three months on free software, so it's really appreciated in a community where a lot of the work is volunteer work.

There are a total of 14 projects selected for this GSoC round for GNOME; you can take a look at the full list on the GSoC page.

GNOME is not the default for Fedora Workstation

We recently had a Fedora AMA where one of the questions asked was why GNOME is the default desktop for Fedora Workstation. In the AMA we answered why GNOME had been chosen for Fedora Workstation, but we didn’t challenge the underlying assumption built into the way the question was asked, and the answer to that assumption is that it isn’t the default. What I mean with this is that Fedora Workstation isn’t a box of parts, where you have default options that can be replaced; it’s a carefully procured and assembled operating system aimed at developers, sysadmins and makers in general. If you replace one or more parts of it, then it stops being Fedora Workstation and starts being ‘build your own operating system OS’. There is nothing wrong with wanting to build your own operating systems, or finding it interesting to do so; I think a lot of us initially got into Linux due to enjoying doing that. And the Fedora project provides a lot of great infrastructure for people who want to, by themselves or by teaming up with others, build their own operating systems, which is why Fedora has so many spins and variants available.
The Fedora Workstation project is something we made using those tools and it has been tested and developed as an integrated whole, not as a collection of interchangeable components. The Fedora Workstation project might of course replace certain parts with other parts over time, like how we are migrating from X.org to Wayland. But at some point we are going to drop standalone X.org support and only support X applications through XWayland. And that is not the same as if each of our users individually did the same. While it might be technically possible for a skilled user to still get things moved back onto X for some time after we make the formal deprecation, the fact is that you would no longer be using ‘Fedora Workstation’. You would be using a homebrew OS that contains parts taken from Fedora Workstation.

So why am I making this distinction? To be crystal clear, it is not to hate on you for wanting to assemble your own OS; in fact we love having anyone with that passion as part of the Fedora community. I would of course love for you to share our vision and join the Fedora Workstation effort, but the same is true for all the other spins and variant communities we have within the Fedora community too. No, the reason is that we have a very specific goal of creating a stable and well working experience for our users with Fedora Workstation, and one of the ways we achieve this is by having a tightly integrated operating system that we test and develop as a whole. Because that is the operating system we as the Fedora Workstation project want to make. We believe that doing anything else creates an impossible QA matrix, because if you tell people that ‘hey, any part of this OS is replaceable and should still work’ you have essentially created a testing matrix for yourself of infinite size. And while as software engineers I am sure many of us find experiments like ‘wonder if I can get Fedora Workstation running on a BSD kernel’ or ‘I wonder if I can make it work if I replace glibc with Bionic’ fun and interesting, I am equally sure we all also realize that once we do that we are in self-support territory and that Fedora Workstation, or any other OS you use as your starting point, can’t be blamed if your system stops working very well. And replacing such a core thing as the desktop is no different from those other examples.

Having been in the game of trying to provide a high quality desktop experience both commercially in the form of RHEL Workstation and through our community efforts around Fedora Workstation, I have seen and experienced first hand the problems that the mindset of interchangeable desktops creates. For instance, before we switched to the Fedora Workstation branding and it was all just ‘Fedora’, I experienced reviewers complaining about missing features, features we had actually spent serious effort implementing, because the reviewer decided to review a different spin of Fedora than the GNOME one. Other cases I remember are of customers trying to fix a problem by switching desktops, only to discover that while the initial issue they wanted fixed got resolved by the switch, they now got a new batch of issues that was equally problematic for them. And we were left trying to figure out if we should try to fix the original problem, the new ones, or maybe the problems reported by users of a third desktop option. We also have had cases of users who, just like the reviewer mentioned earlier, assumed something was broken or missing because they were using a different desktop than the one where the feature was added. And at the same time, trying to add every feature everywhere would dilute our limited development resources so much that it made us move slowly and not have the resources to focus on getting ready for major changes in the hardware landscape, for instance.
So for RHEL we now only offer GNOME as the desktop and the same is true in Fedora Workstation, and that is not because we don’t understand that people enjoy experimenting with other desktops, but because it allows us to work with our customers and users and hardware partners on fixing the issues they have with our operating system, because it is a clearly defined entity, and on adding the features they need going forward and properly supporting the hardware they are using, as opposed to spreading ourselves so thin that we just run around putting on band-aids for the problems reported.
And in the longer run I actually believe this approach benefits those of you who want to build your own OS too, or use an OS built by another team around a different set of technologies, because while the improvements might come in a bit later for you, the work we now have the ability to undertake due to having a clear focus, like our work on adding HiDPI support, getting Wayland ready for desktop use or enabling Thunderbolt support in Linux, makes it a lot easier for these other projects to eventually add support for these things too.

Update: Adam Jackson’s oft-quoted response to the old ‘linux is about choice’ meme is also required reading for anyone wanting a high quality operating system.

May 06, 2020

It’s Happening!!

The email

It’s happening!! My GSoC proposal to GNOME has been accepted! I’m very thankful to my mentor, Alexander Mikhaylenko (@alexm aka @exalm), for having me on the project. I’ve already learned so much with his guidance. And I can’t wait to have a great time in the coming months.

I will be working on GNOME Games. It’s a video game launcher + emulator for several video game platforms. My work is to implement game collections, which will allow users to create, view and manage user-creatable and auto-generatable collections of games (like albums in photo viewers). Games is mostly written in Vala, and is packaged as a flatpak. You can get it from here.

An adventure ahead. So here we go!


Introduction

Hello everyone! I’m Neville and I’m an all-things-computer enthusiast. And this is my blog, where I tell the tales of my journey into open source!!

I’ve always been interested in being part of the open source community and contributing to the countless cool projects. Now that I’ve started contributing to open source projects, it feels very exciting to be part of it! The community is very nice and helpful. And I feel like I’m learning a lot in the process. And there’s a whole journey ahead of me.

So, welcome and stay tuned!!


May 05, 2020

gedit and gCSVedit on the Microsoft Store

A short blog post to let you know that:

gedit is now on the Microsoft Store! gedit for Windows. Yes, it works well, although as always there is room for improvement. It is just a beginning towards having sources of funding that would make full-time development of gedit possible in the long run.

I have also put gCSVedit on the Microsoft Store: gCSVedit for Windows. gCSVedit is a simple text editor for CSV/TSV/… files.

Virtual Fedora 32 release party

We’ve been organizing Fedora release parties for the Czech community since Fedora 15 (normally in Prague and Brno, once in Košice, Slovakia), but in these coronavirus times it seemed like we were out of luck. Not quite. We’ve decided to organize a virtual release party everyone can join from the comfort (and safety) of their homes.

Originally I was planning to use Jitsi.org with streaming to Youtube. Speakers would join the call on Jitsi.org and attendees would watch it on Youtube and comment under the Youtube stream or in our Telegram chat. But the stream was one minute delayed behind the call which didn’t promise an interactive event.

In the end we were offered a solution by Czech Technical University (BigBlueButton running on powerful physical hardware and with really good connectivity) and went for it, which turned out to be a great decision. I have never had a better video call experience. It was the first time I could fully utilize my FullHD webcam, there were virtually no delays, and BBB could hold 8 webcam streams in parallel and 40 participants in total without a hiccup. Afterwards people told me that when I was demoing GNOME 3.36 the GNOME Shell effects looked almost as smooth as if performed on the local machine.

Fedora 32 release party in full swing.

We put together a program of 4 talks on Fedora topics and had an open discussion afterwards. Most people used the integrated chat to comment and ask questions; a few besides the speakers used their voice, which is something I expected, because most people feel too intimidated to speak in front of strangers whom they can’t even see.

The event lasted for 3 hours and would have probably continued if I hadn’t had to stop it because I had to put kids to bed. The kids were the biggest challenge for me. Our offices were closed, so were public universities, so I couldn’t find any quiet private space to join the call from. So I was moderating the event and delivering my talk with my kids constantly crying in the background or demanding my attention. But I somehow managed and it was not a complete disaster.

What was originally an improvised solution turned out to be a pretty good experience. Participants said they would like to do it again or perhaps combine it with the physical release parties. Although a virtual event can’t deliver the same experience as an in-person one, it brings some sense of equal opportunity. No matter if you’re stuck at home with small kids, or with a disability, or if you’re a young student living in the countryside far away from a big city, you can join. In-person events are pretty selective in this regard.

We also have a recording from the event in case you’re interested (it’s in Czech).

If you can’t organize the traditional Fedora release party for your local community, don’t hesitate and organize a virtual one!

May 04, 2020

Freeplane now published at Flathub

As I announced in April, the Freeplane mind mapping software has now been accepted and published on Flathub: https://flathub.org/apps/details/org.freeplane.App.

Flathub screenshot.

Freeplane is a mind mapping, knowledge management and project management tool. Develop, organize and communicate your ideas and knowledge in the most effective way.

Freeplane is a fork of FreeMind and it is in active development. Now it’s ready to install on any Linux system with just point’n’click, through, for example, GNOME Software or any other flatpak-compatible software installation manager.

GNOME Software screenshot.

Enjoy!

Let’s welcome our 2020 GSoC interns!

It is Google Summer of Code season again and this year the GNOME project is lucky to have 14 new interns working on various projects ranging from improvements in our app ecosystem to core components and infrastructure.

The first period, from May 4 to June 1, is the Community Bonding period. Interns are expected to flock into our communication channels, ask questions, and get to know our project’s culture. Please, join me in welcoming our students and making sure they feel at home in our community.

This year we will be using Discourse as our main communication channel regarding the program, therefore if you are a mentor or intern, please make sure to check https://discourse.gnome.org/c/community/outreach for announcements. Feel free to create new topics if you have any questions. The GNOME GSoC admins will be monitoring the Outreach category and answering any doubts you might have.

Here is the list of interns and their respective projects https://summerofcode.withgoogle.com/organizations/5428225724907520/#projects

Tips for students

First of all, congratulations! This is just the beginning of your GNOME journey. Our project is almost 23 years old and likely older than some of you, but our community gets constantly renewed with new contributors passionate about software freedom. I encourage you to take some time to watch the recordings of Jonathan Blandford’s “The History of GNOME” talk in GUADEC 2017 so you can grasp how we have grown and evolved since 1997.

The first thing you want to do after celebrating your project’s acceptance is to contact your mentor (if they haven’t contacted you first).

Second, introduce yourself on our “Say Hello” topic! Don’t forget to mention that you are in GSoC 2020, the project you will be working on, and who’s your mentor.

Third, set up a blog where you will be logging your progress and talking directly to the broader community. In case you need help with that, ask your mentors or the GSoC admins. Intern blogs get added to Planet GNOME, which is a feed aggregator of posts written by dozens of GNOME Foundation members.

Many of us read Planet GNOME daily! Besides, some of our active contributors have participated as interns in past editions of GSoC. You can dig for their blogposts and get a better sense of how these progress reports are written.

It is totally normal for you to have questions and doubts about the program, to help with that, we will be hosting a Q&A on May 12 at 17:00 UTC in our RocketChat channel. All of you will receive an invitation by email this week too. If you can’t make it, feel free to join the channel at any time and ask questions there as well.

If during your internship you have a problem with your mentor (lack of communication, or misunderstandings, or deep disagreements, or anything else), don’t hesitate to report that to the GNOME GSoC administrators.

Last but not least, have fun!