July 17, 2019

Friends of GNOME Update – July 2019

Welcome to the July 2019 Friends of GNOME Update!

New Board of Directors

Members of the GNOME Foundation voted in the annual elections for this year’s board of directors. The current board consists of:

    • Allan Day
    • Carlos Soriano
    • Federico Mena Quintero
    • Robert McQueen
    • Philip Chimento
    • Britt Yazel
    • Tristan Van Berkom

Congratulations to the new board! You can learn more about the board and what it does online.

Where are we going?

OSCON takes place in Portland, OR, USA, July 17 – 18th, and we’ll be there! Feel free to stop by the booth and say hello. There is also a hackfest July 18 – 21st in Portland. The Engagement team, the Documentation team, and the GTK team are all currently scheduled to participate.

Programs coordinator Kristi Progri will be at DebConf 19 in Curitiba, Brazil later this month as well.

Of course, we’ll be at GUADEC, August 23 – 28th in Thessaloniki, Greece! Registration is now open. We hope to see you there!

What we’ve been up to

GTK development

We’re moving forward with exciting new things for GTK, including completing the consistent layout manager for GTK 4. We’re working on an API to make creating custom layouts easier. Focusing on usability across machines, we’ve put a significant amount of work into memory usage, to help things run more smoothly on small and low-powered devices.

We’re using GNOME!

Flatpak.org was running on Google Analytics, but that is no more! We are now using GNOME Matomo.

Inclusion, Diversity, and GNOME

GNOME is launching a Diversity and Inclusion (D&I) initiative to help the community become even better. They are working on revamping some web pages, working on the wiki, and putting together some special workshops and events to help people find their places within the community.

Check out the Annual Report!

Thanks to contributors, the board, and staff, we have a beautiful annual report that highlights what happened during the 2018 fiscal year. You can read it online.

Meet the GNOMEies

This month we highlighted Sriram Ramkrishna, known around free and open source software communities as Sri.

Thank you!

Thanks for reading! If you’d like this email delivered directly to your inbox, please become a Friend of GNOME and further support the project!

Meet Sriram Ramkrishna

Sriram Ramkrishna, frequently known as Sri, is perhaps GNOME’s oldest contributor. He’s been around the community for almost as long as it’s been around!

Can you tell us a bit more about yourself?

I’m one of the oldest members of GNOME, having recently passed my 50th birthday. I started in GNOME in late 1997; at the time I was a storage engineer working for Intel. I remember feeling amused when someone in GNOME heard my background and asked whether Intel was going to be involved. They weren’t, but they did get involved later. In fact, it’s because of GNOME that my work life changed from being a simple engineer to a multi-faceted person with not just technical skills but also soft skills.

I’m well known in a number of other communities — the free software community primarily, but also corporate open source, thanks to working 20 years at Intel.

What’s your role within the GNOME community?

I primarily do engagement work — social media, public relations, and talks in the community. But I also try to help solve specific problems within the project. One current project I’m working on is helping to improve GNOME extensions. I have an ongoing project to help with developer documentation using HotDoc. That has lagged somewhat, and I hope to find time to help lead that effort again.

Why did you get involved in GNOME?

Miguel was a charismatic leader, and attracted me that way. Plus I hate C++, and GNOME was C based. :D But more than that, GNOME was a project that, if you think about it, was audacious in its purpose: building a desktop in 1997 around an operating system that was primitive in terms of user experience and tooling. I wanted to be part of that.

Why are you still involved with GNOME?

Because GNOME is always a forward-thinking project. There is still a lot of exciting potential and it’s like we’re only now getting started. The past 20 years were all about getting to the stage where we can start doing some real innovation. We’ve reached parity with OSX and Windows — the mainstream desktops. But now we can leverage the power of ideas even further.

What are you working on now?

Well, right now I’m involved in building a market for Linux applications. It’s no more audacious than the concept of GNOME itself. Five years ago, I had this idea that, now that we had ubiquitous app technology, we could start building models that allow for compensation for free software developers, application stores so that developers can know how popular their apps are, and relationships with the users who use their applications. A lot of this is encapsulated in a conference called Libre Application Summit. We did two iterations of that, and this year we’re expanding the scope and changing the name. Linux Application Summit will be a joint collaboration with KDE, and hopefully distros in the future, to help create the conditions needed to build modern, useful applications on a free software platform.

What are you excited about right now — either in GNOME or free and open source software in general?

Other than the conference, I’m generally excited about where GNOME is going. I think we have challenges to overcome and I’m excited about overcoming those challenges. In the FOSS community in general, there are challenges with encroachment by big business, which I think is still trying to figure out how to exploit the labor of developers, and we should always be vigilant that we keep things fair and balanced between all parties.

What is a major challenge you see for the future of GNOME?

I think for GNOME as a platform, our challenge is to make sure that we have relevant documentation for users and developers. If there is one effort that I wish we could all participate in, it is that. It comes down to how low the barrier to entry is. How one picks one platform over another almost always depends on how quickly you can put together an application. Building a library of code, videos, and documentation is what will make GNOME successful. The second thing is that projects like GNOME Builder will also be critical to our success. I’m excited by the idea that I can build an application and have it be easily distributed everywhere, and I don’t have to use arcane tools to do it.

What do you think GNOME should focus on next?

Documentation, I think, is going to be important, along with building relationships with other organizations and having a very active foundation that puts its resources into building a solid infrastructure. So it’s not just one thing, but many.

Edited for content.

libinput's new thumb detection code

The average user has approximately one thumb per hand. That thumb comes in handy for a number of touchpad interactions. For example, moving the cursor with the index finger and clicking a button with the thumb. On so-called Clickpads we don't have separate buttons though. The touchpad itself acts as a button and software decides whether it's a left, right, or middle click by counting fingers and/or finger locations. Hence the need for thumb detection, because you may have two fingers on the touchpad (usually right click) but if those are the index and thumb, then really, it's just a single finger click.

libinput has had some thumb detection since the early days when we were still hand-carving bits with stone tools. But it was quite simplistic, as the old documentation illustrates: two zones on the touchpad, and a touch starting in the lower zone was always a thumb. Where a touch started in the upper thumb area, a timeout and movement thresholds would decide whether it was a thumb. Internally, the thumb states were, Schrödinger-esque, "NO", "YES", and "MAYBE". On top of that, we also had speed-based thumb detection: where a finger was moving fast enough, a new touch would always default to being a thumb, on the grounds that you have no business dropping fingers in the middle of a fast interaction. Such a simplistic approach worked well enough for a bunch of use-cases but failed gloriously in other cases.
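
To make that old heuristic concrete, here is a rough, purely illustrative sketch of the two-zone-plus-speed logic in C. It is not libinput's actual code, and all thresholds are made-up values:

/* Illustrative sketch only, not libinput's code. Touches starting in the
 * lower zone are thumbs; touches starting in the upper thumb area become
 * thumbs after a timeout with little movement; and a new touch while
 * another finger is moving fast defaults to being a thumb. */
#include <stdint.h>

enum thumb_state { THUMB_NO, THUMB_MAYBE, THUMB_YES };

struct touch {
    double start_y;      /* where the touch began (larger y = lower on the pad), in mm */
    double moved_mm;     /* distance travelled since the touch began */
    uint64_t age_ms;     /* time since the touch began */
};

static enum thumb_state
classify_touch (const struct touch *t,
                double lower_zone_y,            /* below this line: always a thumb */
                double upper_zone_y,            /* below this line: maybe a thumb */
                double other_finger_speed_mm_s) /* speed of the fastest other finger */
{
    /* Speed-based detection: a finger dropped mid-interaction is a thumb. */
    if (other_finger_speed_mm_s > 80.0)
        return THUMB_YES;

    if (t->start_y > lower_zone_y)
        return THUMB_YES;

    if (t->start_y > upper_zone_y) {
        /* Wait out a timeout; if the touch barely moved, call it a thumb. */
        if (t->age_ms > 300 && t->moved_mm < 2.0)
            return THUMB_YES;
        return THUMB_MAYBE;
    }

    return THUMB_NO;
}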

Thanks to Matt Mayfield’s work, we now have a much more sophisticated thumb detection algorithm. The speed detection is still there but it better accounts for pinch gestures and two-finger scrolling. The exclusion zones are still there but less final about the state of the touch: a thumb can escape that "jail" and contribute to pointer motion where necessary. The new documentation has a bit of a general overview. A requirement for well-working thumb detection, however, is that your device has the required (device-specific) thresholds set up. So go over to the debugging thumb thresholds documentation and start figuring out your device's thresholds.

As usual, if you notice any issues with the new code please let us know, ideally before the 1.14 release.

July 16, 2019

A personal story about 10× development

During the last few days there has been an ongoing Twitter storm about 10× developers. And like all the ones before it (and all the future ones that will inevitably happen) the debate immediately devolved into name calling and all the other things you'd expect from Twitter fights. This blog post is not about that. Instead it is about a personal experience with productivity that I got to see closer than I would have liked.

Some years ago I was working for company X on product Y. All in all it was quite a nice experience. We had a small team working on a code base that was pretty good. It had nice tests, not too many bugs, and when issues did arise they were usually easy to fix. Eventually the project was deemed good enough and we were transferred to work on different projects.

I have no idea what our "industry standard performance multiplier" was when we worked on that project, but for the sake of argument let's call it 1×.

The project I got transferred to was the thing of nightmares. It was a C++ project and all the bad things that have ever been said about C++ were true about that code base. There was not much code but it was utterly incomprehensible. There were massively deep inheritance hierarchies, compilation speed was measured in minutes for even the most trivial changes, and so on. It was managed by an architecture astronaut who, as one is wont to do, rewrote existing mature libraries as header-only template libraries that were buggy and untested (one could even say untestable).

Thus overnight I went from being a 1× down to being a 0.1× or possibly even a 0.01× developer. Simply trying to understand what a specific function was supposed to do took hours. There was, naturally, a big product launch coming up so we needed to get things finished quickly. All in all it was a stressful, frustrating and unpleasant situation to be in. And that was not the worst of it.

After a few weeks my manager wanted to talk to me in private. He was very concerned about the fact that I had not achieved any visible progress for a while. Then I explained to him in detail all the problems in the current project. I even demonstrated how compiling a simple helloworld-level program with the frameworks we had to use took tens of seconds on the beefiest i7 desktop machine I had available. He did not seem to be able to grasp any of that, as his only response was "but you used to be so productive in your previous project". Shortly thereafter the same boss started giving me not-at-all-thinly-veiled accusations that I was just slacking off and that this could lead to serious reprimands.

This story does not have a happy ending. The project eventually failed (due to completely different reasons, though), money was squandered and almost everyone involved got fired. In the aftermath I seriously considered getting out of the software engineering business altogether. The entire experience had been so miserable that becoming a 0× developer was seriously tempting.

Is there something we can learn from this?

The "×ness" of any developer does not exist in a vacuum but depends on many organizational things. The most obvious one is tooling. If you have a CI where tests take 30 minutes to run or your developers have underpowered old laptops, everyone's performance goes down. In fact, the overall health of the code base probably has a bigger effect on developer productivity than all developers' skills combined.

But even more important than technical issues are things that promote healthy team dynamics. These include things like blameless postmortems, openness to ideas from everyone, permission to try new things even if they may fail, stern weeding out of jerk behaviour and, ultimately, trust.

If you work on getting all of these things into your working environment, you may find yourself with a 10× team. And if you do, the entire concept of a single 10× developer becomes meaningless.

g_array_binary_search in GLib 2.61.2

The final API so far in this mini-series on new APIs in the GLib 2.62 series is g_array_binary_search(), put together by Emmanuel Fleury and based on code by Christian Hergert. It’s due to be released in 2.61.2 soon. But first, a reminder about GLib version numbering.

Like the rest of GNOME’s official module set, GLib follows an odd/even versioning scheme, where every odd minor version number, like 2.61.x, is an unstable release building up to an even minor version number, like 2.62.x, which is stable. APIs may be added in unstable releases. They may be modified or even removed (if they haven’t been in a stable release yet). So all of the APIs I’ve blogged about recently still have a chance to be tweaked or dropped if people find problems with them. So if you see a problem or think that one of these APIs would be awkward to use in some way, please say, sooner rather than later! They need fixing before they’re in a stable release.

Back to today’s API, g_array_binary_search(). As its name suggests, this does a binary search on an array (which it requires is already sorted). You can use it like this:

static gint
compare_guint64 (gconstpointer a,
                 gconstpointer b)
{
  guint64 uint64_a = *((guint64 *) a);
  guint64 uint64_b = *((guint64 *) b);

  if (uint64_a < uint64_b)
    return -1;
  else if (uint64_a > uint64_b)
    return 1;
  else
    return 0;
}

g_autoptr(GArray) my_array = g_array_new (FALSE, TRUE, sizeof (guint64));

for (guint i = 0; i < 100; i++)
  {
    guint64 random_uint64 = ( (guint64) g_random_int () << 32) | g_random_int ();
    g_array_append_val (my_array, random_uint64);
  }

g_array_sort (my_array, compare_guint64);

/* Is ‘1234’ in the array? If so, where? */
const guint64 search_uint64 = 1234;
guint search_index;
if (g_array_binary_search (my_array, &search_uint64, compare_guint64, &search_index))
  g_message ("Found ‘1234’ at index %u", search_index);
else
  g_message ("Didn’t find ‘1234’");

As all computer science algorithms courses will tell you, a binary search is faster than a linear search, so you should use this in preference to iterating over an array to find an element in it, where possible.

(That’s not entirely true: the overheads of accounting for the binary search bounds, and the slowness of scattered memory loads from the array in a binary search vs sequential access in a linear search, will probably make it slower than a linear search for small arrays. But both will be fast, and if you need to care about that level of performance, you should be using a custom data structure rather than GArray.)

July 15, 2019

GSOC Progress by Mid July

July marked the beginning of the second GSoC coding month. This month our goal is to make the diff bar model as accurate and intuitive as possible.

One of the biggest things I have learnt so far is how to contribute to upstream repositories on which our project depends.

In our case this was with libgit2: we discovered a bug in libgit2 while working on our project, and Albfan made this a perfect example to show me how to contribute upstream, how to raise bugs, and how to drive the discussion to get them solved.

https://github.com/libgit2/libgit2/issues/5153

While this got solved, we couldn’t wait for the solutions to get merged. So we figured out which patches work best for us, and I learnt how to apply patches to projects with Flatpak.

I really think the Flatpak team has done a great job on this one, and it was super easy and useful for me to get those patches working with my project. Without the Flatpak manifest, I don’t know how I would have pulled it off. 🙂

https://gitlab.gnome.org/gaurav1999/diferencia/commit/3c5c93137acfd3c11d5aeeffdd03af68523d6e3e

I tried to understand how amazing Gtk.TextView is, and I used something called Gtk.TextTag to highlight the diff text in appropriate colors (a rough sketch follows below):

Red → removed text

Green → inserted text
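
A minimal, hypothetical sketch of that technique in C (the tag names, colors and offsets are invented for illustration; the actual project code may differ):

/* Hypothetical sketch: create two text tags on a GtkTextBuffer and apply
 * them to color removed/inserted diff spans. Tag names and colors are
 * invented examples. */
#include <gtk/gtk.h>

static void
highlight_diff_span (GtkTextBuffer *buffer,
                     int            start_offset,
                     int            end_offset,
                     gboolean       removed)
{
  GtkTextIter start, end;
  GtkTextTagTable *table = gtk_text_buffer_get_tag_table (buffer);

  /* Create the tags on first use; creating a tag twice with the same name
   * is an error, so look them up first. */
  if (gtk_text_tag_table_lookup (table, "diff-removed") == NULL)
    {
      gtk_text_buffer_create_tag (buffer, "diff-removed",
                                  "background", "#fadad7", NULL);
      gtk_text_buffer_create_tag (buffer, "diff-inserted",
                                  "background", "#d7fad7", NULL);
    }

  gtk_text_buffer_get_iter_at_offset (buffer, &start, start_offset);
  gtk_text_buffer_get_iter_at_offset (buffer, &end, end_offset);
  gtk_text_buffer_apply_tag_by_name (buffer,
                                     removed ? "diff-removed" : "diff-inserted",
                                     &start, &end);
}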

To lay the groundwork for a three-way merge diff view, we now make sure to paint the diff bar in a relative manner.

For this, we introduced a reverse-direction property, which essentially lets the model know in which direction we will be painting the diff curves: right to left, or left to right.

https://gitlab.gnome.org/gaurav1999/diferencia/commit/79ff5406f3ecdca07c3caed204ca1aff13922a2c

Now comes the hard part…

Right now our model works on DiffLineCallback, where we basically pick up each line diff and paint the indicating curve for it. The disadvantage of doing this is that the curves overlap and it does not look good.

So I tried to do some pre-processing of the data to improve this situation,

and the results look pretty!

There is still some tuning required for the algorithm, and I hope we get this completed really soon.

Now, the hardest part!

Visa 😉 It’s really hard to arrange all of the documents required for a European visa; I’m really hoping my hard work pays off and I get to meet you all, GNOME folks, at GUADEC soon.

In the end, there is almost a week and a half left before this month ends, and I have learnt a lot, like always!

Array copying and extending in GLib 2.61.2

A slightly more in-depth post in the mini-series this time, about various new functions which Emmanuel Fleury has landed in GLib 2.61.2 (which is due to be released soon), based on some old but not-quite-finished patches from others.

There’s g_ptr_array_copy() and g_array_copy(); and also g_ptr_array_extend() and g_ptr_array_extend_and_steal().

g_ptr_array_copy() and g_array_copy() are obvious functions and it’s not clear why they haven’t been added before. They allow you to copy a GPtrArray or a GArray, including its contents.
When copying a GPtrArray, you pass in a GCopyFunc to copy each element (for example, by increasing its reference count). If the GCopyFunc is NULL, only the pointer values themselves are copied (a shallow copy).

For example,

g_autoptr(GPtrArray) object_array = g_ptr_array_new_with_free_func (g_object_unref);

for (gsize i = 0; i < 10; i++)
  g_ptr_array_add (object_array, create_new_object (i));

g_autoptr(GPtrArray) object_array_copy = g_ptr_array_copy (object_array, (GCopyFunc) g_object_ref, NULL);
/* object_array and object_array_copy now contain copies of the same elements, but
 * modifying one array will not modify the other */

The g_ptr_array_extend() functions are used to join one array onto the end of another. This means you can turn the following code, which joins the GObject elements of array2 onto the end of array1 and refs them all:

for (gsize i = 0; i < array2->len; i++)
  g_ptr_array_add (array1, g_object_ref (g_ptr_array_index (array2, i)));

into

g_ptr_array_extend (array1, array2, (GCopyFunc) g_object_ref, NULL);

If you no longer need array2, you can go further and use g_ptr_array_extend_and_steal() to avoid copying each element. This might be particularly beneficial when using string arrays, where each copy (a g_strdup()) is more expensive. So the following code:

g_autoptr(GPtrArray) array1 = g_ptr_array_new_with_free_func (g_free);
for (guint i = 0; i < 10; i++)
  g_ptr_array_add (array1, g_strdup_printf ("array1 %u", i));

g_autoptr(GPtrArray) array2 = g_ptr_array_new_with_free_func (g_free);
for (guint i = 100; i < 110; i++)
  g_ptr_array_add (array2, g_strdup_printf ("array2 %u", i));

for (gsize i = 0; i < array2->len; i++)
  g_ptr_array_add (array1, g_strdup (g_ptr_array_index (array2, i)));

would become:

g_autoptr(GPtrArray) array1 = g_ptr_array_new_with_free_func (g_free);
for (guint i = 0; i < 10; i++)
  g_ptr_array_add (array1, g_strdup_printf ("array1 %u", i));

g_autoptr(GPtrArray) array2 = g_ptr_array_new_with_free_func (g_free);
for (guint i = 100; i < 110; i++)
  g_ptr_array_add (array2, g_strdup_printf ("array2 %u", i));

g_ptr_array_extend_and_steal (array1, g_steal_pointer (&array2));
/* array2 has now been destroyed */

Available for hire, 2019 edition

Hey folks, I’m back and I’m looking for some new work to challenge me—preferably again for an organization that does something good and meaningful for the world. You can read my profile on my website, or keep reading here to discover what I’ve been up to in the past few years.

Sometime after the end of my second term on the GNOME Foundation, I was contacted by a mysterious computer vendor that ships a vanilla GNOME on their laptops, Purism.

A laptop that was sent to me for review

They wanted my help to get their business back on track, and so I did. I began with the easy, low-hanging fruit:

  • Reviewing and restructuring their public-facing content;
  • Doing in-depth technical reviewing of their hardware products, finding industrial design flaws and reporting extensively on ways the products could be improved in future revisions;
  • Using my photo & video studio to shoot official real-world images (some of which you can see below) for use in various marketing collaterals. I also produced and edited videos that played a strong part in increasing the public’s confidence in these products.

As my work was appreciated and I was effectively showing business acumen and leadership across teams & departments, I was shortly afterwards promoted from “director of communications” to CMO.

At the very beginning I had thought it would be a short-lived contract; in practice, my partnership with Purism lasted nearly three years, as I helped the company go from strength to strength. I guess I must’ve done something right 😉

Here are some of the key accomplishments during that time:

Fun designing a professional technical brochure for conferences
  • Grew the business’ gross monthly revenue significantly (by a factor of 10, and up to a factor of 55) over the course of two years.
  • Helped devise and run the Librem 5 phone crowdfunding campaign that raised over US$2.1 million, with significant press coverage. This proved initial market demand and reduced the risk of entering this new market. As the Linux Action Show commented during two of their episodes: “Wow, can we hire this PR department?” “They’ve done such a good job at promoting this!”
  • Made the public-facing brand shine. Over time, converted some of the toughest critics into avid supporters, and turned the company’s name into one that earned trust and commands respect in our industry.
  • Did extensive research of over a hundred events (tradeshows, conferences) aligned with Purism’s business; planned and optimized sponsorships and team attendance to a selection of these events. Designed bespoke brochures, manned product exhibit booths, etc.
  • Leveraged good news, mitigated setbacks, managed customers’ expectations.
  • Devised department budget approximations and projections in preparation for investment growth.
  • Provided support and business experience to the “operations” & “customer support” departments.
  • Defined the marketing department structure and wrote job descriptions for critical roles to recruit for. The director of sales commented that those were “the best job descriptions [he’d] ever seen, across 50 organizations”, so apparently marketeers can make great recruiting copywriters too 😉
  • Identified many marketing and community management infrastructure issues, oversaw the deployment of solutions.
  • Onboarded members of the sales & bizdev teams so that they could blend into the organization’s culture, tap into tacit knowledge and hit the ground running.
  • Coined the terms “Adaptive Design” and “Adaptive Applications” as a better, more precise terminology for convergent software in the GNOME platform. Yes, I was the team’s ghostwriter at times, and did extensive copy editing to turn technical reports into blog posts that would appeal to a wider audience while satisfying accuracy requirements.
  • Designed public surveys to gauge market demand for specific products in the B2C space, or to assess enterprise products & services requirements in the B2B space.
  • Etc. Etc.

That’s the gist of it.

With all that said, startups face challenges outside the scope of any single department. There comes a moment when your expertise has made all the difference it could in that environment, therefore making it necessary to conclude the work to seek a new challenge.

After spending a few weeks winding down that project and doing some personal R&D (there were lots of things to take care of in my backlog), I am now officially announcing my availability for hire. Or, in retweetable words:

If you know a business or organization that would benefit from my help, please feel free to share this blog post with them, or to contact me to let me know about opportunities.

The post Available for hire, 2019 edition appeared first on The Open Sourcerer.

ASG! 2019 CfP Re-Opened!

The All Systems Go! 2019 Call for Participation Re-Opened for ONE DAY!

Due to popular request we have re-opened the Call for Participation (CFP) for All Systems Go! 2019 for one day. It will close again TODAY, on 15 July 2019, at midnight Central European Summer Time! If you missed the deadline so far, we’d like to invite you to submit your proposals for consideration to the CFP submission site quickly! (And yes, this is the last extension, there's not going to be any more extensions.)


All Systems Go! is everybody's favourite low-level Userspace Linux conference, taking place in Berlin, Germany on September 20-22, 2019.

For more information please visit our conference website!

July 14, 2019

Initializing all local variables with Clang-Tidy

A common source of all kinds of bugs is using variables without properly initializing them. Out of all security problems this one is the simplest to fix: just convert all declarations of type int x; to int x=0;. The main reason for not doing that is laziness; manually going through existing code bases and adding initialization statements is boring and nobody wants to do that.

Fortunately nowadays we don't have to. Clang-tidy provides a nice toolkit for writing source code refactoring tools for C and C++. As an exercise I wrote a checker to do this. It is submitted upstream and is undergoing code review. Implementing it was fairly straightforward. There were only two major problems. The first one was that the existing documentation consists mostly of reference manuals. There are no easy-to-follow tutorials, only Doxygen pages. But if you dig around on the net and work on it a bit, you can get it working.

The second, and bigger, obstacle is that doing anything in the LLVM code base is sloooow. Everything in LLVM and Clang is linked into single, huge, monolithic libraries which take forever to link. Because of reasons I started doing this work on my secondary machine, which is a 4 core i5 with 16 gigs of RAM. I had to limit simultaneous linker jobs to 2 because otherwise it would just crash spectacularly with an out-of-memory error. Presumably it is impossible to compile the code base on a machine that has only 8 gigs of RAM. It seems that if you want to do any real development on LLVM you need a spare data center to run the compilations, which is unfortunate.

There is an evil hack to work around this, though. Set the CMake build type to Debug and then change CMAKE_CXX_FLAGS_DEBUG and CMAKE_C_FLAGS_DEBUG from -g to -Og. This makes the compilation faster and reduces memory usage to a fraction of the original. The downside is that there is no debug information, but it turns out to not be necessary when writing simple Clang-Tidy checkers.

Once all that is done the actual checker is almost trivial. This is the part that looks up all local variables without initial values:

void InitLocalVariablesCheck::registerMatchers(MatchFinder *Finder) {
  Finder->addMatcher(
      varDecl(unless(hasInitializer(anything()))).bind("vardecl"), this);
}

Then you determine what the initial value should be based on the type of the variable and add a warning and a fixit:

diag(location, "variable %0 is not initialized")
    << MatchedDecl;
diag(location, "insert initial value", DiagnosticIDs::Note)
    << FixItHint::CreateInsertion(
           location.getLocWithOffset(VarName.size()),
           InitializationString);

All in all this amounts to about 100 lines of code plus tests.

But what about performance?

The other major reason not to initialize variables is that it "may cause a runtime performance degradation of unknown magnitude". That is true, but with this tooling the degradation is no longer unknown. You can run the tool and then measure the results. This is trivial for all code bases that have performance benchmarks.

Lightweight i3 developer desktop with OSTree and chroots

Introduction

I’ve always liked a clean, slim, lightweight, and robust OS on my laptop (which is my only PC). I’ve been running the i3 window manager for years, with some custom configuration to enable the Fn keys and set up my preferred desktop session layout; initially on Ubuntu, and for the last two and a half years under Fedora (since I moved to Red Hat). I started with a minimal server install and then had a post-install script that installed the packages I need, restored my /etc files from git, and handled some other minor bits.

July 13, 2019

GSoC: First month working in Pitivi


Pitivi is a video editor, free and open source. Targeted at newcomers and professional users, it is minimalist and powerful. This summer I am fortunate to collaborate in Pitivi development through Google Summer of Code.

My goal is to implement an interval time system, with the support of Mathieu Duponchell, my mentor, and other members of the Pitivi community.

An interval time system is a common tool in many video editors. It will introduce new features in Pitivi. The user will be able to set up a range of time in the timeline editor, playback specific parts of the timeline, export the selected parts of the timeline, cut or copy clips inside the interval and zoom in/out the interval.

My proposal also includes the design of a marker system to store information at a certain time position.


Interestingly, we started working on the markers system. It was decided that it would be useful later in the interval implementation. To implement markers we have been working on GES, creating two new classes, GESMarkerContainer and GESMarker.

After defining the API I started implementing it. At this stage, Mathieu’s dedication in guiding me along the process was incredibly helpful. GES can be “quite” complicated for newcomers like me, but it also makes things more interesting! Thanks to my mentor I could focus and divide tasks into smaller, feasible chunks, both in GES and in Pitivi code.


My work until now can be summarized in these steps: implement something needed in the API, go to Pitivi and work there until I need something else from the API, then go back to the API…

After some redesigns, we now have a new row in the timeline, which allows us to insert markers, move and delete them, and edit their content, which for the moment is just a string. Of course they can be saved and restored. The UI is still provisional.


I am really enjoying the experience, my first time in open source. The code is huge, involves different technologies, and I have to work on different levels. But it feels challenging and the community is really supportive.

Getting closer

Since my last blog post I have been on a short vacation but I have also managed to make some progress on my GSoC project again with guidance from my mentor.

My latest work concerns the UI used for the Savestates Manager.

The current Savestates Manager

The available savestates are listed on the right. Note that every savestate has a thumbnail which is a screenshot of the game taken at the moment when the savestate was created. For me it was very satisfying to reach this milestone 🙂

Every savestate also has a creation date which is displayed in the menu, but that’s certainly not as eye-catching as the screenshots.

There are still many missing features and things that need improving (such as the date formatting) but with every commit I feel that I am getting closer to the finished project.

How the finished Savestates Manager is supposed to look

Next up I will be working on the menu header bar which is going to contain the Load, Delete and Cancel buttons.

July 12, 2019

Fork Awesome Sprites for Beast

Yesterday, I sat down to upgrade the Font Awesome package used by Beast’s new UI from 4.7.0 to 5.9.0. In the end, I found that the icons look way more crispy and professional in the 4.7.0 version. Here is an example (Font-Awesome 4 vs 5). The Font Awesome 5 package has some other…

Settings, in a sandbox world

GNOME applications (and others) are commonly using the GSettings API for storing their application settings.

GSettings has many nice aspects:

  • flexible data types, with GVariant
  • schemas, so others can understand your settings (e.g. dconf-editor)
  • overrides, so distros can tweak defaults they don’t like

And it has different backends, so it can be adapted to work transparently in many situations. One example of where this comes in handy is when we use a memory backend to avoid persisting any settings while running tests.
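
As a minimal sketch of that trick (the schema id org.example.App and the key greeting are hypothetical, and the schema still has to be compiled and installed for the test), a test can construct its GSettings against the memory backend explicitly:

/* Minimal sketch: use the memory backend in a test so nothing is persisted.
 * "org.example.App" and "greeting" are hypothetical; the schema must still
 * be available to the test. */
#define G_SETTINGS_ENABLE_BACKEND
#include <gio/gio.h>
#include <gio/gsettingsbackend.h>

static void
test_greeting_setting (void)
{
  GSettingsBackend *backend = g_memory_settings_backend_new ();
  GSettings *settings = g_settings_new_with_backend ("org.example.App", backend);
  g_autofree char *value = NULL;

  g_settings_set_string (settings, "greeting", "hello");
  value = g_settings_get_string (settings, "greeting");
  g_assert_cmpstr (value, ==, "hello");

  g_object_unref (settings);
  g_object_unref (backend);
}

Setting the GSETTINGS_BACKEND=memory environment variable achieves the same thing without code changes.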

The GSettings backend that is typically used for normal operation is the DConf one.

DConf

DConf features include profiles,  a stack of databases, a facility for locking down keys so they are not writable, and a single-writer design with a central service.

The DConf design is flexible and enterprisey – we have taken advantage of this when we created fleet commander to centrally manage application and desktop settings for large deployments.

But it is not a great fit for sandboxing, where we want to isolate applications from each other and from the host system.  In DConf, all settings are stored in a single database, and apps are free to read and write any keys, not just their own – plenty of potential for mischief and accidents.

Most of the apps that are available as flatpaks today are poking a ‘DConf hole’ into their sandbox to allow the GSettings code to keep talking to the dconf daemon on the session bus, and mmap the dconf database.

Here is how the DConf hole looks in the flatpak metadata file:

[Context]
filesystems=xdg-run/dconf;~/.config/dconf:ro;

[Session Bus Policy]
ca.desrt.dconf=talk

Sandboxes

Ideally, we want sandboxed apps to only have access to their own settings, and maybe readonly access to a limited set of shared settings (for things like the current font, or accessibility settings). It would also be nice if uninstalling a sandboxed app did not leave traces behind, like leftover settings  in some central database.

It might be possible to retrofit some of this into DConf. But when we looked, it did not seem easy, and would require reconsidering some of the central aspects of the DConf design. Instead of going down that road, we decided to take advantage of another GSettings backend that already exists, and stores settings in a keyfile.

Unsurprisingly, it is called the keyfile backend.

Keyfiles

The keyfile backend was originally created to facilitate the migration from GConf to GSettings, and has been a bit neglected, but we’ve given it some love and attention, and it can now function as the default GSettings backend inside sandboxes.

It provides many of the isolation aspects we want: Apps can only read and write their own settings, and the settings are in a single file, in the same place as all the application data:

~/.var/app/$APP/config/glib-2.0/settings/keyfile

One of the things we added to the keyfile backend is support for locks and overrides, so that fleet commander can keep working for apps that are in flatpaks.

For shared desktop-wide settings, there is a companion Settings portal, which provides readonly access to some global settings. It is used transparently by GTK and Qt for toolkit-level settings.

What does all this mean for flatpak apps?

If your application is not yet available as a flatpak, and you want to provide one, you don’t have to do anything in particular. Things will just work. Don’t poke a hole in your sandbox for DConf, and GSettings will use the keyfile backend without any extra work on your part.

If your flatpak is currently shipping with a DConf hole, you can keep doing that for now. When you are ready for it, you should

  • Remove the DConf hole from your flatpak metadata
  • Instruct flatpak to migrate existing DConf settings, by adding a migrate-path setting to the X-DConf section in your flatpak metadata. The value of the migrate-path key is the DConf path prefix where your application’s settings are stored.

Note that this is a one-time migration; it will only happen if the keyfile does not exist. The existing settings will be left in the DConf database, so if you need to do the migration again for whatever reason, you can simply remove the keyfile.

This is how the migrate-path key looks in the metadata file:

[X-DConf]
migrate-path=/org/gnome/builder/

Closing the DConf hole is what makes GSettings use the keyfile backend, and the migrate-path key tells flatpak to migrate settings from DConf – you need both parts for a seamless transition.

There were some recent fixes to the keyfile backend code, so you want to make sure that the runtime has GLib 2.60.6, for best results.

Happy flatpaking!

Update: One of the most recent fixes in the keyfile backend was to correct under what circumstances GSettings will choose it as the default backend. If you have problems where the wrong backend is chosen, as a short-term workaround, you can override the choice with the GSETTINGS_BACKEND environment variable.

Update 2: To add the migrate-path setting with flatpak-builder, use the following option:

--metadata=X-DConf=migrate-path=/your/path/


GNOME Software in Fedora will no longer support snapd

In my slightly infamous email to fedora-devel I stated that I would turn off the snapd support in the gnome-software package for Fedora 31. A lot of people agreed with the technical reasons, but failed to understand the bigger picture and asked me to explain myself.

I wanted to tell a little, fictional, story:

In 2012 the ISO institute started working on a cross-vendor petrol reference vehicle to reduce the amount of R&D different companies had to do to build and sell a modern, and safe, saloon car.

Almost immediately, Mercedes joins ISO, and starts selling the ISO car. Fiat joins in 2013, Peugeot in 2014 and General Motors finally joins in 2015 and adds support for Diesel engines. BMW, who had been trying to maintain the previous chassis they designed on their own (sold as “BMW Kar Koncept”), finally adopts the ISO car also in 2015. BMW versions of the ISO car use BMW-specific transmission oil as it doesn’t trust oil from the ISO consortium.

Mercedes looks to the future, and adds high-voltage battery support to the ISO reference car also in 2015, adding the required additional wiring and regenerative braking support. All the other members of the consortium can use their own high voltage batteries, or use the reference battery. The battery can be charged with electricity from any provider.

In 2016 BMW stops marketing the “ISO Car” like all the other vendors, and starts calling it “BMW Car” instead. At about the same time BMW adds support for hydrogen engines to the reference vehicle. All the other vendors can ship the ISO car with a hydrogen engine, but all the hydrogen must be purchased from a BMW-certified dealer. If any vendor other than BMW uses the hydrogen engines, they can’t use the BMW-specific heat shield which protects the fuel tank from exploding in the event of a collision.

In 2017 Mercedes adds traction control and power steering to the ISO reference car. It is enabled almost immediately and used by nearly all the vendors with no royalties and many customer lives are saved.

In 2018 BMW decides that actually producing vendor-specific oil for its cars is quite a lot of extra work, and tells all customers existing transmission oil has to be thrown away, but now all customers can get free oil from the ISO consortium. The ISO consortium distributes a lot more oil, but also has to deal with a lot more customer queries about transmission failures.

In 2019 BMW builds a special cut-down ISO car, but physically removes all the petrol and electric functionality from the frame. It is rebranded as “Kar by BMW”. It then sends a private note to the chair of the ISO consortium that it’s not going to be using the ISO car in 2020, and that it’s designing a completely new “Kar” that only supports hydrogen engines and does not have traction control or seatbelts. The explanation given was that BMW wanted a vehicle that was tailored specifically for hydrogen engines. Any BMW customers using petrol or electricity in their car must switch to hydrogen by 2020.

The BMW engineers that used to work on ISO Car have been shifted to work on Kar, although have committed to also work on Car if it’s not too much extra work. BMW still want to be officially part of the consortium and to be able to sell the ISO Car as an extra vehicle to the customer that provides all the engine types (as some customers don’t like hydrogen engines), but doesn’t want to be seen to support anything other than a hydrogen-based future. It’s also unclear whether the extra vehicle sold to customers would be the “ISO Car” or the “BMW Car”.

One ISO consortium member asks whether they should remove hydrogen engine support from the ISO car as they feel BMW is not playing fair. Another consortium member thinks that the extra functionality could just be disabled by default and any unused functionality should certainly be removed. All members of the consortium feel like BMW has pushed them too far. Mercedes stop selling the hydrogen ISO Car model stating it’s not safe without the heat shield, and because BMW isn’t going to be supporting the ISO Car in 2020.

Google Summer of Code with Pitivi

GSoC with Pitivi

This summer I am working under the mentorship of Alexandru Băluț to improve the user experience of the Effects feature in Pitivi.

In the first phase of my project, I worked on redesigning Pitivi’s “Effect Library” to allow users to easily find, organise and utilize their desired effects.

Current Effect Library UI

My first assignment was to remove the ComboBox at the top and replace it with separate Expanders for the various categories. In the process, we also decided to move away from showing Audio and Video effects separately, instead choosing to integrate “Audio” just as another category. This enabled us to present a hierarchical yet simple interface which also allowed the user to have multiple categories open at once.

The next order of business was to replace the tiny 4:3 thumbnails we have for the effects with larger and more expressive 16:9 ones (Thanks to Valentin Orient for contributing these beautiful new thumbnails!).

My final task for this phase was to add a “Favourites” feature which would allow the user to gather all the effects of their choice in a separate view for quick and easy access. For this, I added a button to the effects which enables the user to effortlessly check or change its “favorited” state.

New Effect Library UI

That concludes my work on the “Effect Library”, and I will now be moving on to renovating the “Clip Tab” for the next phase.

If you wish to reach me, you can find me in #pitivi and #newcomers on GIMPNet as yat_irc.

July 10, 2019

Newcomers workshop @ GUADEC 2019

This year’s GUADEC is approaching and I can already feel people’s excitement while talking about our annual conference.  It is important that we benefit from having so many GNOMies together in the same location to help the next generation to get started in our project. For this reason, we are planning a workshop during the first day of the BoFs (check our wiki page for more info).

The Newcomers Workshop aims at helping newcomers solve their first Gitlab issue. Historically, Carlos Soriano has championed the initiative (thank Carlos when you see him) and I have participated, guiding dozens of people in the universities here in Brno. In the past, other community members were organizing the workshop all over the world. We plan to expand the initiative by having even more GNOME contributors organizing similar events at a local level.

In the workshop we go step by step through the GNOME Newcomers Guide, making sure nobody gets stuck on anything. As simple as that. The more GNOME developers participate the better, since we can benefit from their project-specific expertise.

The workshop is taking place on August 26th, and anybody interested in making their first contribution is welcome! Save the date!

Writing tests for Rust HTTP source | GSoC 2019


Writing tests is an important part of plugin development in GStreamer, or rather of almost all software development. There are several reasons which clearly tell us why writing test cases is important:

  1. To point out the defects and errors that were made during the development phases
  2. To check whether all features are working at every change
  3. To figure out unhandled scenarios in our software

My GSoC mentor, Sebastian Dröge, coded the skeleton of the test with a basic unit test case for the HTTP source plugin (aka reqwesthttpsrc). Here is the link to the merge request. The test checks whether we correctly receive the data sent by the server: we spin up a hyper HTTP server which responds with "Hello World", then use our plugin to receive the data and compare both. The interesting thing here is the custom test Harness, which can be used to initialize an HTTP server with the required behavior and our HTTP element with the required properties set. We can use this to create the desired Harness for any test case.

After studying and going through the existing test cases, I started writing a test case to check whether the 'Not Found' error (HTTP 404) is handled correctly. So I used the custom test harness to create an HTTP server which returns a 404 error, and an HTTP plugin with default settings. Then I set the plugin to PLAY and wait for an error. Using the assert_eq! macro I compare the results. Here is the gitlab link to the merge request.

Before going to the next test case, let me give a brief overview of states. A state describes whether the element instance is initialized, whether it is ready to transfer data, and whether it is currently handling data. There are 4 possible states for an element: NULL, READY, PAUSED and PLAYING. Please refer to the documentation for more details.

The next test case has two parts. This test is basically about seeking and checking the data stream. Here I have made the HTTP server return a very large buffer, which I will receive asynchronously chunk by chunk. These are the scenarios:
  1. Seeking after the element reached READY state.
  2. Seeking after a buffer was received already.

In the first case, the element is initially set to READY and then I seek. After that I set the element to PLAY. Now I check the segment to verify that the seek happened properly, and check the received buffer to make sure the data is correct. Here is the gitlab link.

In the second scenario, the element is initially set to PLAY as usual; I wait for a buffer and then I seek. Finally I check as I did in the case above. Additionally, I make sure there is no old buffer arriving before the segment event.

Writing tests was a great way for me to understand the flow, the details of the APIs, and how data is handled. I learned a lot more during this period compared to the previous weeks. More posts coming up, buckle up ;)


July 09, 2019

Sprint 3: Calendar management dialog, cleanups and bugfixes

The Sprint series comes out every 3 weeks or so. Focus will be on the apps I maintain (Calendar, To Do, and Settings), but it may also include other applications that I contribute to.

GNOME Calendar: the new calendar management dialog landed

It’s landed! The massive rewrite of the calendar management dialog reached a good enough shape to land, and so it happened:

The dialog is a fresh new take on the previous one; the individual online accounts rows were removed in favor of delegating it all to GNOME Settings’ Online Accounts panel, navigation is easier and simpler, adding new calendars is a more intuitive operation, and it’s possible to toggle calendars right from the first page.

I’m pretty happy with the rework itself, and splitting it in pages and a controller was definitely the right choice. It allowed implementing the same functionality in a much more well organized way.

Next step is another much necessary rework: remote calendar discovery.

GNOME To Do: polishing rough edges

GNOME To Do received a round of bugfixes and minor improvements to the list archive feature. In addition to that, the triweekly GTK4 update happened, and multiple crashes were fixed.

GNOME To Do is now good enough for me to use it daily, although more improvements are necessary to make it a pleasant experience.

GNOME Settings: nothing

That’s right; this GNOME Settings week was dedicated entirely to something else. There will be an important blog post about it later, but for now, suffice to say it’s an important feature.

Lessons learned & concerns

There were various conclusions I could draw from this sprint.

The first and most concerning one is that, even though dedicating one week per project has increased productivity by a factor of N > 5, it is honestly not possible to keep up with this rhythm of work. This is only the third sprint and I already feel that the energy to keep up with this schedule is fading. I want to try and reduce the scope of the changes that I pick up from now on; big changes and new features will have 2 or 3 weeks assigned to them instead of 1.

The second conclusion is that tasks that require design review can only be done iteratively. The traditional model of asking a designer for something, waiting for a mockup to be created, then working on it, is too slow and has too many steps to achieve something. Doing it iteratively means not only that designers get to see the result and suggest changes immediately; it also means developers spend less time switching contexts, and changes are much easier. This is something I will definitely keep in mind from now on.

bolt 0.8 with support for IOMMU protection

A new release of bolt is out: 0.8 - I owe it to the MM U!. It contains a big new feature, which is support for IOMMU, a new boltctl config command, and a bolt-mock script to interactively test boltd and components that interact with it. And of course the usual bugfixes and improvements.

IOMMU support

I already wrote about the general idea when the Thunderclap paper was published. But to quickly refresh everyone's memory: Thunderbolt, via PCIe, can directly access the main memory (DMA). This opens the door to attacks; the recent Thunderclap attack is a prominent example and demonstration of such an attack. To mitigate DMA attacks, security levels were introduced with Thunderbolt version 3. These new security levels require devices to be authorized before they can be used. On newer hardware and recent kernel versions, another mitigation scheme was introduced that leverages the input–output memory management unit (IOMMU). The basic idea is to allow direct memory access for Thunderbolt devices only to certain safe memory regions and to prevent devices from accessing any memory area outside those. The availability of that feature is communicated by the kernel to userspace via the iommu_dma_protection sysfs attribute.

If support is active, boltd will change its behavior in a few ways. This is because we assume that as long as IOMMU protection is enabled, it is safe to authorize devices, even without asking the user. New devices that are not authorized are therefore automatically enrolled, but with a new iommu policy. In the case that IOMMU is turned off again, devices with this iommu policy won't automatically be authorized by boltd and will require explicit user interaction. Additionally, devices that are new but already authorized by the firmware are now automatically imported, so we always have a record of devices that were attached to the system. Anybody who is interested in even more (technical) details can read bolt issues #128 (iommu) and #137 (auto-import).

boltctl config

The boltctl command line tool gained a new sub-command, boltctl config, to list, read and write global, domain and device properties.



boltctl config can be used to list (--describe), get and set properties.

For example, disabling authorization via boltd can now be done with boltctl config auth-mode disable. This corresponds to the "Direct Access" setting in GNOME Settings. A list of all available properties can be queried via boltctl config --describe. For more details see also the boltctl(1) man page.

the road to 1.0

IOMMU support was the last major item on the TODO list. There are a few bigger things that should get into 0.9, the biggest one probably being exit-on-idle (#92)[1]. I want all features to land in 0.9, and then 1.0 should just be a bug fix release a few months after 0.9. All the remaining features are "nice to have" and not really pressing, so I will continue working on them, but more on the side. That also means they are all up for grabs if someone else wants to help.

Footnotes:

  1. NB: GNOME Shell and Settings watch for the dbus service but don't request it to be started, so if there is no Thunderbolt hardware present in the system boltd should not be running at all.

July 08, 2019

Gtk-rs tutorial

Leonora Tindall has written a very nice tutorial on Speedy Desktop Apps With GTK and Rust. It covers prototyping a dice roller app with Glade, writing the code with Rust and the gtk-rs bindings, and integrating the app into the desktop with a .desktop file.

Bolt 0.8 update

Christian recently released bolt 0.8, which includes IOMMU support. The Ubuntu security team seemed eager to see that new feature available so I took some time this week to do the update.

The new version also features a new bolt-mock utility and installed tests. I used the opportunity, while updating the package, to add an autopkgtest based on the new bolt-tests binary; hopefully that will help us make sure our tb3 support stays solid in the future ;-)

The update is available in Debian Experimental and Ubuntu Eoan, enjoy!

Battle of the Bilerps: Image Scaling on the CPU

I’ve been on a quest for better bilerps lately. “Bilerp” is, of course, a contraction of “bilinear interpolation“, and it’s how you scale pictures when you’re in a hurry. The GNOME Image Viewer (née Eye of GNOME) and ImageMagick have both offered somewhat disappointing experiences in that regard; the former often pauses noticeably between the initial nearest-neighbor and eventual non-awful scaled images, but way more importantly, the latter is too slow to scale animation frames in Chafa.

So, how fast can CPU image scaling be? I went looking, and managed to produce some benchmarks — and! — code. Keep reading.

What’s measured

The headline reference falls a little short of the gory details: The practical requirement is to just do whatever it takes to produce middle-of-the-road quality output. Bilinear interpolation starts to resemble nearest-neighbor when you reduce an image by more than 50%, so below that threshold I let the implementations use the fastest supported algorithm that still looks halfway decent.
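
For reference, here is a minimal, unoptimized sketch of what bilinear interpolation of a single output sample looks like for one 8-bit channel. None of the libraries below use this exact code; real implementations add fixed-point arithmetic, SIMD and alpha handling:

/* Unoptimized reference sketch of one bilinearly interpolated sample from a
 * single-channel 8-bit image; real implementations are fixed-point and SIMD. */
#include <stdint.h>

static uint8_t
bilerp_sample (const uint8_t *pixels, int width, int height,
               double x, double y)
{
    int x0 = (int) x, y0 = (int) y;
    int x1 = x0 + 1 < width  ? x0 + 1 : x0;   /* clamp at the right edge */
    int y1 = y0 + 1 < height ? y0 + 1 : y0;   /* clamp at the bottom edge */
    double fx = x - x0, fy = y - y0;          /* fractional weights */

    double top    = pixels[y0 * width + x0] * (1.0 - fx)
                  + pixels[y0 * width + x1] * fx;
    double bottom = pixels[y1 * width + x0] * (1.0 - fx)
                  + pixels[y1 * width + x1] * fx;

    return (uint8_t) (top * (1.0 - fy) + bottom * fy + 0.5);
}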

There are other caveats too. I’ll go over those in the discussion of each implementation.

I checked for correctness issues/artifacts by scaling solid-color images across a large range of sizes. Ideally, the color should be preserved across the entire output image, but fast implementations sometimes take shortcuts that cause them to lose low-order bits due to bad rounding or insufficient precision.

I ran the benchmarks on my workstation, which is an i7-4770K (Haswell) @ 3.5GHz.

Performance summary

Image scaling performance plot (mid-size)

This plot consists of 500 samples per implementation, each of which is the fastest out of 50 runs. The input image is 2000×2000 RGBA pixels at 8 bits per channel. I chose additional parameters (channel ordering, premultiplication) to get the best performance out of each implementation. The image is scaled to a range of sizes (x axis) and the lowest time taken to produce a single output image at each size is plotted (y axis). A line is drawn through the resulting points.
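
The “fastest out of 50 runs” measurement boils down to a best-of-N timing loop; here is a minimal sketch in Rust (not the actual harness used for these plots):

use std::time::{Duration, Instant};

// Run `f` `runs` times and keep the fastest wall-clock time; taking the
// minimum filters out scheduling noise and cache-cold outliers.
fn best_of<F: FnMut()>(runs: usize, mut f: F) -> Duration {
    (0..runs)
        .map(|_| {
            let start = Instant::now();
            f();
            start.elapsed()
        })
        .min()
        .expect("runs must be greater than zero")
}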

Here’s one more:

Image scaling performance plot (large size)

It’s the same thing, but with a huge input and smaller outputs. As you can see, there are substantial differences. It’s hard to tell from the plot, but Smolscale MT averages about 50ms per frame. More on that below. But first, a look at another important metric.

Output quality

Image scaling quality comparison

Input image. It’s fair to say that one of these is not like the others.

Discussion

GDK-Pixbuf

GDK-Pixbuf is the traditional GNOME image library. Despite its various warts, it’s served the project well for close to two decades. You can read more about it in Federico’s braindump from last year.

For the test, I used the gdk-pixbuf 2.38.1 packages in openSUSE Tumbleweed. With the scaling algorithm set to GDK_INTERP_BILINEAR, it provides decent quality at all sizes. I don’t think that’s strictly in line with how bilinear interpolation is supposed to work, but hey, I’m not complaining.

It is, however, rather slow. In fact, at scaling factors of 0.5 and above, it’s the slowest in this test. That’s likely because it only supports unassociated alpha, which forces it to do extra work to prevent colors from bleeding disproportionately from pixels of varying transparency. To be fair, the alpha channel when loaded from a typical image file is usually unassociated, and if I’d added the overhead of premultiplying it to the other implementations, it would’ve bridged some or most of the performance difference.

I suspect it’s also the origin of the only correctness issue I could find; color values from completely transparent pixels will be replaced with black in the output. This makes sense because any value multiplied by a weight of zero will be zero. It’s mostly an issue if you plan to change the transparency later, as you might do in the realm of very serious image processing. And you won’t be using GDK-Pixbuf in such a context, since GEGL exists (and is quite a bit less Spartan than its web pages suggest).

Pixman

Pixman is the raster image manipulation library used in X.Org and Cairo, and therefore indirectly by the entire GNOME desktop. It supports a broad range of pixel formats, transforms, composition and filters. It’s otherwise quite minimal, and works with premultiplied alpha only.

I used the pixman 0.36.0 packages in openSUSE Tumbleweed. Pixman subscribes to a stricter definition of bilerp, so I went with that for scaling factors 0.5 and above, and the box filter for smaller factors. Output quality is impeccable, but it has trouble with scaling factors lower than 1/16384, and when scaling huge images it will sometimes leave a column of uniformly miscolored pixels at one extreme of the image. I’m chalking that up to limited precision.

Anyhow, the corner cases are more than made up for by Pixman’s absolutely brutal bilerp (proper) performance. Thanks to its hand-optimized SIMD code, it’s the fastest single-threaded implementation in the 0.5x-1.0x range. However, the box filter does not appear to be likewise optimized, resulting in one of the worst performances at smaller scaling factors.

SDL_gfx

The Simple Directmedia Layer is a cross-platform hardware abstraction layer originating in the late 90s as a vehicle for games development. While Loki famously used it to port a bunch of games to Linux, it’s also been a boon to more recent independent games development (cf. titles like Teleglitch, Proteus, Dwarf Fortress). SDL_gfx is one of its helper libraries. It has a dead simple API and is packaged for pretty much everything. And it’s games stuff, so y’know, maybe it’s fast?

I tested libSDL_gfx 2.0.26 and libSDL2_gfx 1.0.4 from Tumbleweed. They perform the same: Not great. Below a scaling factor of 0.5x I had to use a combination of zoomSurface() and shrinkSurface() to get good quality. That implies two separate passes over the image data, which explains the poor performance at low output sizes. However, zoomSurface() alone is also disappointingly slow.

I omitted SDL from the sample output above to save some space, but quality-wise it appears to be on par with GDK-Pixbuf and Pixman. There were no corner cases that I could find, but zoomSurface() seems to be unable to work with surfaces bigger than 16383 pixels in either dimension; it returns a NULL surface if you go above that.

It’s also worth noting that SDL’s documentation and pixel format enums do not specify whether the alpha channel is supposed to be premultiplied or unassociated. The alpha blending formulas seem to imply unassociated, but zoomSurface() and shrinkSurface() displayed color bleeding with unassociated-alpha input in my tests.

Skia

Skia is an influential image library written in C++; Mozilla Firefox, Android and Google Chrome all use it. It’s built around a canvas, and supports drawing shapes and text with structured or raster output. It also supports operations directly on raster data — making it a close analogue to Cairo and Pixman combined.

It’s not available as a separate package for my otherwise excellent distro, so I built it from Git tag chrome/m76 (~May 2019). It’s very straightforward, but you need to build it with Clang (as per the instructions) to get the best possible performance. So that’s what I did.

I tested SkPixmap.scalePixels(), which takes a quality setting in lieu of the filter type. That’s perfect for our purposes; between kLow_SkFilterQuality, kMedium_SkFilterQuality and kHigh_SkFilterQuality, medium is the one we want. The documentation describes it as roughly “bilerp plus mip-maps”. The other settings are either too coarse (nearest-neighbor or bilerp only) or too slow (bicubic). The API supports both premultiplied and unassociated alpha. I used the former.

So, about the apparent quality… In all honesty — it’s poor, especially when the output is slightly smaller than 1/2ⁿ relative to the input, i.e. ½*insize-1, ¼*insize-1, etc. I’m not the first to make this observation. Apart from that, there seems to be a precision issue when working with images (input or output) bigger than 16383 pixels in either dimension. E.g. the color #54555657 becomes #54545454, #60616263 becomes #60606060 and so on.

At least it’s not slow. Performance is fairly respectable across the board, and it’s one of the fastest solutions below 0.5x.

Smolscale

Smolscale is a smol piece of C code that does image scaling, channel reordering and (un)premultiplication. It’s this post’s mystery contestant, and you’ve never heard of it before because up until now it’s been living exclusively on my local hard drive.

I wrote it specifically to meet the requirements I laid out before: Fast, middling quality, no artifacts, handles input/output sizes up to 65535×65535. Smolscale MT is the same implementation, just driven from multiple threads using its row-batch interface.

As I mentioned above, when running in 8 threads it’s able to process a 16383×16383-pixel image to a much smaller image in roughly 50ms. Since it samples every single pixel, that corresponds to about 5.3 gigapixels per second, or ~21 gigabytes per second of input data. At that point it’s close to maxing out my old DDR3-1600 memory (its theoretical transfer rate is 12.8GB/s, times two for dual channel ~= 26GB/s).
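
As a rough sanity check of those figures (assuming 4 bytes per RGBA pixel):

16383 × 16383 ≈ 2.68 × 10⁸ pixels per frame
2.68 × 10⁸ px ÷ 0.050 s ≈ 5.4 × 10⁹ px/s, i.e. roughly 5.3 gigapixels per second
5.4 × 10⁹ px/s × 4 B/px ≈ 21 GB/s of input data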

I’m going to write about it in detail at some point, but I’ll save that for another post. In the meantime, I put up a Github dump.

Bonus content!

I found a 2012-issue Raspberry Pi gathering dust in a drawer. Why not run the benchmarks on that too?

Image scaling performance plot (ARMv6)

I dropped Skia from this run due to build issues that looked like they’d be fairly time consuming to overcome. Smolscale suffers because it’s written for 64-bit registers and multiple cores, and this is an ARMv6 platform with 32-bit registers and a single core. Pixman is still brutally efficient; it has macro templates for either register size. Sweet!

Settings: new Search panel

I haven’t been working on GNOME Settings for quite some time now. Currently, I am focusing mostly on GNOME Boxes, Usage, and Fedora Silverblue. To be fair I still have some love for Settings and I enjoy context-switching once in a while to hack on code bases which I don’t face daily. Unfortunately I can’t do this more often.

A few years ago I pushed a WIP version of the Settings “Search” panel that never got merged because we were in a moment of transition in the project, and at the time we thought that introducing Drag & Drop capabilities to GtkListBox would still make sense in gtk3. Fast forward, we are far from even starting to port Settings to gtk4, but people got to use the panels! For this reason, I rebased and iterated a bit over the Search panel in order to make it identical to the mockups. The final result is previewed below and will be available in our next stable release, 3.34.

P.S.: I haven’t blogged much in the last couple of years mostly because I always felt that blog posts required a certain amount of *amazingness*. Now I’m convinced that small pills, highlighting something as small as the work above, have a place in this blog (better than not blogging at all). :-)

Prioritization of bug reports and feature requests in Free and Open Source software projects

A few months ago I wrote an essay on software development planning in FOSS projects. It tries to answer the following questions:

  • Why has nobody fixed this issue yet?
  • Why wasn’t I consulted about these changes?
  • How can I influence what is worked on?

Some parts of the essay are specific to Wikimedia but I hope it can also be useful for other communities. It is published under CC BY-SA 3.0 so feel free to remix.

If you have a similar document for your project, please feel free to share a link in the comments.

Creating hardware where no hardware exists

The laptop industry was still in its infancy back in 1990, but it already faced a core problem that we still do today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). These were the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough


Which smart bulbs should you buy (from a security perspective)

People keep asking me which smart bulbs they should buy. It's a great question! As someone who has, for some reason, ended up spending a bunch of time reverse engineering various types of lightbulb, I'm probably a reasonable person to ask. So. There are four primary communications mechanisms for bulbs: wifi, bluetooth, zigbee and zwave. There's basically zero compelling reasons to care about zwave, so I'm not going to.

Wifi


Advantages: Doesn't need an additional hub - you can just put the bulbs wherever. The bulbs can connect out to a cloud service, so you can control them even if you're not on the same network.
Disadvantages: Only works if you have wifi coverage, each bulb has to have wifi hardware and be configured appropriately.
Which should you get: If you search Amazon for "wifi bulb" you'll get a whole bunch of cheap bulbs. Don't buy any of them. They're mostly based on a custom protocol from Zengge and they're shit. Colour reproduction is bad, there's no good way to use the colour LEDs and the white LEDs simultaneously, and if you use any of the vendor apps they'll proxy your device control through a remote server with terrible authentication mechanisms. Just don't. The ones that aren't Zengge are generally based on the Tuya platform, whose security model is to have keys embedded in some incredibly obfuscated code and hope that nobody can find them. TP-Link make some reasonably competent bulbs but also use a weird custom protocol with hand-rolled security. Eufy are fine but again there's weird custom security. Lifx are the best bulbs, but have zero security on the local network - anyone on your wifi can control the bulbs. If that's something you care about then they're a bad choice, but also if that's something you care about maybe just don't let people you don't trust use your wifi.
Conclusion: If you have to use wifi, go with lifx. Their security is not meaningfully worse than anything else on the market (and they're better than many), and they're better bulbs. But you probably shouldn't go with wifi.

Bluetooth


Advantages: Doesn't need an additional hub. Doesn't need wifi coverage. Doesn't connect to the internet, so remote attack is unlikely.
Disadvantages: Only one control device at a time can connect to a bulb, so harder to share. Control device needs to be in Bluetooth range of the bulb. Doesn't connect to the internet, so you can't control your bulbs remotely.
Which should you get: Again, most Bluetooth bulbs you'll find on Amazon are shit. There's a whole bunch of weird custom protocols and the quality of the bulbs is just bad. If you're going to go with anything, go with the C by GE bulbs. Their protocol is still some AES-encrypted custom binary thing, but they use a Bluetooth controller from Telink that supports a mesh network protocol. This means that you can talk to any bulb in your network and still send commands to other bulbs - the dual advantages here are that you can communicate with bulbs that are outside the range of your control device and also that you can have as many control devices as you have bulbs. If you've bought into the Google Home ecosystem, you can associate them directly with a Home and use Google Assistant to control them remotely. GE also sell a wifi bridge - I have one, but haven't had time to review it yet, so I make no assertions about its competence. The colour bulbs are also disappointing, with much dimmer colour output than white output.

Zigbee


Advantages: Zigbee is a mesh protocol, so bulbs can forward messages to each other. The bulbs are also pretty cheap. Zigbee is a standard, so you can obtain bulbs from several vendors that will then interoperate - unfortunately there are actually two separate standards for Zigbee bulbs, and you'll sometimes find yourself with incompatibility issues there.
Disadvantages: Your phone doesn't have a Zigbee radio, so you can't communicate with the bulbs directly. You'll need a hub of some sort to bridge between IP and Zigbee. The ecosystem is kind of a mess, and you may have weird incompatibilities.
Which should you get: Pretty much every vendor that produces Zigbee bulbs also produces a hub for them. Don't get the Sengled hub - anyone on the local network can perform arbitrary unauthenticated command execution on it. I've previously recommended the Ikea Tradfri, which at the time only had local control. They've since added remote control support, and I haven't investigated that in detail. But overall, I'd go with the Philips Hue. Their colour bulbs are simply the best on the market, and their security story seems solid - performing a factory reset on the hub generates a new keypair, and adding local control users requires a physical button press on the hub to allow pairing. Using the Philips hub doesn't tie you into only using Philips bulbs, but right now the Philips bulbs tend to be as cheap (or cheaper) than anything else.

But what about


If you're into tying together all kinds of home automation stuff, then either go with Smartthings or roll your own with Home Assistant. Both are definitely more effort if you only want lighting.

My priority is software freedom


Excellent! There are various bulbs that can run the Espurna or AiLight firmwares, but you'll have to deal with flashing them yourself. You can tie that into Home Assistant and have a completely free stack. If you're ok with your bulbs being proprietary, Home Assistant can speak to most types of bulb without an additional hub (you'll need a supported Zigbee USB stick to control Zigbee bulbs), and will support the C by GE ones as soon as I figure out why my Bluetooth transmissions stop working every so often.

Conclusion


Outside niche cases, just buy a Hue. Philips have done a genuinely good job. Don't buy cheap wifi bulbs. Don't buy a Sengled hub.

(Disclaimer: I mentioned a Google product above. I am a Google employee, but do not work on anything related to Home.)


July 06, 2019

Andaluh-rs, a lib to transcript Spanish to Andaluh

Spain is a big country with a lot of languages. We have the official one for the whole country, Spanish or Castilian (es-ES), and other official languages spoken in particular regions, like Galician (gl-ES) in Galicia, Basque (eu-ES) in the Basque Country, and Catalan (ca-ES) in Catalonia.

But we have more languages that don't have the same official support, such as Valencian, Aragonese, Asturian, and more. And we also have some dialects, like Andalusian.

All of these languages were discredited during the dictatorship period, when the only official language was Spanish and the others were treated as vulgar or uneducated speech; in some cases their use was prohibited.

When democracy arrived, some regions spent a lot of resources trying to recover their language, and they keep supporting it to this day with official institutions and lessons in the official education system. But in other regions the stigma has continued until today.

That's the case of Andalusian, which right now is not considered a language but a dialect. In any case, this dialect is treated as an uncultivated way of speaking, used by the illiterate. There are a lot of Spanish movies and series where the character who speaks Andalusian is the illiterate one, someone from the countryside, or the family's servant.

Be proud of it

In Andalusia we have a lot of culture, literature and music made in Andalusian, but we don't have a way to write it down. Many people try to avoid the accent and the local words to look more cultured, and because there is no writing system we write in Spanish, which makes it hard to write lyrics, poetry and other kinds of literature. Andalusian is not only a way of talking: some words are shorter, and we have contractions and other vowel sounds, so if you write it in Spanish it's not the same as what is spoken, some information is lost, and in music or poetry, for example, the metre doesn't match.

There's a movement trying to define a written Andalusian and to promote the language, trying to make people proud of it and to talk and write without complexes.

Translator, here comes the code

And there's a group of developers working on some tools to provide direct translation from Spanish, and other tools to make writing in Andalusian easier.

I like to write code and I'm always happy to find new problems to solve, to learn new languages and tools, and to spend some time trying to code something I haven't done before. So I decided to write a translator from Spanish to Andaluh using Rust, and I've created the andaluh-rs lib.

The translator is more or less simple: there are some rules that should be applied from top to bottom, which basically replace certain groups of letters. There's an implementation in Python that uses regular expressions for that. There are a lot of regular expressions, so I thought it could be easier to use a parser, and I went with the pest parser.

// suppress muted /h/

H = { ("h" | "H") }
initial_h = { H ~ letter }
CH = { C ~ H }
inner_ch = { CH ~ letter }
inner_h = { !inner_ch ~ H ~ letter }
hua = { H ~ ("ua" | "UA" | "Ua" | "uA") }
hue = { H ~ ("ue" | "UE" | "Ue" | "uE") }
noh = { !CH ~ !H ~ letter }
h = _{ ((sp|SOI)? ~ initial_h* ~ ((hua | hue | inner_ch | inner_h) | noh+)+)+ }

I've defined each rule in the pest format, so I have a parser for each rule, and then I can replace the matched text with the correct replacement.

pub fn h_rule(input: &str) -> Result<String, Error> {
    rule!(Rule::h, &input, Some(&defs::H_RULES_EXCEPT),
        Rule::initial_h | Rule::inner_h => |pair: Pair<Rule>| {
            let s = pair.as_str();
            let h = slice!(s, 0, 1);
            let next = slice!(s, 1);
            keep_case(&next, &h)
        },
        Rule::hue => |pair: Pair<Rule>| {
            keep_case("güe", &pair.as_str())
        },
        Rule::hua => |pair: Pair<Rule>| {
            keep_case("gua", &pair.as_str())
        })
}

To simplify the code, I've defined the rule! macro with the code shared by all the rules:

macro_rules! rule {
    ($rule: expr, $input: expr, $( $($t: pat)|* => $r: expr ),* ) => {{
        let map: Option<HashMap<&str, &str>> = None;
        rule!($rule, $input, map, $( $($t)|* => $r ),*)
    }};
    ($rule: expr, $input: expr, $map: expr, $( $($t: pat)|* => $r: expr ),* ) => {{
        let (repl, input) = match $map {
            Some(ref m) => replace_exceptions($input, m),
            None => (vec![], $input.to_string())
        };

        let pairs = AndaluhParser::parse($rule, &input)?;
        let mut output: Vec<String> = vec![];

        for pair in pairs {
            let chunk = match pair.as_rule() {
                $( $($t)|* => {
                    $r(pair)
                } ),*
                _ => {
                    String::from(pair.as_str())
                },
            };
            output.push(chunk);
        }

        let mut outstr = output.join("");

        if $map.is_some() {
            outstr = replace_exceptions_back(&outstr, repl);
        }

        Ok(outstr)
    }}
}

And because Spanish and Andaluh text is Unicode and Rust Strings can't simply be sliced by character position, I've used the unicode_segmentation crate and defined some utility macros to get the real String length and to take slices of the String.

macro_rules! chars {
    ($input: expr) => {
        UnicodeSegmentation::graphemes($input, true)
    }
}

macro_rules! slice {
    ($input: expr, $start: expr, $end: expr) => {
        chars!($input)
            .skip($start)
            .take($end - $start)
            .collect::<String>()
    };
    ($input: expr, $start: expr) => {
        chars!($input)
            .skip($start)
            .collect::<String>()
    }
}

macro_rules! len {
    ($input: expr) => {
        chars!($input).count()
    }
}

With all this done, we only have to apply all rules, in the correct order, to the input string so we can get the translated String as output.

pub fn epa(input: &str) -> Result<String, Error> {
    // TODO: escape links
    let rules = [
        h_rule,
        x_rule,
        ch_rule,
        gj_rule,
        v_rule,
        ll_rule,
        l_rule,
        psico_rule,
        vaf_rule,
        word_ending_rule,
        digraph_rule,
        exception_rule,
        word_interaction_rule,
    ];

    let mut output = input.to_string();
    for r in rules.iter() {
        let out = r(&output)?;
        output = out.to_string();
    }

    Ok(output)
}
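
Using the lib then comes down to a single call; here is a hypothetical usage example (crate path and error formatting are assumptions on my part):

use andaluh::epa;

fn main() {
    // epa() applies all the rules in order and returns the transcribed text.
    match epa("¿Qué pasa, chiquillo?") {
        Ok(out) => println!("{}", out),
        Err(err) => eprintln!("transcription failed: {:?}", err),
    }
}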

Performance

This code is not the best: I'm doing a lot of string operations with copies and clones, and I'm sure anyone with more Rust experience can spot a lot of places where this code could be optimized. At first I thought the translation could be done during parsing, keeping enough context to be able to look backward and forward.

Maybe it's possible to read char by char, keeping a buffer and detecting whether any of the rules can be applied to its content, but I've based this lib on the Python one, so for me it was easier to translate each regex to a pest rule and then do the same replacements in the lib.

I still think there's a better solution to this problem, but sometimes it's better to have something that just works than a perfect solution that never gets done.

During this process I've learned to use pest and I've been playing a lot with regular expressions, so it was a fun project.

July 05, 2019

Fun with the ODRS, part 2

For the last few days I’ve been working on the ODRS, the review server used by GNOME Software and other open source software centers. I had to do a lot of work initially to get the codebase up to modern standards, but now it has unit tests (86% coverage!), full CI and is using the latest versions of everything. All this refactoring allowed me to add some extra new features we’ve needed for a while.

The first feature changes how we do moderation. The way the ODRS works means that any unauthenticated user can mark a review for moderation for any reason in just one click. This means that it’s no longer shown to any other user and requires a moderator to perform one of three actions:

  • Decide it’s okay, and clear the reported counter back to zero
  • Decide it’s not very good, and either modify it or delete it
  • Decide it’s spam or in any way hateful, and delete all the reviews from the submitter, adding them to the user blocklist

For the last few years it’s been mostly me deciding on the ~3k marked-for-moderation reviews with the help of Google Translate. Let me tell you, after all that my threshold for dealing with internet trolls is super low. There are already over 60 blocked users on the ODRS, although they’ll never really know they are shouting into /dev/null.

One change I’ve made here is that it now takes two “reports” of a review before it needs moderation; the logic being that a lot of reports seem accidental and a really bad review is already normally reported by multiple people in the few days after it’s been posted. The other change is that we now have a locale-specific “bad word list” that submitted reviews are checked against at submission time. If a review is flagged, the moderator has to decide on the action before it’s ever shown to other users. This has already correctly flagged 5 reviews in the couple of days since it was deployed. If you contributed to the spreadsheet with “bad words” for your country I’m very grateful. That bad word list will be available as a JSON dump on the ODRS on Monday in case it’s useful to other people. I fully expect it’ll grow and change over time.

The other big change is dealing with different application IDs. Over the last decade some applications have moved from “launchable-style” inkscape.desktop IDs to AppStream-style IDs like org.inkscape.Inkscape.desktop and are even reported in different forms, e.g. the Flathub-inspired org.inkscape.Inkscape and the Snappy io.snapcraft.inkscape-tIrcA87dMWthuDORCCRU0VpidK5SBVOc. Until today a review submitted against the old desktop ID wouldn’t match for the Flatpak one, and now it does. The same happens when we get the star ratings, which means that apps that change ID don’t start with a clean slate but instead inherit all the positivity of the old version. Of course, the usual per-request ordering and filtering is done, so older versions than the one requested might be shown lower than newer versions anyway.

This is also your monthly reminder to use <provides><id>oldname.desktop</id></provides> in your metainfo.xml file if you change your desktop ID. That includes you Flathub and Snapcraft maintainers too. If you do that client side then you at least probably get the right reviews if the software center does the right thing, but doing it server side as well makes really sure you’re getting the reviews and ratings you want in all cases.
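
For example, an application that moved from inkscape.desktop to org.inkscape.Inkscape could carry something like this in its metainfo.xml (an illustrative snippet, not taken from Inkscape itself):

<component type="desktop-application">
  <id>org.inkscape.Inkscape</id>
  <!-- keep reviews and ratings submitted against the previous desktop ID -->
  <provides>
    <id>inkscape.desktop</id>
  </provides>
</component>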

If all this sounds interesting, and you’d like to know more about the ODRS development, or would like to be a moderator for your language, please join the mailing list and I’ll post there next week when I’ve made the moderator experience nicer than it is now. It’ll also be the place to request help, guidance and also ask for new features.

Nuritzi’s Travel Sponsorship Guide for GUADEC 2019

The deadline for GUADEC 2019 sponsorship is tomorrow, Friday, July 5th. That means you still have a whole day to apply if you haven’t already 😉

This week, I had the opportunity of helping some GNOME newcomers apply for travel sponsorship, and I wanted to blog about some of the questions that came up along the way. I hope this helps anyone else who is trying to better understand how to apply for sponsorship under the new travel policy. 

Photo by Erwan Hesry on Unsplash

How to Apply for Travel Sponsorship

Philip Chimento and I helped rethink the travel policy this year as part of our term on the board. It was recently adopted and has a few important changes. Here’s a quick summary of the new process and what you’ll need to do to apply. 

  1. Read the GNOME Travel Policy. Make sure that you can receive money through one of the payment options that we provide, otherwise, you won’t be able to be reimbursed.

  2. Fill out the Application Form (you can copy and paste the template on the wiki page into an email).

  3. Send screenshots of your flight and hotel price search along with your application form to travel-committee@gnome.org (you can do the price search on Kayak or Expedia, for example. Take a screenshot or save a PDF of the 1st page of results). Here’s more info on how to do this.

    Remember that low-cost flights often make you pay extra for a checked bag, so make sure you include the baggage costs in your request. Here’s our policy on allowed baggage.

    Note: Organizers will often book entire hostels, dorms, hotels, or homes for GUADEC and some of our other large GNOME events. Sponsored individuals are typically expected to stay there and will get a pre-paid room for the duration of their stay instead of a reimbursement for lodging. Read more about this in the FAQ below.

  4. Determine if you need a visa, and if so, request an invitation letter.

  5. Take photos during the event, if possible, and blog about your experience to help tell more people about what you did and learned at the event.

  6. After the event, you have 6 weeks to file a reimbursement report. You’ll need to include receipts of your expenses and a link to your blog post. Here is more info on what’s expected. 

FAQs and Troubleshooting

Who can apply for sponsorship? 

Anybody who is interested in contributing to the GNOME community is encouraged to apply! You do not need to be a Foundation member; however, Foundation Members and event speakers get preference.

We have limited funds available, so if you’re not a Foundation Member, make sure to let us know why you’re excited about attending, how you’ve contributed or participated so far, any relevant participation you’ll have at the event (Volunteering? Attending a hackfest, workshop, or BoF?), and how you might participate in GNOME beyond the event. 

How do I know if event organizers provide pre-booked accommodations and what happens if they do?

If you are traveling to one of our main conferences, it is likely that event organizers will have pre-booked accommodation for conference participants. Sponsored individuals are expected to stay there and will be given a pre-booked room instead of reimbursement for lodging.

Event organizers will typically post about pre-booked accommodations on their website. For GUADEC 2019, for example, the pre-booked options are listed on the GUADEC 2019 website.

In your travel sponsorship application, you should list any special requests that you have for the pre-booked lodging options (e.g. requests around accessibility, single sex rooms, etc.). Accommodation of those requests is not guaranteed, but the travel committee will take your requests and preferences into account.

You can read more about lodging options and expectations in the “lodging costs” section of the travel policy.

Can I extend my travel dates so I can do some sightseeing after the event? 

Yes, you can extend your travel dates, but the GNOME Foundation can only reimburse you for the actual dates of the event + travel days. 

This means that screenshots of flight and hotel comparisons that you send with your application should only be for the event + travel days. 

If your request is approved, you can then book your flight for whatever dates you want, but you will only get reimbursed up to the amount that the travel committee approved you for.

Read more about this here.

What if I can’t afford to pay for my own ticket? 

The GNOME Foundation normally can’t directly pay for people’s tickets and only reimburses people at the end, but with this new travel policy, exceptions can sometimes be made. 

Please add any requests like this to your application, and the Travel Committee may contact you about it since these requests are decided on a case by case basis.

Flight prices have gone up since I sent in my request, what do I do?

We saw this happen a lot in previous years, so we are trying out this new policy to help. Check out the “expired airfare” section of the travel policy for more information.

The visa application process is confusing — help!

I’ve heard this a lot, and I commiserate with those who need to apply for a visa to travel to our events! 

Here are some tips that might help: 

  1. Book cancellable flights and hotels. In some cases, you may not have to actually book flights and accommodation before your visa is approved. Instead, you can just provide an itinerary of your trip. Make sure you understand your visa requirements before trying to book something.

    If you do need to provide booked flights, try using a local travel agent. They can usually put a hold on tickets for some days — usually a couple of weeks. This means the tickets will be booked, but not paid for. You will have to pay the travel agency a nominal fee for their services. Make sure to talk to the travel committee about this before you engage with the travel agency and see if they can reimburse you the extra amount, if needed.

    For hotels, there are some sites like booking.com that let you cancel without penalty up to some days before the date of the trip. This is great, just in case your visa doesn’t get approved, you decide to stay somewhere else (like the pre-booked location that the event team might organize), or in case your travel plans change a bit as the travel dates get closer.

  2. Request an invitation letter. As mentioned above, you can request an invitation letter for GUADEC 2019. The local team is the only group who can help you get a letter, not the travel committee.

  3. Financial stability requirement: Generally visa applications will require you to present your bank statement for the last 3-6 months. This is done in order to assess your financial stability and make sure that you have enough of a financial buffer to bear additional expenses in the foreign land if needed.

    If you are a student, you can submit an attested letter from your parents/guardian, specifying that they can cover additional expenses (if any). If you do this, you will need to attach your parent/guardian’s bank statement to your application and specify this in your personal cover letter.

  4. Talk to the travel committee. The travel committee is a group of volunteers who really wants to help improve this process for the GNOME community. If you have specific questions, or face challenges along the way, make sure to contact them.

A huge thanks to Umang Jain, a member of the Travel Committee, for helping me create these tips!

I still have more questions, who can I ask for help? 

The travel committee can help you answer more specific questions. Here’s how to contact them: 

Email: travel-committee@gnome.org

IRC: #travel (here’s how to use IRC… it’s basically what we use to chat)

Riot: #travel (that link will help you join the correct channel via Riot, a chat client many of us use. Thanks for creating that link, by the way, @Carlos Soriano!)


Ok, that was a lot of information, but I hope it helps! 

See you in Greece 🙂

July 04, 2019

Awakening from the lucid dream

Illustration by Guweiz

As I entered the Facility, I could see Nicole sitting on a folding chair and tapping away frantically on her Gemini PDA, in what looked like a very temporary desk setup.

“That looks very uncomfortable!” I remarked.

She didn’t reply, but I didn’t expect a reply. She was clearly too busy and in a rush to get stuff done. Why all the clickety-clack, wasn’t the phone team around, just inside? Oh, she must be dealing with suppliers.

I moved past the tiny entrance lobby and into a slightly larger mess hall where I saw a bunch of people spread around the room, near tables hugging the walls, and having a gathering that seemed a bit like a hackfest “breakroom” discussion, with various people from the outside. At one of the tables, over what seemed like a grid of brainstorming papers, I saw Kat, who was busy with three other people, planning some sort of project sprint.

“Are you using Scrum?” I prompted.

“I am, but I don’t think they are” she answered, head down in preparation work to assemble a team.

While I was happy to see her familiar face, her presence struck me as odd; what would Kat, a Collabora QA team lead, be doing managing community folks on-site at a Purism facility in California? I would have thought Heather (Purism’s phone project manager) would be around, but I didn’t see her or other recognizable members of the team. Well, probably because I was just passing through a crowd of 20 people spread on tables around a lobby area—a transitional space—set up as an ad-hoc workshop. One of the walls had big windows that let me see into a shipping area and actual meeting rooms. I went to the meeting rooms.

As I entered a rectangular, classroom-like area, I found a hackfest—or rather a BoF session—taking place. In one of the table rows, I was greeted by Zeeshan Ali and Christian Hergert, who were having some idle chat.

As I shook their hands I said, “Hello old friends! Long time no see! Too bad this isn’t real.”

“How do you know this is not real?” Christian replied.

I slapped myself in the face, loudly, a couple of times. “Well,” I said, “even though these slaps feel fairly real, I’m pretty sure that’s just my mind playing a trick on me.”

I knew I was inside a mind construct, because I didn’t know the purpose of my visit nor how I’d got here… I knew I didn’t have the luxury to be flying across the continent on my own to leisurely walk into a hackfest I had no preparation for, and I knew Purism weren’t the ones paying for my trip: they no longer were my client since my last visit at the beginning of the summer.

Christian & Zeeshan were still looking at me, as if waiting to confirm I was lucid enough.

“The matrix isn’t that good,” I sighed.

A couple of seconds later, my eyes opened and I woke up. Looking at the bedside clock, it was 4:45 AM.

“Ugh, I hate to be right sometimes.”

Epilogue

It had been quite a long time since I last had a lucid dream, so I jotted this one down for further analysis. After all, good ol’ Sigmund always said,

“The interpretation of dreams is the royal road to a knowledge of the unconscious activities of the mind.”

…and lo, the dream I had that night was very much fashioned upon the context I have found myself in recently. Having been recently freed from an all-consuming battlefront, I can now take a step back to reposition myself, clear the fog and enjoy the summer after the last seven months of gray and gloomy weather. The coïncidences and parallels to be drawn—between the weather, this dream, and reality—are so many, it’s uncanny.

It is much like a reawakening indeed.

As I looked at all this today, I thought it would make a great introduction for me to resume blogging more regularly. And, well, it’s a fun story to share.

Resurfacing

This summer, as I now have some “personal R&D” time for the first time in many years, I am taking the opportunity to get back on track regarding many aspects of my digital life:

  • sorting out and cleaning up data from various projects;
  • getting back to “Inbox Zero”;
  • upgrading my software & personal systems;
  • reviewing my processes & tools for greater efficiency and effectiveness;
  • taking care of my web infrastructure and online footprint.

As part of this R&D phase, I hope to drain the swamp and start blogging again. Here’s the rough roadmap of what I plan to be writing about:

  1. a statement of availability for hire (or as a friend once said to me in 2014 on the verge of a crowdfunding campaign: “Where have all the cowboys (er, good managers) gone?”);
  2. a retrospective on “What the hell have I been up to for the last few years”;
  3. maybe some stories of organizations outside GNOME or the IT industry, for variety;
  4. lifehacks and technical discoveries I’ve found in that time;
  5. personal productivity tips & food for thought.

If I get to #3, 4, 5 in the coming months, I’ll consider that pretty good. And if there’s something you’d really like to see me write about in particular, feel free to contact me.

The post Awakening from the lucid dream appeared first on The Open Sourcerer.

July 03, 2019

DW5821e firmware update integration in ModemManager and fwupd

The Dell Wireless 5821e module is a Qualcomm SDX20 based LTE Cat16 device. This modem can work in either MBIM mode or QMI mode, and provides different USB layouts for each of the modes. In Linux kernel based and Windows based systems, the MBIM mode is the default one, because it provides easy integration with the OS (e.g. no additional drivers or connection managers required in Windows) and also provides all the features that QMI provides through QMI over MBIM operations.

The firmware update process of this DW5821e module is integrated in your GNU/Linux distribution, since ModemManager 1.10.0 and fwupd 1.2.6. There is no official firmware released in the LVFS (yet) but the setup is completely ready to be used, just waiting for Dell to publish an initial official firmware release.

The firmware update integration between ModemManager and fwupd involves different steps, which I’ll try to describe here so that it’s clear how to add support for more devices in the future.

1) ModemManager reports expected update methods, firmware version and device IDs

The Firmware interface in the modem object exposed in DBus contains, since MM 1.10, a new UpdateSettings property that provides a bitmask specifying which is the expected firmware update method (or methods) required for a given module, plus a dictionary of key-value entries specifying settings applicable to each of the update methods.

In the case of the DW5821e, two update methods are reported in the bitmask: “fastboot” and “qmi-pdc“, because both are required to have a complete firmware upgrade procedure. “fastboot” would be used to perform the system upgrade by using an OTA update file, and “qmi-pdc” would be used to install the per-carrier configuration files after the system upgrade has been done.

The list of settings provided in the dictionary contain the two mandatory fields required for all devices that support at least one firmware update method: “device-ids” and “version”. These two fields are designed so that fwupd can fully rely on them during its operation:

  • The “device-ids” field will include a list of strings providing the device IDs associated to the device, sorted from the most specific to the least specific. These device IDs are the ones that fwupd will use to build the GUIDs required to match a given device to a given firmware package. The DW5821e will expose four different device IDs:
    • “USB\VID_413C“: specifying this is a Dell-branded device.
    • “USB\VID_413C&PID_81D7“: specifying this is a DW5821e module.
    • “USB\VID_413C&PID_81D7&REV_0318“: specifying this is hardware revision 0x318 of the DW5821e module.
    • “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE“: specifying this is hardware revision 0x318 of the DW5821e module running with a Vodafone-specific carrier configuration.
  • The “version” field will include the firmware version string of the module, using the same format as used in the firmware package files used by fwupd. This requirement is obviously very important, because if the format used is different, the simple version string comparison used by fwupd (literally ASCII string comparison) would not work correctly. It is also worth noting that if the carrier configuration is also versioned, the version string should contain not only the version of the system, but also the version of the carrier configuration. The DW5821e will expose a firmware version including both, e.g. “T77W968.F1.1.1.1.1.VF.001” (system version being F1.1.1.1.1 and carrier config version being “VF.001”)
  • In addition to the mandatory fields, the dictionary exposed by the DW5821e will also contain a “fastboot-at” field specifying which AT command can be used to switch the module into fastboot download mode.

2) fwupd matches GUIDs and checks available firmware versions

Once fwupd detects a modem in ModemManager that is able to expose the correct UpdateSettings property in the Firmware interface, it will add the device as a known device that may be updated in its own records. The device exposed by fwupd will contain the GUIDs built from the “device-ids” list of strings exposed by ModemManager. E.g. for the “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” device ID, fwupd will use GUID “b595e24b-bebb-531b-abeb-620fa2b44045”.

fwupd will then be able to look for firmware packages (CAB files) available in the LVFS that are associated to any of the GUIDs exposed for the DW5821e.

The CAB files packaged for the LVFS will contain one single firmware OTA file plus one carrier MCFG file for each supported carrier in the given firmware version. The CAB files will also contain one “metainfo.xml” file for each of the supported carriers in the released package, so that per-carrier firmware upgrade paths are available: only firmware updates for the currently used carrier should be considered. E.g. we don’t want users running with the Vodafone carrier config to get notified of upgrades to newer firmware versions that aren’t certified for the Vodafone carrier.

Each of the CAB files with multiple “metainfo.xml” files will therefore be associated to multiple GUID/version pairs. E.g. the same CAB file will be valid for the following GUIDs (using Device ID instead of GUID for a clearer explanation, but really the match is per GUID not per Device ID):

  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” providing version “T77W968.F1.2.2.2.2.VF.002”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_TELEFONICA” providing version “T77W968.F1.2.2.2.2.TF.003”
  • Device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VERIZON” providing version “T77W968.F1.2.2.2.2.VZ.004”
  • … and so on.

Following our example, fwupd will detect our device exposing device ID “USB\VID_413C&PID_81D7&REV_0318&CARRIER_VODAFONE” and version “T77W968.F1.1.1.1.1.VF.001” in ModemManager and will be able to find a CAB file for the same device ID providing a newer version “T77W968.F1.2.2.2.2.VF.002” in the LVFS. The firmware update is possible!

3) fwupd requests device inhibition from ModemManager

In order to perform the firmware upgrade, fwupd requires full control of the modem. Therefore, when the firmware upgrade process starts, fwupd will use the new InhibitDevice(TRUE) method in the Manager DBus interface of ModemManager to request that a specific modem with a specific uid should be inhibited. Once the device is inhibited in ModemManager, it will be disabled and removed from the list of modems in DBus, and no longer used until the inhibition is removed.

The inhibition may be removed by calling InhibitDevice(FALSE) explicitly once the firmware upgrade is finished, and will also be automatically removed if the program that requested the inhibition disappears from the bus.

4) fwupd downloads CAB file from LVFS and performs firmware update

Once the modem is inhibited in ModemManager, fwupd can right away start the firmware update process. In the case of the DW5821e, the firmware update requires two different methods and two different upgrade cycles.

The first step would be to reboot the module into fastboot download mode using the AT command specified by ModemManager in the “fastboot-at” entry of the “UpdateSettings” property dictionary. After running the AT command, the module will reset itself and reboot with a completely different USB layout (and different vid:pid) that fwupd can detect as being the same device as before but in a different working mode. Once the device is in fastboot mode, fwupd will download and install the OTA file using the fastboot protocol, as defined in the “flashfile.xml” file provided in the CAB file:

<parts interface="AP">
  <part operation="flash" partition="ota" filename="T77W968.F1.2.2.2.2.AP.123_ota.bin" MD5="f1adb38b5b0f489c327d71bfb9fdcd12"/>
</parts>

Once the OTA file is completely downloaded and installed, fwupd will trigger a reset of the module also using the fastboot protocol, and the device will boot from scratch on the newly installed firmware version. During this initial boot, the module will report itself running in a “default” configuration not associated to any carrier, because the OTA file update process involves fully removing all installed carrier-specific MCFG files.

The second upgrade cycle performed by fwupd once the modem is detected again involves downloading all carrier-specific MCFG files one by one into the module using the QMI PDC protocol. Once all are downloaded, fwupd will activate the specific carrier configuration that was previously active before the download was started. E.g. if the module was running with the Vodafone-specific carrier configuration before the upgrade, fwupd will select the Vodafone-specific carrier configuration after the upgrade. The module would be reset one last time using the QMI DMS protocol as a last step of the upgrade procedure.

5) fwupd removes device inhibition from ModemManager

The upgrade logic will finish by removing the device inhibition from ModemManager using InhibitDevice(FALSE) explicitly. At that point, ModemManager would re-detect and re-probe the modem from scratch, which should already be running in the newly installed firmware and with the newly selected carrier configuration.

Pitivi – Making a Nest


Pitivi is an open source video editor for Linux. It provides creatives with a simple and elegant interface to edit their videos and bring them to life. As with every great piece of software, Pitivi’s development community is always striving to add newer and better features. This year I participated in the Google Summer of Code to add the ‘Nesting’ feature to the platform. I am currently working on this with my mentor, Thibault Saunier. In this blog I chart out our current progress and the future tasks at hand.

Nesting Clips:

With nesting, users can combine a series of sequences into a master clip. This master clip can be edited like a normal clip, while still letting users go into the master clip’s timeline and make changes. This will help them organise the timeline, enable re-usability of sequences, and provide a richer user experience.

Nesting in Adobe Premiere Pro

(Here is a link to better understand Nesting: https://www.youtube.com/watch?v=A8Aw53JBLZY )

For the past few weeks we have been working on the back end, GStreamer Editing Services (GES), to implement nesting of clips in a timeline. Earlier, with ges-launch-1.0, we could create a timeline, load several clips into it with +clip, add effects to them with +effect, set their properties, and so on. Now a .xges file can also be used as a source with ges-launch-1.0, so nested timelines can be created.

(Here is a link to ges-launch-1.0 documentation: https://gstreamer.freedesktop.org/documentation/tools/ges-launch.html?gi-language=c#)

So while nesting in Pitivi, the idea is to create a new timeline, copy and paste the selected clips into this timeline, remove them from the main timeline, and finally add the new timeline back to the main timeline.

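To make that idea concrete, here is a minimal sketch of the same flow using the GES C API directly: build an inner timeline, serialize it to a .xges file, and then add that file to the main timeline like any other clip. The file paths and durations are made up for illustration, and loading a .xges file as a clip relies on the in-progress nesting support described above:

  #include <ges/ges.h>

  int
  main (int argc, char **argv)
  {
    gst_init (&argc, &argv);
    ges_init ();

    /* Build an "inner" timeline that will become the nested (master) clip */
    GESTimeline *inner = ges_timeline_new_audio_video ();
    GESLayer *inner_layer = ges_timeline_append_layer (inner);
    ges_layer_add_asset (inner_layer,
                         GES_ASSET (ges_uri_clip_asset_request_sync ("file:///tmp/clip1.ogv", NULL)),
                         0, 0, 5 * GST_SECOND, GES_TRACK_TYPE_UNKNOWN);

    /* Serialize it; the resulting .xges file can then be used as a source */
    ges_timeline_save_to_uri (inner, "file:///tmp/nested.xges", NULL, TRUE, NULL);

    /* Add the serialized timeline to the main timeline like any other clip */
    GESTimeline *main_timeline = ges_timeline_new_audio_video ();
    GESLayer *main_layer = ges_timeline_append_layer (main_timeline);
    ges_layer_add_asset (main_layer,
                         GES_ASSET (ges_uri_clip_asset_request_sync ("file:///tmp/nested.xges", NULL)),
                         0, 0, 5 * GST_SECOND, GES_TRACK_TYPE_UNKNOWN);

    return 0;
  }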

Testing:

gst-validate-launcher is used to create test suites that test the behaviour of the created pipelines, and to test the user actions described in the .scenario files.

(Here is the link to gst-validate-launcher documentation: https://gstreamer.freedesktop.org/documentation/gst-devtools/gst-validate-launcher.html?gi-language=c )

I have been busy implementing tests and writing scenarios for nesting, and the test suite is working properly. For the playback.nested tests, ges-launch-1.0’s +clip is used to add the .xges file to the timeline instead of -l. Thanks to Thibault, most of the tests are passing successfully, so ‘gst-validate-launcher ges’ now generates and runs the tests for nesting.

The scenarios seek on nested timelines and check whether the output frame is correct. To be specific, they load a clip and serialize it into a .xges file, resulting in a nested timeline. They then load that .xges file, seek, and check the frames while moving clips around in the layers and adding effects to the nested timeline. Basically, they emulate a user’s actions. Currently I am wrapping up a few scenarios.

The journey ahead:

The next part of the project will involve implementing the user interface for nesting in Pitivi. I suggested some ideas about the interface in my proposal, but we will have thorough discussions before deciding on the final design.

My experience working in Pitivi:

In the past few weeks I’ve learnt and improved a lot. In the beginning I was a bit reserved and shy about raising my problems, but after talking to and getting to know my mentor, I think I’ve overcome that fear. His guidance has been crucial in this journey. Until now he has done all of the heavy lifting on the back end, all the while helping me get up to speed. Hopefully I will now be able to take the reins while continuing to learn from him. I look forward to an amazing summer and the work we have in front of us.

July 02, 2019

Constraint layouts

What are constraints

At its most basic, a constraint is a relation between two values. The relation
can be described as a linear equation:

target.attribute = source.attribute × multiplier + constant

For instance, a layout where a blue box starts 8 units after the end of a red box can be described as:

blue.start = red.end × 1.0 + 8.0

Or, breaking the equation down into its parts:

  • the attribute, “start”, of the target, “blue”, which is going to be set by the constraint; this is the left hand side of the equation
  • the relation between the left and right hand sides of the equation, in this case equality; relations can also be greater than or equal to,
    and less than or equal to
  • the attribute, “end”, of the source, “red”, which is going to be read by the constraint; this is the right hand side of the equation
  • the multiplier, “1.0”, applied to the attribute of the source
  • the constant, “8.0”, an offset added to the attribute

A constraint layout is a series of equations like the one above, describing all the relationships between the various parts of your UI.

It’s important to note that the relation is not an assignment, but an equality (or an inequality): both sides of the equation will be solved in a way that satisfies the constraint; this means that the list of constraints can be rearranged; for instance, the example above can be rewritten as:

red.end = blue.start × 1.0 - 8.0

In general, for the sake of convenience and readability, you should arrange your constraints in reading order, from leading to trailing edge, from top to bottom. You should also favour whole numbers for multipliers, and positive numbers for constants.

Solving the layout

Systems of linear equations can have one solution, multiple solutions, or even no solution at all. Additionally, for performance reasons, you don’t really want to recompute all the solutions every time.

Back in 1998, the Cassowary algorithm for solving linear arithmetic constraints was published by Greg J. Badros and Alan Borning, alongside its implementation in C++, Smalltalk, and Java. The Cassowary algorithm tries to solve a system of linear equations by finding its optimal solution; additionally, it does so incrementally, which makes it very useful for user interfaces.

Over the past decade various platforms and toolkits started providing layout managers based on constraints, and most of them used the Cassowary algorithm. The first one was Apple’s AutoLayout, in 2011; in 2016, Google added a ConstraintLayout to the Android SDK.

In 2016, Endless implemented a constraint layout for GTK 3 in a library called Emeus. Starting from that work, GTK 4 now has a GtkConstraintLayout layout manager available for application and widget developers.

The machinery that implements the constraint solver is private to GTK, but the public API provides a layout manager that you can assign to your GtkWidget class, and an immutable GtkConstraint object that describes each constraint you wish to add to the layout, binding two widgets together.
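As a rough sketch, the “blue.start = red.end × 1.0 + 8.0” constraint from the beginning of this post could be expressed with this API along the following lines; the parent, red, and blue widget variables are illustrative, while the functions and enumerations are the GTK 4 API:

  #include <gtk/gtk.h>

  /* Attach a constraint layout to a container and add a single constraint
   * between two of its children */
  static void
  setup_constraints (GtkWidget *parent, GtkWidget *red, GtkWidget *blue)
  {
    GtkLayoutManager *manager = gtk_constraint_layout_new ();
    gtk_widget_set_layout_manager (parent, manager);

    /* blue.start = red.end × 1.0 + 8.0 */
    GtkConstraint *constraint =
      gtk_constraint_new (blue, GTK_CONSTRAINT_ATTRIBUTE_START,
                          GTK_CONSTRAINT_RELATION_EQ,
                          red, GTK_CONSTRAINT_ATTRIBUTE_END,
                          1.0, 8.0,
                          GTK_CONSTRAINT_STRENGTH_REQUIRED);
    gtk_constraint_layout_add_constraint (GTK_CONSTRAINT_LAYOUT (manager),
                                          constraint);
  }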

Guiding the constraints

Constraints use widgets as sources and targets, but there are cases when you want to bind a widget attribute to a rectangular region that does not really draw anything on screen. You could add a dummy widget to the layout and then set its opacity to 0 to avoid it being rendered, but that would add unnecessary overhead to the scene. Instead, GTK provides GtkConstraintGuide, an object whose only job is to contribute to the layout:

An example of the guide UI element

In the example above, only the widgets marked as “Child 1” and “Child 2” are going to be visible, while the guide is going to be an empty space.

Guides have a minimum, natural (or preferred), and maximum size. All of them are constraints, which means you can use guides not just as helpers for alignment, but also as flexible spaces in a layout that can grow and shrink.
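For instance, a guide acting as a flexible space could be set up roughly as follows; the size values are arbitrary examples, and the layout argument is assumed to be the GtkConstraintLayout attached to the parent widget:

  /* Sketch: a space at least 100 px wide, preferably 200 px, at most 500 px;
   * the height values are placeholders for this example */
  static void
  add_flexible_space (GtkConstraintLayout *layout)
  {
    GtkConstraintGuide *guide = gtk_constraint_guide_new ();
    gtk_constraint_guide_set_min_size (guide, 100, 10);
    gtk_constraint_guide_set_nat_size (guide, 200, 10);
    gtk_constraint_guide_set_max_size (guide, 500, 10);
    gtk_constraint_layout_add_guide (layout, guide);

    /* The guide can now be used as the source or target of constraints,
     * just like a widget */
  }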

Describing constraints in a layout

Constraints can be added programmatically, but like many things in GTK, they can also be described inside GtkBuilder UI files, for convenience. If you add a GtkConstraintLayout to your UI file, you can list the constraints and guides inside the special “<constraints>” element:

  <object class="GtkConstraintLayout">
    <constraints>
      <constraint target="button1" target-attribute="width"
                     relation="eq"
                     source="button2" source-attribute="width" />
      <constraint target="button2" target-attribute="start"
                     relation="eq"
                     source="button1" source-attribute="end"
                     constant="12" />
      <constraint target="button1" target-attribute="start"
                     relation="eq"
                     source="super" source-attribute="start"
                     constant="12" />
      <constraint target="button2" target-attribute="end"
                     relation="eq"
                     source="super" source-attribute="end"
                     constant="-12"/>
    </constraints>
  </object>

You can also describe a guide, using the “<guide>” custom element:

  <constraints>
    <guide min-width="100" max-width="500" />
  </constraints>

Visual Format Language

Aside from XML, constraints can also be described using a compact syntax called “Visual Format Language”. VFL descriptions are row and column oriented: you describe each row and column in the layout using a line that visually resembles the layout you’re implementing, for instance:

|-[findButton]-[findEntry(<=250)]-[findNext][findPrev]-|

This describes a horizontal layout where the findButton widget is separated from the leading edge of the layout manager by a default amount of space, and followed by the same default amount of space; then comes the findEntry widget, which is meant to be at most 250 pixels wide. After the findEntry widget we have some default space again, followed by two widgets, findNext and findPrev, flush against each other; finally, these two widgets are separated from the trailing edge of the layout manager by the default amount of space.

Using the VFL notation, GtkConstraintLayout will create all the required constraints without necessarily having to describe them all manually.

It’s important to note that VFL cannot describe all possible constraints; in some cases you will need to create them using GtkConstraint’s API.
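As a rough sketch, feeding the VFL row above to a constraint layout could look like the following; the find_button, find_entry, find_next, and find_prev widget variables are hypothetical, while the function is the GTK 4 API:

  static void
  add_find_row (GtkConstraintLayout *layout,
                GtkWidget *find_button, GtkWidget *find_entry,
                GtkWidget *find_next, GtkWidget *find_prev)
  {
    const char * const vfl[] = {
      "|-[findButton]-[findEntry(<=250)]-[findNext][findPrev]-|",
    };
    GError *error = NULL;

    /* Map each VFL view name to an actual widget; -1 keeps the default spacing */
    gtk_constraint_layout_add_constraints_from_description (
        layout, vfl, G_N_ELEMENTS (vfl),
        -1, -1,
        &error,
        "findButton", find_button,
        "findEntry", find_entry,
        "findNext", find_next,
        "findPrev", find_prev,
        NULL);

    if (error != NULL)
      {
        g_warning ("Could not parse the VFL description: %s", error->message);
        g_clear_error (&error);
      }
  }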

Limits of a constraint layout

Constraint layouts are immensely flexible because they can implement any layout policy. This flexibility comes at a cost:

  • your layout may have too many solutions, which makes it ambiguous and unstable; this can be problematic, especially if your layout is very complex
  • your layout may not have any solution. This is usually the case when you’re not using enough constraints; a rule of thumb is to use at least two constraints per target per dimension, since all widgets should have a defined position and size
  • the same layout can be described by different series of constraints; in some cases it’s virtually impossible to say which approach is better, which means you will have to experiment, especially when it comes to layouts that dynamically add or remove UI elements, or that allow user interactions like dragging UI elements around

Additionally, at larger scales, a local, ad hoc layout manager may very well be more performant than a constraint-based one; if you have a list box that can grow to an unknown number of rows, you should not replace it with a constraint layout unless you measure the performance impact upfront.

Demos

Of course, since we added this new API, we also added a few demos to the GTK Demo application:

The constraints demo window, as part of the GTK Demo application.

As well as a full constraints editor demo:

A screenshot of the GTK constraints editor demo, showing the list of UI elements, guides, and constraints in a sidebar on the left, and the result on the right side of the window.


Removing rsvg-view

I am preparing the 2.46.0 librsvg release. This will no longer have the rsvg-view-3 program.

History of rsvg-view

Rsvg-view started out as a 71-line C program to aid development of librsvg. It would just render an SVG file to a pixbuf, stick that pixbuf in a GtkImage widget, and show a window with that.

Over time, it slowly acquired most of the command-line options that rsvg-convert supports. And I suppose, as a way of testing the Cairo-ification of librsvg, it also got the ability to print SVG files to a GtkPrintContext. At last count, it was a 784-line C program that is not really the best code in the world.

What makes rsvg-view awkward?

Rsvg-view requires GTK. But GTK requires librsvg, indirectly, through gdk-pixbuf! There is not a hard circular dependency because GTK goes, "gdk-pixbuf, load me this SVG file" without knowing how it will be loaded. In turn, gdk-pixbuf initializes the SVG loader provided by librsvg, and that loader reads/renders the SVG file.

Ideally librsvg would only depend on gdk-pixbuf, so it would be able to provide the SVG loader.

The rsvg-view source code still has a few calls to GTK functions which are now deprecated. The program emits GTK warnings during normal use.

Rsvg-view is... not a very good SVG viewer. It doesn't even start up with the window scaled properly to the SVG's dimensions! If used for quick testing during development, it cannot even help with viewing the transparent background regions that the SVG does not cover. It just sticks a lousy custom widget inside a GtkScrolledWindow, and does not have the conventional niceties for viewing images, like zooming with the scroll wheel.

EOG is a much better SVG viewer than rsvg-view, and people actually invest effort in making it pleasant to use.

Removal of rsvg-view

So, the next version of librsvg will not provide the rsvg-view-3 binary. Please update your packages accordingly. Distros may be able to move the compilation of librsvg to a more sensible place in the platform stack, now that it doesn't depend on GTK being available.

What can you use instead? Any other image viewer. EOG works fine; there are dozens of other good viewers, too.

Canonical’s Desktop Team is hiring

Join the desktop team

Some good news for anyone who might read this. In the Canonical desktop team we’re hiring a new Software Engineer.

More details in the job description, but if you’re looking for an opportunity that lets you:

  • work remotely
  • work on GNOME and related desktop technologies (both upstream and downstream!)
  • help to ship a solid Ubuntu every 6 months
  • work with smart people
  • have the opportunity to travel to, and present at, conferences and internal events

then please apply. You do not need to already be a GNOME or a Ubuntu/Debian expert to apply for this position – you’ll be given a mentor and plenty of time and support to learn the ropes.

Please feel free to contact me on IRC (Laney on all the best networks) / email (iain.lane@canonical.com) / Telegram (@lan3y) if you’re considering an application and you’d like to chat about it.

Network and Disk sources

Sysprof has gained network and disk device statistics. You can use the combined graphs for a quick overview, or view them individually.

June 30, 2019

Now I have a web Solid pod

I’ve just created my Solid pod: https://olea.solid.community/.

Tim Berners-Lee proposes Solid as a way to implement his original vision for the World Wide Web. If timbl says something like this then I’m interested:

Within the Solid ecosystem, you decide where you store your data. Photos you take, comments you write, contacts in your address book, calendar events, how many miles you run each day from your fitness tracker… they’re all stored in your Solid POD. This Solid POD can be in your house or workplace, or with an online Solid POD provider of your choice. Since you own your data, you’re free to move it at any time, without interruption of service.

More details are at https://solid.inrupt.com/how-it-works.

I’ve poked around just a bit to see what Solid can do; I don’t have much time for it right now. It’s nice to see that it’s based on linked data, so the potential applications are endless. And they have a forum too (running Discourse, ♥).

My personal IT strategy is to run my own services as much as I can. Solid has a server implementation available that I would like to deploy somewhere in the future.

Love to see the Semantic Web coming back.

June 28, 2019

On Version Numbers

I’m excited to announce that Epiphany Tech Preview has reached version 3.33.3-33, as computed by git describe. That is 33 commits after 3.33.3:

Epiphany about dialog displaying the version number

I’m afraid 3.33.4 will arrive long before we make it to 3.33.3-333, so this is probably the last cool version number Epiphany will ever have.

I might be guilty of using an empty commit to claim the -33 commit.

I might also apologize for wasting your time with a useless blog post, except this was rather fun. I await the controversy of your choice in the comments.