GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

December 11, 2017

#PeruRumboGSoC2018 – Session 4

We celebrated another session of the local challenge 2017-2 “PeruRumboGSoC2018” yesterday. It was held at the Centro Cultural Pedro Paulet of FIEE UNI. GTK on C was explained during the first two hours of the morning, based on the window* exercises from my repo, to handle widgets such as windows, labels and buttons. Before the scheduled lunch, we were able to program a Language Selector using a grid with GTK on C. These are some of the students’ git repos: Fiorella, Cris, Alex, Johan Diego & Giohanny. We shared a delicious Pollo a la Brasa, and a tasty Inca Kola to drink, during our lunch.
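
For a taste of the exercise, here is a minimal sketch of a grid-based language selector in GTK+ 3 on C. The application id, the language list and the click handler are made up for illustration; the real exercises live in the window* files of the repo mentioned above.

#include <gtk/gtk.h>

static void
on_language_clicked (GtkButton *button, gpointer user_data)
{
    /* In the real mini-application this opened a useful link for the language */
    g_print ("Selected: %s\n", gtk_button_get_label (button));
}

static void
activate (GtkApplication *app, gpointer user_data)
{
    GtkWidget *window = gtk_application_window_new (app);
    GtkWidget *grid = gtk_grid_new ();
    const char *languages[] = { "C", "Python", "Vala", "Rust" };
    guint i;

    gtk_window_set_title (GTK_WINDOW (window), "Language Selector");

    for (i = 0; i < G_N_ELEMENTS (languages); i++) {
        GtkWidget *button = gtk_button_new_with_label (languages[i]);
        g_signal_connect (button, "clicked",
                          G_CALLBACK (on_language_clicked), NULL);
        /* Lay the buttons out two per row on the grid */
        gtk_grid_attach (GTK_GRID (grid), button, i % 2, i / 2, 1, 1);
    }

    gtk_container_add (GTK_CONTAINER (window), grid);
    gtk_widget_show_all (window);
}

int
main (int argc, char **argv)
{
    GtkApplication *app;
    int status;

    app = gtk_application_new ("org.example.LanguageSelector",
                               G_APPLICATION_FLAGS_NONE);
    g_signal_connect (app, "activate", G_CALLBACK (activate), NULL);
    status = g_application_run (G_APPLICATION (app), argc, argv);
    g_object_unref (app);
    return status;
}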

Martin Vuelta helped us make our mini-application work: clicking a single language or multiple languages in our language selector opens useful links for learning those programming languages. We needed to install the webkit4 packages on Fedora 27. Thank you so much, Martin, for supporting the group with your expertise and good sense of humor! We are going to have three more sessions this week to finish the program.


Filed under: Education, Events, FEDORA, GNOME, τεχνολογια :: Technology, Programming Tagged: #PeruRumboGSoC2018, apply GSoC, fedora, Fedora + GNOME community, Fedora Lima, gnome 3, GSoC 2018, GSoC Peru Preparation, Julita Inca, Julita Inca Chiroque, Lima, Peru Rumbo al GSoC 2018

ZeMarmot project got a Liberapay account!

We were asked a few times about trying out Liberapay. It differs from other recurring funding platforms (such as Patreon and Tipeee) in that it is managed by a non-profit and has lower fees (from what I understand, there are payment processing fees, but Liberapay adds no fees of its own) since they fund themselves on their own platform (the website itself is also Free Software).

Though we liked the concept, until now we were a bit reluctant for a single reason: ZeMarmot is already on 2 platforms (we started before even knowing about Liberapay), and more platforms mean more time spent managing them all, time we prefer to spend hacking Free Software and drawing/animating Libre Animation.

Nevertheless, Patreon recently made fee policy changes which seem to have angered the web (just search for it on your favorite web search engine; there are dozens of articles on the topic), and like most Patreon projects we lost many patrons (at the time of writing, 20 patrons accounting for $80.63 of pledges have left in 4 days, more than 10% of the patronage we received from this platform!).

So we decided to give Liberapay a try. If you like ZeMarmot project, and our contributions to GIMP, then feel free to fund us at:

» ZeMarmot Liberapay page «
https://liberapay.com/ZeMarmot/

Main differences from other platforms:

  • both EUR (€) and USD ($) donations are accepted, which is cool;
  • there is no news system, so one has to get news oneself (for instance by reading the project blog or Twitter);
  • all patrons are fully anonymous (which means they won’t appear in credits of the movie);
  • localization is supported (right now our page is only in English, but we will make a French version soon!).

Now this is just another platform; we are not abandoning Patreon and Tipeee. Don’t feel like you have to move your patronage to Liberapay if you don’t want to. From now on, this will simply be one more option.

Finally, we would remind you that the ZeMarmot project is fully managed by LILA, a non-profit registered in France. This means there are also other ways to support the project, for instance direct bank transfers (most European banks allow monthly bank transfers without any fees, so if you are in the EU this may be the best solution) or Paypal (fees are quite expensive for very small amounts, but quite OK for most donations), etc. To see the full list of ways to fund LILA, and hence ZeMarmot and GIMP: https://libreart.info/en/donate

CSR devices now supported in fwupd

On Friday I added support for yet another variant of DFU. This variant is called “driverless DFU” and is used only by BlueCore chips from Cambridge Silicon Radio (now owned by Qualcomm). “Driverless” just means that it’s DFU-like and routed over HID, but it’s otherwise an unremarkable protocol. CSR is a huge ODM that makes most of the Bluetooth audio chips in vendor hardware. The hardware vendor can enable or disable features on the CSR microcontroller depending on licensing options (for instance echo cancellation), and there’s even a little virtual machine to do simple vendor-specific things. All the CSR chips are updatable in-field, and most vendors issue updates to fix sound quality issues or to add support for new protocols or devices.

The BlueCore CSR chips are used everywhere. If you have a “wireless” speaker or headphones that use Bluetooth, there is a high probability they’re using a CSR chip inside. This makes the addition of CSR support to fwupd a big deal for reaching a lot of vendors. It’s a lot easier to say “just upload firmware” rather than “you have to write code”, so I think this work was useful to have done.

The vendor working with me on this feature has been the awesome AIAIAI who make some very nice modular headphones. A few minutes ago we uploaded the H05 v1.5 firmware to the LVFS testing stream and v1.6 will be coming soon with even more bug fixes. To update the AIAIAI H05 firmware you just need to connect the USB cable and press and hold the top and bottom buttons on the headband until the LED goes out. You can then update the firmware using fwupdmgr update or just using GNOME Software. The big caveat is that you have to be running fwupd >= 1.0.3 which isn’t scheduled to be released until after Christmas.

I’ve contacted some more vendors I suspect are using the CSR chips. These include:

  • Jarre Technologies
  • RIVA Audio
  • Avantree
  • Zebra
  • Fugoo
  • Bowers&Wilkins
  • Plantronics
  • BeoPlay
  • JBL

If you know of any other “wireless speaker” companies that have issued at least one firmware update to users, please let me know in a comment here or in an email. I will follow up on all suggestions and put the status on the Naughty&Nice vendor list, so please check that before suggesting a company. It would also be really useful to know the contact details (e.g. the web-form URL, or the email address) and the model name of the device that might be updatable, although I’m happy to google myself if required. Thanks as always to Red Hat for allowing me to work on this stuff.

Grow your skills with GNOME

Another year of GNOME development is coming to a close so it’s time to look back as we forge into 2018. This is going to be more verbose than I generally write. I hope you’ll have a warm drink and take the time to read through because I think this is important.

Twenty years of GNOME is a monumental achievement. We know so much more about software development than we did when we started. We regularly identify major shortcomings and try to address them. That is a part of our shared culture I enjoy greatly.

GNOME contributors have a wide variety of interests and directions when it comes to a computer’s role in our lives. That naturally creates an ever-expanding set of goals. As our goals expand, we must become more organized if our quality is to hold steady or improve.

Traditionally, ours has been a very loosely organized project. People spend their time on things that interest them, which does not put the focus on the end product. I intend to convince you that this is now holding us back. We’re successful not because of our engineering focus but despite it. The result is overworked contributors, and we can do better.


Those who have not worked in larger engineering companies may be less familiar with some of the roles in software development, so let’s take a moment to describe these roles so everyone is on the same page.

Programmers are responsible for the maintenance of the code-base and implementing new features. All of us are familiar with this role in GNOME because it’s what a large number of our contributors do.

Designers are responsible for thinking through the current and planned features to find improved ways for users to solve their problems.

Graphic Designers can often overlap with Design, but not necessarily. They’re responsible for creating the artwork used in the given project.

Quality Assurance ensures that you don’t ship a product that is broken. You don’t wait until the freezes to do this; you do it as features are developed, so that the code is fresh in the programmers’ minds while addressing the issues. The sooner you catch issues, the less likely a code or design failure reaches users.

User Support is your front-line defense to triage incoming issues by your users. Without users your project is meaningless. Finding good people for this role can have a huge impact on keeping your users happy and your developers less stressed. If your bug tracker is also your user support, you might want to ask yourself if you really have user support. When you have a separate support system and bug-tracker, user support is responsible for converting user issues into detailed bug reports.

Security Engineers look for trust, privacy, and other safety issues in products and infrastructure. They take responsibility to ensure issues are fixed in a timely manner and work with others when planning features to help prevent issues in the first place.

User and Developer Advocates are liaisons between your team and the people using (or developing third-party tools with) your product. They amplify the voices of those speaking important truths.

User Testing is responsible for putting your product in front of users and seeing how well they can perform the given tasks. Designers use this information to refine and alter designs.

Tech writers are responsible for writing technical documentation and help guides. They also help refine programmer authored API documentation. This role often fulfills the editor role to ensure a unified voice to your project’s written word.

Build engineers ensure that your product can be packaged, built reliably, and distributed to users.

Operations and “DevOps” ensure that your product is working day-to-day. They provide and facilitate the tooling that these roles need to do their jobs well.

Internationalization and localization ensure that your software is available to a group of users who might otherwise not be able to use your software. It enables your software to have global impact.

Release management is your final check-point for determining when and what you release based on the goals of the project. They don’t necessarily determine road-maps, but they do keep you honest.

Product managers are responsible for taking information and feedback from all these roles and converting that into a coherent project direction and forward-looking vision. They analyze common issues, bug velocity, and features to ensure reasonable milestones that keep the product functional as it transforms into its more ideal state. Most importantly, this role is leadership.

There are other roles involved in the GNOME project. If I didn’t include your role here, it is by no means of any lesser value.


For the past 3 years I’ve been working very hard because I fulfill a number of these roles for Builder. It’s exhausting and unsustainable. It contributes to burnout and hostile communication by putting too much responsibility on too few people’s shoulders.

I believe that communication breakdown is a symptom of a greater problem, not the problem itself.

To improve the situation, we need to encourage more people to join us in non-programming roles. That doesn’t mean that you can’t program, but simply a recognition that these other roles are critical to a functioning and comprehensive software project.


There are a few strategies companies use to structure teams, but they can be generalized into three forms, as follows.

Teams based on product contain the aforementioned roles, but are assembled in a tight-knit group where people are focused on a single product. This model can excel at ensuring all of the members are focused on a single vision.

Teams based on role contain the aforementioned roles, but are assembled by role. Members of the team work on different projects. This model can excel at cross-training newer team members.

A hybrid approach tries to balance the strengths of both team-based and role-based so that your team members get long-term mentorship but stick around in a project long enough to benefit from contextual knowledge.

To some degree we have teams based on role, even though it’s very informal. I think we could really gain from increasing our contributors in these roles and taking a hybrid approach. For the hybrid approach to work, there needs to be strong mentorship for each role.

My current opinion is that with a strong focus on individual products, we can improve our depth of quality and address many outstanding user issues.

Because our teams are so loosely assembled, I think it is very difficult for someone to join GNOME and fill one of these non-programming roles in an existing project. They not only need to fulfill the role but also define what that role should be and how it would contribute to the project. Then they need to convince the existing team members it’s needed.


With stronger inclusion of these roles into our software process we can begin to think about the long-term skill development of our contributors.

A good manager shepherds their team by ensuring they refine existing skills while expanding to new areas of interest.

I want people to know that by joining GNOME they can feel assured that they will be part of something greater than themselves. They will both refine and develop new skills. In many ways, we provide an accelerator for career development. We can provide an opportunity that might otherwise be unapproachable.


If contributing to GNOME in one or more of these roles sounds interesting to you, then please come join us. We need to learn to rely on each other in new ways. For that to happen, self-organization to fulfill these roles must become a priority.

https://www.gnome.org/get-involved/

December 10, 2017

The art of the usability interview

During a usability test, it's important to understand what the tester is thinking. What were they looking for when they couldn't find a button or menu item? During the usability test, I recommend that you try to observe, take notes, capture as much data as you can about what the tester is doing. Only after the tester is finished with a scenario or set of scenarios should you ask questions.

But how do you ask questions in a way to gain the most insight? Asking the right questions can sometimes be an art form; it certainly requires practice. A colleague shared with me a few questions she uses in her usability interviews, and I am sharing them here for your usability interviews:

Before starting a scenario or set of scenarios:

  • What are three things you might do in this application?
  • What menu options do you see here and what do you think they do?
  • What might you do on this panel?
  • What is your first impression of the application?
  • What do these icons do? What do they represent?

After finishing a set of scenarios:

  • Who do you think the application was created for?
  • How easy did you think it was to get around the application?
  • If you could make one change to the application, what would it be?
  • Is there a feature you think is missing?
  • Do you remember any phrases or icons that you didn't understand?


The goal is to avoid leading questions, or any question that suggests a "right" or "wrong" answer.

December 09, 2017

scikit-survival 0.5 released

Today, I released a new version of scikit-survival. This release adds support for the latest version of scikit-learn (0.19) and pandas (0.21). In turn, support for Python 3.4, scikit-learn 0.18 and pandas 0.18 has been dropped.

Many people are confused about the meaning of predictions. Often, they assume that predictions of a survival model should always be non-negative since the input is the time to an event. However, this is not always the case. In general, predictions are risk scores of arbitrary scale. In particular, survival models usually do not predict the exact time of an event, but the relative order of events. If samples are ordered according to their predicted risk score (in ascending order), one obtains the sequence of events as predicted by the model. A more detailed explanation is available in the Understanding Predictions in Survival Analysis section of the documentation.

Download

You can install the latest version via Anaconda (Linux, OSX and Windows):

conda install -c sebp scikit-survival

or via pip:

pip install -U scikit-survival

December 08, 2017

Moving to Berlin

I have been meaning to document my experience of moving to Berlin, mainly to help people who are considering moving or are about to move. However, I'm lazy, and unless I'm paid to do so I'll never get around to it, so instead of posting nothing I'll just quickly list all the advice I have here:

  • Don't actually order any services through the check24 website. Only use it to compare prices etc.
  • Avoid Vodafone for broadband connection. Follow this thread for why.
  • Consider using an online bank, like N26.
  • For your Anmeldung,
    • go to Bürgeramt in Neukölln or Kreuzberg (unless you speak German).
    • book appointment around noon or be prepared to wait a month.
  • Make sure you have local friends who speak German and are willing to help you out. Many locals will tell you that you don't need German in Berlin but that is simply not true.
  • Either consider hiring an estate agent or make sure your temporary residence allows you to register at their address.
  • Related to the above, you don't have to pay the deposit upfront if you use the EuroKaution service.
  • Consider the after-10am monthly travel pass if you don't commute to work before that time.

Setting up Continuous Integration on gitlab.gnome.org

Simple Scan recently migrated to the new gitlab.gnome.org infrastructure. With modern infrastructure I now have the opportunity to enable Continuous Integration (CI), which is a fancy name for automatically building and testing your software when you make changes (and it can do more than that too).

I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!

There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.

To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:

build-ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install


The first line is the name of the job - "build-ubuntu". This is going to define how we build Simple Scan on Ubuntu.

The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" link which uses the most recently released Ubuntu version.

The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.

Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.

And with that, every time a change is made to the git repository Simple Scan is built on Ubuntu and tells me if that succeeded or not! To make things more visible I added the following to the top of the README.md:

[![Build Status](https://gitlab.gnome.org/GNOME/simple-scan/badges/master/build.svg)](https://gitlab.gnome.org/GNOME/simple-scan/pipelines)

This gives the following image that shows the status of the build:

pipeline status

And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:

build-fedora:
  image: fedora:latest
  before_script:
    - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
  script:
    - meson _build
    - ninja -C _build install


Now it builds on both Ubuntu and Fedora with every commit!

I hope this helps you get started with CI and gitlab.gnome.org. Happy hacking.

December 07, 2017

Default ColorSpaces

Recently a user filed a bug where the same RGB color, when converted into a UIColor and later into a CGColor, is different from going from the RGB value to a CGColor directly on recent versions of iOS.

You can see the difference here:

What is happening here is that CGColors created directly from RGB values are created in the kCGColorSpaceGenericRGB colorspace. Starting with iOS 10, UIColor objects are created with a device-specific color space; in my current simulator this value is kCGColorSpaceExtendedSRGB.

You can see the differences in this workbook

OSK update

There’s been a rumor that I was working on improving gnome-shell’s on-screen keyboard; what’s been going on here? Let me show you!

The design has been based on the mockups at https://wiki.gnome.org/Design/OS/ScreenKeyboard; here’s how it looks in English (mind you, it hasn’t gone through the theming wizards):

The keymaps are generated from CLDR (see here), which helped boost the number of supported scripts (cf. caribou); some visual examples:

As you can see, there are still a few ugly ones; the layouts aren’t as uniform as one might expect. These issues will be resolved over time.

The additional supported scripts don’t mean much without a way to send those fancy chars/strings to the client. Traditionally we were only able to send forged keyboard events, which meant we were restricted to keycodes that had a representation in the current keymap. On X11 we are kind of stuck with that, but we can do better on Wayland. This work relies on a simplified version of the text input protocol that I’m giving a last proofread before proposing as v3 (the branches currently use a private copy). Using a specific protocol allows for sending UTF-8 strings independently of the keymap, which is very convenient for text completion too.

But there are keymaps where CLDR doesn’t dare to go; prominent examples are Chinese and Japanese. For those, I’m looking into properly leveraging IBus so pinyin-like input methods work by feeding the results into the suggestions box:

Ni Hao!

The suggestion box even kind of works with the typing-booster IBus IM. But you have to activate it explicitly; there is room for improvement here in the future.

And there is of course still bad stuff and todo items. Some languages like Korean have neither a layout nor input methods that accept latin input, so they are badly handled (read: not at all). It would also be nice to support shape-based input.

Other missing things from the mockups are the special numeric and emoji keymaps, there’s some unpushed work towards supporting those, but I had to draw the line somewhere!

The work has been pushed in mutter, gtk+ and gnome-shell branches, which I hope will get timely polished and merged this cycle 🙂

A mini-rant on the lack of string slices in C

Porting of librsvg to Rust goes on. Yesterday I started porting the C code that implements SVG's <text> family of elements. I have also been replacing the little parsers in librsvg with Rust code.

And these days, the lack of string slices in C is bothering me a lot.

What if...

It feels like it should be easy to just write something like

typedef struct {
    const char *ptr;
    size_t len;
} StringSlice;

And then a whole family of functions. The starting point, where you slice a whole string:

StringSlice
make_slice_from_string (const char *s)
{
    StringSlice slice;

    assert (s != NULL);

    slice.ptr = s;
    slice.len = strlen (s);
    return slice;
}

But that wouldn't keep track of the lifetime of the original string. Okay, this is C, so you are used to keeping track of that yourself.

Onwards. Substrings?

StringSlice
make_sub_slice(StringSlice slice, size_t start, size_t len)
{
    StringSlice sub;

    assert (len <= slice.len);
    assert (start <= slice.len - len);  /* Not "start + len <= slice.len" or it can overflow. */
                                        /* The subtraction can't underflow because of the previous assert */
    sub.ptr = slice.ptr + start;
    sub.len = len;
    return sub;
}

Then you could write a million wrappers for g_strsplit() and friends, or equivalents to them, to give you slices instead of C strings. But then:

  • You have to keep track of lifetimes yourself.

  • You have to wrap every function that returns a plain "char *"...

  • ... and every function that takes a plain "char *" as an argument, without a length parameter, because...

  • You CANNOT take slice.ptr and pass it to a function that just expects a plain "char *", because your slice does not include a nul terminator (the '\0' byte at the end of a C string). This is what kills the whole plan.

Even if you had a helper library that implements C string slices like that, you would have a mismatch every time you needed to call a C function that expects a conventional C string in the form of a "char *". You need to put a nul terminator somewhere, and if you only have a slice, you need to allocate memory, copy the slice into it, and slap a 0 byte at the end. Then you can pass that to a function that expects a normal C string.
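
Such a helper would look something like this (a sketch; the allocation and copy are exactly the cost the slice scheme was supposed to avoid):

#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy a slice into a freshly-allocated, nul-terminated C string.
 * The caller owns the result and must free() it. */
char *
slice_to_c_string (StringSlice slice)
{
    char *s = malloc (slice.len + 1);

    assert (s != NULL);

    memcpy (s, slice.ptr, slice.len);
    s[slice.len] = '\0';
    return s;
}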

There is hacky C code that needs to pass a substring to another function, so it overwrites the byte after the substring with a 0, passes the substring, and overwrites the byte back. This is horrible, and doesn't work with strings that live in read-only memory. But that's the best that C lets you do.

I'm very happy with string slices in Rust, which work exactly like the StringSlice above, but &str is actually at the language level and everything knows how to handle it.

The glib-rs crate has conversion traits to go from Rust strings or slices into C, and vice versa. We already saw some of those in the blog post about conversions in glib-rs.

Sizes of things

Rust uses usize to specify the size of things; it's an unsigned integer; 32 bits on 32-bit machines, and 64 bits on 64-bit machines; it's like C's size_t.

In the Glib/C world, we have an assortment of types to represent the sizes of things:

  • gsize, the same as size_t. This is an unsigned integer; it's okay.

  • gssize, a signed integer of the same size as gsize. This is okay if used to represent a negative offset, and really funky in Glib functions like g_string_new_len (const char *str, gssize len), where len == -1 means "call strlen(str) for me because I'm too lazy to compute the length myself" (a short example follows this list).

  • int - broken, as in libxml2, but we can't change the API. On 64-bit machines, an int to specify a length means you can't pass objects bigger than 2 GB.

  • long - marginally better than int, since it has a better chance of actually being the same size as size_t, but still funky. Probably okay for negative offsets; problematic for sizes which should really be unsigned.

  • etc.
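
Here is what that len == -1 convention looks like in practice; g_string_new_len() is the real GLib API, and only the wrapper function is made up:

#include <glib.h>

static void
lazy_length_example (void)
{
    /* len == -1 asks GLib to call strlen() on the string for us */
    GString *s = g_string_new_len ("hello, world", -1);

    g_print ("%s is %" G_GSIZE_FORMAT " bytes\n", s->str, s->len);
    g_string_free (s, TRUE);
}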

I'm not sure how old size_t is in the C standard library, but it can't have been there since the beginning of time — otherwise people wouldn't have been using int to specify the sizes of things.

OARS Gets a New Home

The Open Age Ratings Service is a simple website that lets you generate some content rating XML for your upstream AppData file.

In the last few months it’s gone from being hardly used to being used multiple times an hour, probably due to the requirement that applications on Flathub use it as part of the review process. After some complaints, I’ve added a ton more explanation to each question and made the site easier to use. In particular, if you specify that you’re creating metadata for a “non-game” then 80% of the questions are hidden from view.

As part of the relaunch, we now have a proper issue tracker and we’ve already pushed out some minor (API-compatible) enhancements which will become OARS v1.1. These include several cultural sensitivity questions such as:

  • Homosexuality
  • Prostitution
  • Adultery
  • Desecration
  • Slavery
  • Violence towards places of worship

The cultural sensitivity questions are a work in progress. If you have any other ideas or comments, please let me know. Also, before I get internetted-to-death: this is just for advisory purposes, not for filtering. Thanks.

Comparing C, C++ and D performance with a real world project

Some time ago I wrote a blog post comparing the real-world performance of C and C++ by converting pkg-config from C to C++ and measuring the resulting binaries. This time we ported it to D and ran the same tests.

Some caveats

I got comments that the C++ port was not "idiomatic C++". This is a valid argument, but also kind of the point of the test: it aimed to test the behavior of ported code, not greenfield rewrites. This D version is even more unidiomatic, mostly because this is the first non-trivial D project I have ever done. An experienced D developer could probably do many of the things much better than what is there currently. In fact, there are parts of the code I would do differently based solely on the things I learned as the project progressed.

The code is available in this GitHub repo. If you wish to use something other than GDC, you probably need to tweak the compiler flags a bit. It also does not pass the full test suite. Once the code was in good enough condition to pass the Gtk+ test needed to get the results in this post, motivation to keep working on it dropped a fair bit.

The results

The result table is the same as in the original post, but the values for C++ using libstdc++ have been replaced with corresponding measurements from GDC.

                                    GDC   C++ libc++       C

Optimized exe size                364kB        153kB    47kB
minsize exe size                  452kB        141kB    43kB
3rd party dep size                    0            0   1.5MB
compile time                       3.9s         3.3s    0.1s
run time                          0.10s       0.005s  0.004s
lines of code                      3249         3385    3388
memory allocations                  151         8571    5549
Explicit deallocation calls           0            0      79
memory leaks                          7            0   >1000
peak memory consumption          48.8kB         53kB    56kB

Here we see that code size is not D's strong suit. As an extra bit of strangeness, the size-optimized binary took noticeably more space than the regular one. Compile times are also unexpectedly long given that D is generally known for its fast compile times. During development GDC felt really snappy, though, printing error messages on invalid code almost immediately. This would indicate that the slowdown is coming from GDC's optimization and code generation passes.

The code base is the smallest of the three but not by a huge margin. D's execution time is the largest of the three but most of that is probably due to runtime setup costs, which are amplified in a small program like this.

Memory consumption is where things get interesting. D uses a garbage collector by default whereas C and C++ don't, requiring explicit deallocation either manually or with RAII instead. The difference is clear in the number of allocations done by each language. Both C and C++ have allocation counts in the thousands whereas D only does 151 of them. Even more amazingly it manages to beat the competition by using the least amount of memory of any of the tested languages.

Memory graphs

A massif graph for the C++ program looked like this:


This looks like a typical manual memory management graph with steadily increasing memory consumption until the program is finished with its task and shuts down. In comparison D looks like the following:


D's usage of a garbage collector is readily apparent here. It allocates a big chunk up front and keeps using it until the end of the program. In this particular case we see that the original chunk was big enough for the whole workload so it did not need to grow the size of the memory pool. The small jitter in memory consumption is probably due to things such as file IO and work memory needed by the runtime.

The conversion and D as a language

The original blog post mentioned that converting the C program to C++ was straightforward because you could change things in very small steps (including individual items in structs) while keeping the entire test suite running the whole time. The D conversion was the exact opposite.

It started from the C++ code, and once the files were renamed to D, nothing worked until all of the code was proper D. This meant staring at compiler failure messages and fixing issues until they went away (which took several weeks of work, every now and then, as free time presented itself), and then fixing all of the bugs that were introduced by the fixes. A person proficient in D could probably have done the whole thing from scratch in a fraction of the time.

As a language, D is a slightly weird experience. Parts of it are really nice, such as the way it does arrays and dictionaries. Much of it feels like a fast, typed version of Python, but other things are less ergonomic. For example, you can do if(item in con) for dictionaries but not for arrays (presumably due to the potential for O(n) iteration).

Perhaps the biggest stumbling block is the documentation. There are nice beginner tutorials, but intermediate-level documentation seems to be scarce, or possibly it's just hard to bing for. The reference documentation seems to be written by experts for other experts, as tersely as possible; Python's reference documentation is both thorough and accessible in comparison. Similarly, the IDE situation is suboptimal: there are no IDEs in the Ubuntu repositories, and the Eclipse one I used was no longer maintained and fairly buggy (any programming environment that does not have one-button go-to-definition and reliably working ctrl+space is DOA, sorry).

Overall, though, once you get D running it is nice. Do try it out if you haven't done so yet.

December 06, 2017

Everything In Its Right Place

Back in July, I wrote about trying to get Endless OS working on DVDs. To recap: we have published live ISO images of Endless OS for a while, but until recently if you burned one to a DVD and tried to boot it, you’d get the Endless boot-splash, a lot of noise from the DVD drive, and not much else. Definitely no functioning desktop or installer!

I’m happy to say that Endless OS 3.3 boots from a DVD. The problems basically boiled down to long seek times, which are made worse by data not being arranged in any particular order on the disk. Fixing this had the somewhat unexpected benefit of improving boot performance on fixed disks, too. For the gory details, read on!

The initial problem that caused the boot process to hang was that the D-Bus system bus took over a minute to start. Most D-Bus clients assume that any method call will get a reply within 25 seconds, and fail particularly badly if method calls to the bus itself time out. In particular, systemd calls a number of methods on the system bus right after it launches it; if these calls fail, D-Bus service activation will not work. iotop and systemd-analyze plot strongly suggested that dbus-daemon was competing for IO with systemd-udevd, modprobe incantations, etc. Booting other distros’ ISOs, I noticed local-fs.target had a (transitive) dependency on systemd-udev-settle.service, which as the name suggests waits for udev to settle down.1 This gets most hardware discovery out of the way before D-Bus and friends get started; doing the same in our ISOs means D-Bus starts relatively quickly and the boot process can continue.

Even with this change, and many smaller changes to remove obviously-unnecessary work from the boot sequence, DVDs took unacceptably long to reach the first-boot experience. This is essentially due to reading lots of small files which are scattered all over the disk: the laser has to be physically repositioned whenever you need to seek to a different part of the DVD, which is extremely slow. For example, initialising IBus involves running ibus-engine-m17n --xml which reads hundreds of tiny files. They’re all in the same directory, but are not necessarily physically close to one another on the disk. On an otherwise idle system with caches flushed, running this command from a loopback-mounted ISO file on an SSD took 0.82 seconds, which we can assume is basically all squashfs decompression overhead. From a DVD, this command took 40 seconds!

What to do? Our systemd is patched to resurrect systemd-readahead (which was removed upstream some time ago) because many of our target systems have spinning disks, and readahead improves boot performance substantially on those systems. It records which files are accessed during the boot sequence to a pack file; early in the next boot, the pack file is replayed using posix_fadvise(..., POSIX_FADV_WILLNEED); to instruct the kernel that these files will be accessed soon, allowing them to be fetched eagerly, in an order matching the on-disk layout. We include a pack file collected from a representative system in our OS images to have something to work from during the first boot.
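
The replay side boils down to something like this sketch (the pack-file format and its parsing are elided; the helper name is made up for illustration):

#include <fcntl.h>
#include <unistd.h>

/* Hint the kernel that a file recorded in the readahead pack
 * will be needed soon, so it can be fetched eagerly. */
static void
readahead_file (const char *path)
{
    int fd = open (path, O_RDONLY | O_CLOEXEC);

    if (fd < 0)
        return;  /* the file may have disappeared since the pack was recorded */

    /* offset 0 and len 0 mean "the whole file" */
    posix_fadvise (fd, 0, 0, POSIX_FADV_WILLNEED);
    close (fd);
}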

This means we already have a list of all2 files which are accessed during the boot process, so we can arrange them contiguously on the disk. The main stumbling block is that our ISOs (like most distros’) contain an ext4 filesystem image, inside a GPT disk image, inside a squashfs filesystem image, and ext4 does not (to my knowledge!) provide a straightforward way to move certain files to a particular region of the disk. To work around this, we adapt a trick from Fedora’s livecd-tools, and create the ext4 image in two passes. First, we calculate the size of the files listed in the readahead pack file (it’s about 200MB), add a bit for filesystem overhead, create an ext4 image which is only just large enough to hold these files, and copy them in. Then we grow the filesystem image to its final size (around 10GB, uncompressed, for a DVD-sized image) and copy the rest of the filesystem contents. This ensures that the files used during boot are mostly contiguous, near the start of the disk.3

Does this help? Running ibus-engine-m17n --xml on a DVD prepared this way takes 5.6 seconds, an order of magnitude better than the 40 seconds observed on an unordered DVD, and booting the DVD is more than a minute faster than before this change. Hooray!

Due to the way our image build and install process works, the GPT disk image inside the ISO is the same one that gets written to disk when you install Endless OS. So: how will this trick affect the installed system? One potential problem is that mke2fs uses the filesystem size to determine various attributes, like block and inode sizes, and 200MB is small enough to trigger the small profile. So we pass -T default to explicitly select more appropriate parameters for the final filesystem size.4 As far as I can tell, the only impact on installed systems is positive: spinning disks also have high seek latency, and this change cuts 15% off the boot time on a Mission One. Of course, this will gradually decay when the OS is updated, since new files used at boot will not be contiguous, but it’s still nice to have. (In the back of my mind, I’ve always wondered why boot times always get worse across the lifetime of a device; this is the first time I’ve deliberately caused this to be the case.)

The upshot: from Endless OS 3.3 onwards, ISOs boot when written to DVD. However, almost all of our ISOs are larger than 4.7 GB! You can grab the Basic version, which does fit, from the Linux/Mac tab on our website and give it a try. I hope we’ll make more DVD-sized ISOs available in a future release. New installations of Endless OS 3.3 or newer should boot a bit more quickly on rotating hard disks, too. (Running the dual-boot installer for Windows from a DVD doesn’t work yet; for a workaround, copy all the files off the DVD and run them from the hard disk.)

Oh, and the latency simulation trick I described? Since it delays reads, not seeks, it is actually not a good enough simulation when the difference between the two matters, so I did end up burning dozens of DVD+Rs. Accurate simulation of optical drive performance would be a nice option in virtualisation software, if any Boxes or VirtualBox developers are reading!

  1. Fedora’s is via dmraid-activation.service, which may or may not be deliberate; anecdotally, SUSE DVDs deliberately add a dependency for this reason.
  2. or at least the majority of
  3. When I described this technique internally at Endless, Juan Pablo pointed out that DVDs can actually read data faster from the end (outside) of the disk. The outside of the disk has more data per rotation than the centre, and the disk spins at a constant rotation speed. A quick test with dd shows that my drive is twice as fast reading data from the end of the disk compared to the start. It’s harder to put the files at the end of the ext4 image, but we might be able to artificially fragment the squashfs image to put the first few hundred MBs of its contents at the end.
  4. After Endless OS is installed, the filesystem is resized again to fill the free space on disk.

GitLab update: Moving to the next step

Hello community,

I have good news: after a few meetings and discussions with GitLab, we reached an agreement on a way to bring the features we need and to fix our most important blockers in a reasonable time, and in a way that is synced with us. Their team will fix our blockers in the next 1-2 months; most of them will be fixed in the release of December 22nd, and the rest, if everything goes well, in the release of January 22nd. The one item left out of those two months is a richer UI experience for duplicates, which is going to be an ongoing effort.

Apologies for the holdup to those who regularly asked to migrate their projects; I wanted to make sure we were doing things in the right order. I also wanted to gather feedback and comments about the initiative all around, in my effort to have these decisions represent the community. Now I'm confident: the feedback and comments, both inside and outside of our core community, have largely been that we should start our path to fully migrate to GitLab.

So starting today we move forward to the next step: all projects that want to migrate are free to migrate. I’m also coordinating with some core apps for a migration in the upcoming month (e.g. Documents, Photos, Boxes), with other core projects to be migrated once GitLab has the features we need (i.e. Software, Shell, Mutter), and with more platform-ish core projects like gtk+, glib etc. taking their time to ensure their migration is smooth. It all depends on the individual project and its maintainer, of course.

With this change come other news: we did our first batch migration of 8 projects today, for a total of 21 projects that have moved by now. Also, the Engagement team has started using GitLab for better tracking and collaboration with the rest of the community; don’t hesitate to check it out if you want to publicize some feature or if you want to collaborate!

To make the transition easier, I created general documentation on using GitLab for GNOMErs; check it out here (feel free to edit). If you want to help, get in touch with me or check out our task list. If you want your project to be moved, get in touch with me or create an issue like this one.

As always, I’m here for your questions and feedback. You can reach me in this mail thread, on IRC, in private messages, or by filing issues in the GNOME infrastructure project. I just want to ask: please keep in mind that I’m doing this entirely in my free time, so be considerate; I don’t have unlimited energy 🙂

Also thanks to all who have helped so far, especially Phillip, Emmanuele, Alberto, Andrea and the GitLab team.

Hope you enjoy the news and the work we have done.

You can follow the discussion in the desktop-devel-list of GNOME.


Outreachy's finally here!

It’s been a month since the Outreachy Round 15 results were announced. Yay! My proposal for adding a network panel to GNOME Usage was selected. I am glad to be working on something I have personally been longing for. Moreover, I finally have something to cut down on my Xbox addiction and channel that energy into bringing the network panel to life.
It’s going to be really amazing working with my mentor, Felipe Borges, and Usage’s co-maintainer, Petr Stetka, given their experience and expertise.

Here’s a walkthrough of what the project is all about:

Currently, unlike the CLI tools, there are not many Linux-based GUI tools to monitor network statistics on our systems. The network panel in GNOME Usage will make a UI available that enables users to monitor their network in a process-oriented manner.

This panel can be designed to provide not only per-process data transfer rates but also other details: open ports dedicated to some service (which can be of great use for starting or stopping services from a UI) and the list of interfaces. It's not yet finalized what additional data will be available apart from the data transfer rates, but this panel surely has loads of new things in store for users.

Lately, I’ve been discussing with my mentor the approach for the backend API, which we plan to incorporate into libgtop. As the Outreachy round officially started yesterday, I plan to dig into the libgtop codebase and get started with coding, the most amazing part of this internship!

From this week onwards, I will blog regularly with updates on my progress on the project.
There's lots in store for geeky network enthusiasts looking forward to a compelling new look at otherwise conventional network details.

Stay tuned! :)

UX Hackfest London

Last week I took part in the GNOME Shell UX Hackfest in London, along with other designers and developers from GNOME and adjacent communities such as Endless, Pop!, and elementary. We talked about big, fundamental things, like app launching and the lock/login screen, as well as some smaller items, like the first-run experience and legacy window decorations.

I won’t recap everything in detail, because Cassidy from System76 has already done a great job at that. Instead, I want to highlight some of the things I found most interesting.

Spatial model

One of my main interests for this hackfest was to push for better animations and making better use of the spatial dimension in GNOME Shell. If you’ve seen my GUADEC Talk, you know about my grand plan to introduce semantic animations across all of GNOME, and the Shell is obviously no exception. I’m happy to report that we made good progress towards a clear, unified spatial model for GNOME Shell last week.

The things we came up with are very early-stage concepts at this point, but I’m especially excited about the possibility of having the login/unlock screen be part of the same space as the rest of the system, and making the transition between these fluid and semantic.

Tiling

Another utopian dream of mine is a tiling-first desktop. I’ve long felt that overlapping windows are not the best way to do multitasking on screens, and tiling is something I’m very interested in exploring as an alternative. Tiling window managers have long done this, but their UX is usually subpar. However, some text editors like Atom have pretty nice graphical implementations of tiling window managers nowadays, and I feel like this approach might be scalable enough to cover most OS-level use cases as well (perhaps with something like a picture-in-picture mode for certain use cases).

Tiling in the Atom text editor

We touched on this topic at various points during the hackfest, especially in relation to the resizable half-tiling introduced in 3.26, and the coming quarter-tiling. However, our current tech stack and the design of most apps are not well suited to a tiling-first approach, so this is unlikely to happen anytime soon. That said, I want to keep exploring alternatives to free-floating, overlapping windows, and will report on my progress here.

Header bars everywhere

A topic we only briefly touched on, but which I care about a lot, was legacy window decorations (aka title bars). Even though header bars have been around for a while, there are still a lot of apps we all rely on with ugly, space-eating bars at the top (Inkscape, I’m looking at you).

Screenshot of a full-screen Blender window with a title bar
On a 1366x768px display, a 35px title bar takes up close to 5% of the entire screen.

We discussed possible solutions such as conditionally hiding title bars in certain situations, but finally decided that the best course of action is to work with apps upstream to add support for header bars. Firefox and Chromium are currently in the process of implementing this, and we want to encourage other third-party apps to do the same.

Screenshot of Firefox with client-side decorations
Firefox with client-side decorations (in development)

This will be a long and difficult process, but it will result in better apps for everyone, instead of hacky partial solutions. The work on this has just begun, and I’ll blog more about it as this initiative develops.

In summary, I think the hackfest set a clear direction for the future of GNOME Shell, and one that I’m excited to work towards. I’d like to thank the GNOME Foundation for sponsoring my attendance, Allan and Mario for organizing the hackfest, and everyone who attended for being there, and being awesome! Until next time!

GNOME foundation sponsorship badge

UTC and Anywhere on Earth support

A quick post to tell you that we finally added UTC support to Clocks' and the Shell's World Clocks section. And if you're into it, there's also Anywhere on Earth support.

You will need to have git master versions of libgweather (our cities and timezones database), and gnome-clocks. This feature will land in GNOME 3.28.



Many thanks to Giovanni for coming up with an API he was happy with after I attempted a couple of iterations on one. Enjoy!

Update: As expected, a bug crept in. Thanks to Colin Guthrie for spotting the error in the "Anywhere on Earth" timezone. See this section for the fun we have to deal with.

Linux on Supercomputers

Today I gave a presentation about Linux on Supercomputers at the Faculty of Industrial Engineering of UNMSM, for its anniversary. The event was announced on the school's intranet.

I started by presenting the project of Satoshi Sekiguchi from Japan, who is in charge of ABCI, a supercomputer that aims to be number one in the list of supercomputers around the world. The project is expected to be completed in April 2018 and will help simulate earthquakes, with a calculation speed of 130 petaflops. The top 5 of the top500 list:

The way supercomputers are measured with the Linpack tool, and why Linux has been used in the most powerful supercomputers, were also explained. The history of supercomputers and the technology related to them were further topics during the talk. I also emphasized the importance of gathering a multidisciplinary group in a supercomputer project, or in any other parallelized computer architecture.

Thanks to the organizers for inviting me to give this rewarding talk. Linux is important for scientific purposes as well as for education, in Peru and around the world.



Filed under: Education, Events, FEDORA, GNOME, GNU/Linux/Open Source, τεχνολογια :: Technology Tagged: ABCI, Facultad de Ingenieria Industrial, Julita Inca, Julita Inca Chiroque, linux, supercomputer talk, supercomputers, top500, UNMSM

December 05, 2017

2017-12-05 Tuesday.

  • Mail; admin. Lunch with J. Commercial call. Spent much of the day doing the things that are supposed to be quick & get done before you work on larger tasks - but somehow fill the time.
  • Out to the Hopbine in Cambridge in the evening with J. for a lovely Collabora Christmas party, good to catch up with the local part of the team.

summing up 93

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

The future of humanity and technology, by Stephen Fry

Above all, be prepared for the bullshit, as AI is lazily and inaccurately claimed by every advertising agency and app developer. Companies will make nonsensical claims like "our unique and advanced proprietary AI system will monitor and enhance your sleep" or "let our unique AI engine maximize the value of your stock holdings". Yesterday they would have said "our unique and advanced proprietary algorithms" and the day before that they would have said "our unique and advanced proprietary code". But let's face it, they're almost always talking about the most basic software routines. The letters A and I will become degraded and devalued by overuse in every field in which humans work. Coffee machines, light switches, christmas trees will be marketed as AI proficient, AI savvy or AI enabled. But despite this inevitable opportunistic nonsense, reality will bite.

If we thought the Pandora's jar that ruined the utopian dream of the internet contained nasty creatures, just wait till AI has been overrun by the malicious, the greedy, the stupid and the maniacal. We sleepwalked into the internet age and we're now going to sleepwalk into the age of machine intelligence and biological enhancement. How do we make sense of so much futurology screaming in our ears?

Perhaps the most urgent need might seem counterintuitive. While the specialist bodies and institutions I've mentioned are necessary we need surely to redouble our efforts to understand who we humans are before we can begin to grapple with the nature of what machines may or may not be. So the arts and humanities strike me as more important than ever. Because the more machines rise, the more time we will have to be human and fulfill and develop to their uttermost, our true natures.

an outstanding lecture exploring the impact of technology on humanity by looking back at human history in order to understand the present and the future.

We're building a dystopia just to make people click on ads, by Zeynep Tufekci

We use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run Facebook or Google are maliciously and deliberately trying to make the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem.

So what can we do? We need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

no new technology has only a one-sided effect. every technology is always both a burden and a blessing. not either or, but this and that. what bothers me is that we seem to ignore the negative impact of new technologies, justifying this attitude with their positive aspects.

the bullet hole misconception, by daniel g. siegel

If you're never exposed to new ideas and contexts, if you grow up only being shown one way of thinking about the computer and being told that there are no other ways to think about this, you grow up thinking you know what we're doing. We have already fleshed out all the details, improved and optimized everything a computer has to offer. We celebrate alleged innovation and then delegate picking up the broken pieces to society, because it's not our fault – we figured it out already.

We have to tell ourselves that we haven't the faintest idea of what we're doing. We, as a field, haven't the faintest idea of what we're doing. And we have to tell ourselves that everything around us was made up by people that were no smarter than us, so we can change, influence and build things that make a small dent in the universe.

And once we understand that, only then might we be able to do what the early fathers of computing dreamed about: To make humans better – with the help of computers.

the sequel to my previous talk, the lost medium, on bullet holes in world war 2 bombers, page numbering, rotating point of views and how we can escape the present to invent the future.

December 04, 2017

2017-12-04 Monday.

  • Mail chew, consultancy call, synched with Dennis; admin: customer, partner contacts, variously. TDF board call.

Distrinet R&D Bites

The Distrinet Research Group at KU Leuven (where I studied!) recently asked me to speak about “Cloud Native” at one of their R&D Bites sessions. My talk covered Kubernetes, cloud automation and all the cool new things we can do in this brave new cloud-native world.

Annotated slides of the talk can be found here.

Experiences in building cloud-native businesses: the Ticketmatic case



Talking at Cubaconf 2017 in Havana, Cuba

A few weeks ago I gave a talk at Cubaconf 2017 in Havana, Cuba. It’s certainly been an interesting experience, if only because of the Caribbean people, but also because of the food and the conditions the country has been run under for the last decades.

Before entering Cuba, I needed a tourist visa in the form of the tarjeta del turista (the tourist card). It bothered me more than it should have. I thought I’d have to go to the embassy or take a certain airline in order to get hold of one of these cards. It turned out that you can simply buy these tourist cards in the Berlin airport from the TUI counter. Some claimed it was possible to buy one at immigration, but I couldn’t find any tourist visas for sale there, so be warned. Also, I read that you have to prove that you have health insurance, but nobody was interested in mine. That said, I think it’s extremely clever to have one…

Connecting to the Internet is a bit difficult in Cuba. I booked a place which had “WiFi” listed among its features, and I naïvely thought that by booking the place I would also get to connect to the Internet. Turns out that’s not entirely correct. It’s not entirely wrong either, though. In my case, there was an access point in the apartment in which I rented a room. The owner needs to turn it on first and run a weird managing software on his PC. That software then makes the AP connect to other already existing WiFis and bridge connections. That other WiFi, in turn, does not have direct Internet access, but instead somehow goes through the ISP, which requires you to log in. The credentials for logging in can be bought in the ISP’s shops. You can buy credentials worth 1 hour of WiFi connection (note that I’m avoiding the term “Internet” here) for 3 USD or so from the dealer around the corner. You can get your fix cheaper from the legal dealer (i.e. the Internet office…), but that will probably involve waiting in queues. I often noticed people gathering somewhere on the street looking into their phones. That’s where some signal was. When talking to the local hacker community, I found out that they were using a small PCB with an ESP8266 which repeats the official WiFi signal. The hope is that someone will connect to their piece of electronics so that the device is authenticated and also connects the other clients associated with the fake hotspot. Quite clever.

The conference was surprisingly well attended. I reckon it was around a hundred people. I say surprisingly, because from all I could see the event was weirdly organised. I had close to zero communication with the organisers and it was pure luck for me to show up in time. But other people seemed to be in the know, so I guess I fell through the cracks somehow. Incidentally, you could only install the conference’s app from Google, because they didn’t want to offer a plain APK that you can install. I also didn’t really know how long my talks should be and needed to prepare for anything between 15 and 60 minutes.

My first talk was on PrivacyScore.org, a Web scanner for privacy and security issues. As I’ve indicated, the conference was a bit messily organised. The person before me overran into my slot and then there was no cable to hook my laptop up to the projector. We ended up transferring my presentation to a different machine (via pen drives instead of some fancy distributed local p2p network) in order for me to give my presentation. And then I needed to rush through my content, because we were pressed to get to lunch on time. Gnah. But I think a few people were still able to grasp the concepts and make them useful for themselves. My argument was that Web pages load much faster if you don’t have to load as many trackers and other external content. Also, these people don’t get updates in time, so they might rather want to visit Web sites which generally seem to care about their security. I was actually approached by a guy running StreetNet, the local DIY Internet. His idea is to run PrivacyScore against their network to see what is going on and to improve some aspects. Exciting.

My other talk was about GNOME and how I believe it makes for more secure operating systems. Here, my thinking was that many people don’t have expectations of how their system is supposed to look or even work. Being thrown into the current world, in which operating systems spy on you, could prime them to have low expectations of the security of the system. In the GNOME project, however, we believe that users must have confidence in their computing being safe and sound. To that end, Flatpak was a big thing, of course. People were quite interested. Mostly because they know everything about Docker. My trick to hook these people is to claim that Docker does it all wrong. Then they ask pesky questions, which gives me many opportunities to mention that for some applications squashfs is inferior to, say, OSTree, or that you’d probably want to hand out privileges only for a certain time rather than the whole lifetime of an app. I was also able to make people look at EndlessOS, which attempts to solve many problems I think Cubans have.

The first talk of the conference was given by Ismael, and I was actually surprised to meet people I know. He talked about his hackerspace in Almería, I think. It was a bit hard for me to understand, because it was in Spanish. He was followed by Valessio Brito, who talked about putting a price on Open Source Software. He said he started working on Open Source Software at the age of 16. He wondered how you determine how much software should cost. Or your work on Open Source. His answer was that one of the determining factors was simply personal preference for the work to be performed. As an example he said that if you were vegan and didn’t like animals to be killed, you would likely not accept a job doing exactly that. At least, you’d be inclined to demand a higher price for your time. That’s pretty much all he could advise the audience on what to do. But it may also very well be that I did not understand everything, because it was half English and half Spanish and I never noticed quickly enough that he had switched to English.

An interesting talk was given by Christian, titled “Free Data and the Infrastructure of the Commons”. He began by saying that the early textile industry in Lyon, France made use of “software” in 1802, with (hard-wired) wires for the patterns to produce. With the rise of computers, software used to be a common good in the early 1960s, he said. Software was a common good and exchanged freely, he said. The sharing of knowledge about software helped to get the industry going, he said. At the end of the 1970s, software got privatised and had to be licensed from the manufacturer, which made the young hacker movement feel challenged. Eventually, the Free Software movement formed and hijacked copyright law in order to preserve the users’ freedoms, he said. He then compared the GPL with the French revolution and basic human rights, in that the Free Software movement had a radical position and made the users’ rights explicit. Eventually, Free Software became successful, he said, mainly because software was becoming more successful in general. And, according to him, Free Software used to fill a gap that other software created in the 80s. Eventually, the last bastion to overcome was the desktop, he said, but then the Web happened, which changed the landscape. New struggles are software patents, DRM, and the privacy of the “bad services”. He, in my point of view rightfully so, said that all the proliferation of free and open source software has not led to less proprietary software, though. Also, he misses the original FOSS attitude and enthusiasm. Eventually he said that data is the new software. Data was not an issue back when software, or even Free Software, started. He said that 99% of US growth is coming from the data-processing ad companies like Google or Facebook. Why does data have so much value, he asked. He said that actually living a human life is a lot of work. Now you’re doing that labour for Facebook by entering the data of your human life into their system. That, he said, is where the value is coming from. He made the point that Software Freedoms are irrelevant for data. He encouraged the hackers to think of information systems, not software. Although he left me wondering a bit how I could actually do that. All in all, a very inspiring talk. I’m happy that there is a (bad) recording online:

I visited probably the only private company in Cuba which doubles as a hackerspace. It’s interesting to see, because in my world, people go and work (on computer stuff) to make enough money to be free to become a singer, an author, or an artist. In Cuba it seems to be the other way around: people work in order to become computer professionals. My feeling is that many Cubans are quite artsy. There is music and dancing everywhere. Maybe it’s just the prospect of a rich life, though. The average Cuban seems to make about 30 USD a month. That’s surprising given that an hour of bad WiFi already costs 1 USD. A beer costs as much. I was told that everybody has their way to get hold of some more money. Very interesting indeed. Anyway, the people in the hackerspace seemed happy to offer their work across the globe. Their customers can be very happy, because these Cubans are a dedicated bunch of people. And they have competitive prices. Even if these specialists made a hundred times as much as the average Cuban, they’d still be cheap in the so-called developed world.

After arriving back from Cuba, I went to the Rust Hackfest in Berlin. It was hosted by the nice Kinvolk folks and I enjoyed meeting all the hackers who care about making use of a safer language. I could continue my work on rustifying pixbuf loaders, which will hopefully make it much harder to exploit them. Funnily enough, I didn’t manage to write a single line of Rust during the hackfest. But I expected that, because we need to get the code ready to be transformed to Rust first. More precisely, restructure it a bit so that it has explicit error codes instead of magic numbers. And because we’re parsing stuff, there are many magic numbers. While digging through the code, other bugs popped up as well, which we needed to eliminate as side challenges. I’m very much looking forward to writing an actual line of Rust soon! ;-)
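To illustrate the kind of cleanup this enables, here is a hypothetical sketch (my own, not the actual gdk-pixbuf code) of what a loader’s signature check might look like with explicit error types instead of magic numbers, once ported to Rust:

    // Hypothetical error type for a pixbuf loader; each failure mode is
    // explicit instead of being a magic integer return value.
    #[derive(Debug)]
    enum LoaderError {
        UnexpectedEof,
        BadMagicBytes,
    }

    fn check_signature(header: &[u8]) -> Result<(), LoaderError> {
        match header {
            // e.g. a BMP file must start with the two bytes "BM"
            [b'B', b'M', ..] => Ok(()),
            _ if header.len() < 2 => Err(LoaderError::UnexpectedEof),
            _ => Err(LoaderError::BadMagicBytes),
        }
    }

    fn main() {
        assert!(check_signature(b"BM....").is_ok());
        assert!(matches!(check_signature(b"PNG"), Err(LoaderError::BadMagicBytes)));
    }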

When faster WiFi means unusable connection

I recently moved home and got FTTC with PlusNet. The speed is good when measured (almost the advertised 80Mb/20Mb), but the connection was unusable due to TCP connections hanging every few minutes (very annoying with ssh, though screen helps; worse when using a website for a payment and needing to retry, trusting you will only be charged once).

Yesterday I decided to sit down and investigate. The router has logs, which were quite helpful: a lot of entries like OUT: BLOCK [9] Packet invalid in connection (Invalid tcp flags for current tcp state: TCP [192.168.1.73]:54426->[46.19.168.229]:443 on ppp3)

This followed the laptop being seen moving from interface ath10 to interface ath00, back and forth quite often.

Looking at the logs on one of the laptops, those switches looked like wlan0: disconnect from AP b8:d9:4d:41:76:fb for new auth to b8:d9:4d:41:76:fa

What happened is that the default settings on the PlusNet router create “identical” 2.4GHz and 5GHz networks, so devices believe they are the same network and switch between APs. But they are actually different networks, and the connection tracking gets reset each time such a switch happens.

Disabling the 5GHz network made my connection usable; I could probably just change its settings to make it a separate network instead.

Download and install operating systems directly in GNOME Boxes

If you are closely following the development of GNOME Boxes, you probably have read Debarshi’s announcement of this new feature that allows you to download and install Red Hat Enterprise Linux gratis directly from Boxes.

This time we are enabling you to install many other operating system virtual machines right from inside Boxes. A moving picture is better than words, so watch the preview below:

The list is populated by libosinfo, which gets updated shortly after every new OS release. If you don’t see your favorite distro there, please send us a patch.

This feature will land in GNOME 3.28.
Happy virtualization! :-)

December 03, 2017

Talking at GI Tracking Workshop in Darmstadt, Germany

Uh, I almost forgot about blogging about having talked at the GI Tracking Workshop in Darmstadt, Germany. The GI is, literally translated, the “informatics society” and sort of a union of academics in the field of computer science (oh boy, I’ll probably get beaten up for that description). And within that body several working groups exist. And one of these groups working on privacy organised this workshop about tracking on the Web.

I consider “workshop” a bit of a misnomer for this event, because it was mainly talks with a panel at the end. I was an invited panellist representing the Free Software movement, contrasting with a guy from affili.net, someone from eTracker.com, a lady from eyeo (the AdBlock Plus people), and professors representing academia. During the panel discussion I tried to focus on Free Software being the only tool that enables the user to exercise control over what data is being sent, in order to control tracking. Nobody really disagreed, which made the discussion a bit boring for me. Maybe I should have tried to find a more controversial argument to make people say more interesting things. Then again, it’s probably more the job of the moderator to make the participants discuss heatedly. Anyway, we had a nice hour or so of talking about the future of tracking, not only on the Web, but in our lives.

One of the speakers was Lars Konzelmann, who works at Saxony’s data protection office. He talked about the legislative nature of data protection issues. The GDPR, although almost two years old, is a thing now. Several types of EU-wide regulations exist, he said. One is the “Regulation” and the other is the “Directive”. The GDPR has been designed as a Regulation, because the EU wanted to keep a minimum level of quality across the EU and prevent countries from implementing their own legislation with rather lax rules, he said. The GDPR favours “privacy by design”, but that has issues, he said, as the usability aspects are severe. Because so far, companies can get the user’s “informed consent” in order to do pretty much anything they want, although its usefulness is limited, he said, because people generally don’t understand what they are consenting to. But with the GDPR, companies should implement privacy by design, which will probably obsolete the option for users to simply click “agree”, he said. So things will somehow get harder to agree to. That, in turn, may cause people to be unhappy and feel that they are being patronised and told what they should do, rather than expressing their free will with a simple click of a button.

Next up was a guy with their solution against tracking on the Web. They sell a little box which you use to surf the Web, similar to what Pi-hole provides. It’s a Raspberry Pi with a modified (and likely GPL-infringing) version of Raspbian which you plug into your network and use as a gateway. I assume that the device then filters your network traffic to exclude known bad trackers. Anyway, he said that ads are only the tip of the iceberg. Below that is your more private, intimate sphere, which is being pried on by real-time bidding for your screen real estate by advertising companies. Why would that be a problem, you ask. He said that companies apply dynamic pricing depending on your profile and that you might well be interested in knowing that you are being treated worse than other people. Other examples include a worse credit or health rating depending on where you browse, or your bank knowing that you’re a gambler. In fact, micro-targeting allows for building up a political profile of yours or making identity theft much easier. He then went on to explain how Web tracking actually works. He mentioned third-party cookies, “social” plugins (think: Like button), advertisements, content providers like Google Maps, Twitter, YouTube, these kinds of things, as a means to track you. And that it’s possible to do non-invasive customer recognition which does not involve writing anything to the user’s disk, e.g. no cookies. In fact, such fingerprinting of the user’s browser is the new thing, he said. He probably knows, because he is also in the business of providing a tracker. That’s probably how he knows that “data management providers” (DMPs) merge data sets from different trackers to get a more complete picture of the entity behind a tracking code. DMPs enrich their profiles by trading them with other DMPs. In order to match IDs, the tracker sends some code that makes the user’s browser merge the tracking IDs, e.g. makes it send all IDs to all the trackers. He wasn’t really advertising his product, but during Q&A he was asked what we can do against that tracking behaviour, and then he was forced to praise his product…

Eyeo’s legal counsel Judith Nink then talked about the juristic aspects of blocking advertisements. She explained why people use adblockers in the first place. I have commented on that before, claiming that using an adblocker improves your security. She did indeed mention privacy and security as reasons for people to run adblockers, and explicitly mentioned malvertising. She said that the Jerusalem Post had ads which were actually malware. That in turn caused some stir in Germany, because it was framed as an attack on the German parliament… But other reasons for running an adblocker were data consumption and the speed of loading Web pages, she said. And, of course, the simple annoyance of certain advertisements. She presented some studies which showed that the typical Web site has 50+ or so trackers and that the cost of downloading advertising was significant compared to downloading the actual content. She then showed a statement by Edward Snowden saying that using an ad-blocker is not only a right but a duty.

Everybody should be running adblock software, if only from a safety perspective

Browser-based ad blockers need external filter lists, she said. The discussion then turned towards the legality of blocking ads. I wasn’t aware that this is a thing that law people discuss. How can it possibly not be legal to control what my client does when being fed a bunch of HTML and JavaScript..? Turns out that it’s more about the entity offering these lists and a program to evaluate them *shrug*. Anyway, ad blockers use either blocking or hiding of elements, she said, where “blocking” is to stop the browser from issuing the request in the first place, while “hiding” is to issue the request but then hide the DOM element. Yeah, law people make exactly this distinction. She then turned to the question of how legal either of these behaviours is. To the non-German folks that question may seem silly. And I tend to agree. But apparently, you cannot simply distribute software which modifies a browser to either block requests or hide DOM elements without getting sued by publishers. Those, she said, argue that gratis content can only be delivered along with ads and that it’s part of the deal with the customer: that they transfer ads along with the actual content. If you think that this is an insane argument, especially in light of the customer not having had the ability to review that deal before loading the page, you’re in good company. She argued that the simple act of loading a page cannot be a statement of consent, let alone a deal of some sort. In order to make it a deal, the publishers would have to show their terms of service first, before showing anything, she said. Anyway, eyeo’s business is to provide those filter lists and a browser plugin to make use of them. If you pay them, however, they think twice before blocking your content and make exceptions. That feels a bit mafiaesque, and so they were sued for “aggressive geschäftliche Handlung”, an “aggressive commercial behaviour”. I found the history of cases interesting, but I’ll spare the reader the details here. You can follow that case, and others, by looking at OLG Koeln 6U149/15.

Next up was Dominik Herrmann presenting PrivacyScore.org, a Web portal for scanning Web sites for security and privacy issues. It is similar to other scanners, he said, but the focus of PrivacyScore is publicity. By making results public, he hopes that a race to the top will occur. Web site operators might feel more inclined to implement certain privacy or security mechanisms if they know that they are the only Web site which doesn’t protect the privacy of their users. Similarly, users might opt to use a Web site providing a more privacy-friendly service. With the public portal you can create lists in order to create public benchmarks. I took the liberty of creating a list of Free Desktop environments. At the time of creation, GNOME fell behind many others, because the mail server did not implement TLS 1.2. I hope that is being taken as a motivational factor to make things more secure.

December 01, 2017

Product review: WASD V2 Keyboard

A new blog on Planet GNOME often means an old necropost for us residents of the future to admire.

I, too, bought a custom keyboard from WASD. It is quite nice to be able to customize the printing using an SVG file. Yes, my keyboard has GNOME feet on the super keys, and a Dvorak layout, and, oh yes, Cantarell font. Yes, Cantarell was silly, and yes, it means bad kerning, but it is kind of cool to know I’m probably the only person on the planet to have a Cantarell keyboard.

It was nice for a little under one year. Then I noticed that the UV printing on some of the keys was beginning to wear off. WASD lets you purchase individual keycaps at a reasonable price, and I availed myself of that option for a couple keys that needed it, and then a couple more. But now some of the replacement keycaps need to be replaced, and I’ve owned the keyboard for just over a year and a half. It only makes sense to purchase a product this expensive if it’s going to last.

I discovered that MAX Keyboard offers custom keyboard printing using SVG files, and their keycaps are compatible with WASD. I guess it’s a clone of WASD’s service, because I’ve never heard of MAX before, but I don’t actually know which came first. Anyway, you can buy just the keycaps without the keyboard, for a reasonable price. But they apparently use a UV printing process, which is what WASD does, so I have no clue if MAX will hold up any better or not. I decided not to purchase it. (At least, not now. Who knows what silly things I might do in the future.) Instead, I purchased a blank PBT keycap set from them. It arrived yesterday, and it seems nice. It’s a slightly different shade of black than WASD’s keycaps, but that’s OK. Hopefully these will hold up better, and I won’t need to replace the entire keyboard. And hopefully I don’t find I need to look at the keys to find special characters or irregularly-used functions like PrintScreen and media keys. We’ll see.

GNOME and Rust

I’ve been keeping an eye on Rust for a while now, so when I read Alberto’s statement of support for more Rust use in GNOME, I couldn’t resist piling on…

From the perspective of someone who’s quite used to C, it does indeed seem to tick all the boxes. High performance, suitability for low-level tasks and C ABI compatibility tend to be sticking points with new languages — and Rust kills it in those departments. Anyone who needs further convincing should read up on Raph Levien’s font renderer. The usual caveat about details vis-a-vis the Devil applies, but the general idea looks exactly right. Rust’s expressiveness and lack of baggage mean it could even outperform C for non-trivial code, on top of all the other advantages.
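As a minimal illustration of the C ABI point (my own sketch, not from any of the linked material): a Rust function can be exported with the C calling convention so existing C code, or anything that can consume a C library, can call it directly:

    // Exported with an unmangled name and the C calling convention.
    // Callable from C as: uint32_t sum_u32(uint32_t a, uint32_t b);
    #[no_mangle]
    pub extern "C" fn sum_u32(a: u32, b: u32) -> u32 {
        a.wrapping_add(b)
    }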

There are risks too, of course. I’d worry about adoption, growth and the availability of bindings/libraries/other features, like a good optional GC for high-level apps (there is at least one in the works, but it doesn’t seem to be quite ready for prime time yet). Rust is on an upward trajectory, and there don’t seem to be many tasks for which it’s eminently unsuitable, so in theory it could have a wide reach: operating systems, platform libraries, both client- and server-side applications, games and so on. However, it doesn’t appear to be the de facto language in many contexts yet. Consider the statement “If I learn language X, I will then be able to work on Y.” Substitute for X: Java, Javascript, Python, ObjC, C, C++, C# or even Visual Basic — and Y becomes obvious. How does Rust fare?

That is, of course, a very conservative argument, while in my mind the GNOME project represents, for better or worse, and C use notwithstanding, a more radical F/OSS philosophy. Its founding was essentially formulated as a revolt against the Qt license (and a limited choice of programming languages!), it was an early adopter of Git for version control, and it’s a driver for Wayland and Flatpak now. For what it’s worth, speaking as mostly a downstream integrator, I wouldn’t mind it if GNOME embraced its DNA yet again and fully opened the door to Rust.

New toy

(Photo: the WASD keyboard)

I got a new toy. It’s a WASD keyboard with Cherry MX Clear switches. The picture doesn’t do it justice; maybe I should’ve gotten a new camera instead… I guess it’ll have to wait.

Mechanical-switch keyboards are pricey, but since I spend more than 2000 hours a year in front of a keyboard, it’s not a bad investment. Or so I’m telling myself. Anyway, it’s a big step up from the rubber dome one I’ve been using for the past couple of years. The key travel is longer, and it’s nice to have proper tactile feedback. Since the Clear switches have stiff springs, I can also rest my fingers on the keys when daydreaming (er, thinking). It has anti-slip pads underneath, so it stays put, and it doesn’t bounce or rattle at all.

Until our last move, I clung to an older, clicky keyboard (I don’t remember which brand — I thought it was Key Tronic, but I’ve a hard time finding any clicky keyboards of theirs at the moment), worried that the future held rubber dome and chiclets only — but today, there are lots of options if you look around. I guess we have mostly gamers and aficionados to thank for that. So thank you, gamers and aficionados.

I did plenty of research beforehand, but WASD finally drew me in with this little detail: they have some very well thought-out editable layout templates for Sodipodi (er, Inkscape). Good taste in software there.

November 30, 2017

Large number of XML Nodes and GXml performance

GXml performance has been improved since its initial releases.

The first implementation parsed everything into a libxml2 tree and then into a set of GObject classes, in order to provide a GObject serialization framework.

Over time, the Gom set of classes was added, avoiding the libxml2 tree and improving both memory use and serialization performance.

GXml has been used in many applications: for example, parsing Electrical Substation Configuration Language files by librescl.org, and handling the Mexican Tax Authority’s XML invoice format, among others.

QRSVG Performance

For my private projects, I need to create QR codes of size 61×61 = 3721 squares. This means at least 2700 XML nodes. This is a large number of nodes, and because QRSVG depends on GSVG, which depends on GXml, all of them depend on GXml’s implementation for performance.

Initial measurements suggest that, unsurprisingly, using a simple array of objects takes up to 0.5 seconds just to add a node, as the maximum time measured.

So GXml’s implementation should be improved for large numbers of nodes. It currently uses Gee.ArrayList, which is clean and easy to wrap in a node list implementing the W3C DOM4 API. But now I’m considering using Gee.TreeMap, because it is designed for large collections of objects; from its documentation:

This implementation is especially well designed for large quantity of data. The (balanced) tree implementation insure that the set and get methods are in logarithmic complexity.

The problem is its Map interface: I would need to implement a Gee.BidirList interface over it, in order to fit the W3C DOM4 API and still get the performance boost.
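For illustration only (GXml is Vala, so this is merely the analogous structure in Rust’s standard library): a balanced tree map gives logarithmic insert and lookup for large keyed collections, which is the property the Gee documentation describes; the price is that it is not a plain indexable list, hence the wrapping work described above.

    use std::collections::BTreeMap;

    fn main() {
        // Keyed, ordered storage with O(log n) set and get, analogous to
        // what Gee.TreeMap advertises for large collections.
        let mut nodes: BTreeMap<usize, String> = BTreeMap::new();
        for i in 0..3721 {
            nodes.insert(i, format!("rect-{}", i));
        }
        assert_eq!(nodes.get(&1234).map(String::as_str), Some("rect-1234"));
    }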

Let’s see how this evolves. Any suggestions?

November 29, 2017

So we are working in new conferences for 2018

Well, now we can say that here (Almería, Spain) we know something about how to run technical conferences and meetings, especially opensource/freesoftware ones. In 2016 and 2017 we co-organized:

And we are really grateful to those people and communities who trusted us to be their host for some days. The accumulated experience spurred us on to new challenges. So here I just want to share the conferences I’m currently involved in for 2018:

  • SuperSEC 2018, a national (Spain) conference on secure software development, in the orbit of OWASP, to be held next May. And we are almost ready to open the CFP!

  • GUADEC 2018, the European conference for the GNOME et al. community, from the 6th to the 11th of July.

But we want mooar, so we are currently bidding to host Flock 2018, the annual meeting of the Fedora Project community.

Our goal is to host both sister conferences one right after the other, so a lot of people could attend both and save good money.

So, if you are a contributor to any of those communities, or just an opensource enthusiast, consider this extraordinary opportunity to match your summer holidays with a nice place for tourism and the opensource world’s meeting point of July 2018!

Wish us luck :-)

PS: Just updated the definitive dates for GUADEC 2018.

Fedora Media Writer Available in Flathub

Fedora Media Writer is the tool to create live USB flash drives with Fedora. You can also use dd or GNOME Disks, but Fedora Media Writer is the only graphical tool that is tested with Fedora ISOs (please don’t use UNetbootin and such because they really cause faulty Fedora installations).

Fedora Media Writer is available as an RPM package in the Fedora repositories, and we provide installation files for Windows and macOS. Those are actually offered as the default download options for Windows and macOS users at getfedora.org. We’ve provided users of other Linux distributions with a flatpak, but it was hosted in its own repo. Recently we managed to get the flatpak onto Flathub, which many users have already enabled, so now it’s even easier and faster to install.

(Screenshot from 2017-11-29 13-12-31)


Dialog Tunnelling

So I’m finally resurrecting this blog to life after a long time.

I’m simply going to talk about what I’ve been currently working on in Collabora Online or LibreOffice Online, as part of my job at Collabora.

In our quest to bring more features to our users editing documents in the browser, we are attacking something that contains the majority of the features in LibreOffice – the dialogs. One of the complaints that power users make about Online is that it lacks advanced features: they cannot add coloured borders to their paragraphs, manage tracked changes/comments, correct the spelling and grammar in the document, etc. The question before us is: how do we bring these functionalities to the cloud, at your disposal in your browser tab?

We really don’t want to write another million lines of code in JavaScript to make them available in your browser, and then deal with a separate set of bugs for a long time to come.

So we decided to come up with a plan to just tunnel all the hard work that developers have done over the past couple of decades: build the appropriate infrastructure to open the dialog in headless mode, paint it as a bitmap in the backend, and tunnel the image to you in the browser. Then we add life to it by tunnelling your mouse/key events as well, which invalidate and update the image you are seeing in the browser. Don’t worry; we are not sending the whole dialog image back to your browser every time. Only the part of the dialog that needs updating is sent back to the browser, saving us precious time and network bandwidth and improving your UX.
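To make that concrete, here is an illustrative sketch (purely hypothetical; not the actual LibreOffice Online wire format) of the kind of message a dirty-rectangle scheme like this might send:

    // Hypothetical update message: only the invalidated region of the
    // dialog is re-rendered and shipped to the browser, never the full
    // bitmap.
    #[allow(dead_code)]
    struct DialogRegionUpdate {
        dialog_id: u32,
        x: u32,          // top-left corner of the invalidated region
        y: u32,
        width: u32,      // size of the invalidated region
        height: u32,
        pixels: Vec<u8>, // re-rendered image data for just that region
    }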

The current state of the project looks really promising. Not just the modeless dialogs: we are able to tunnel the modal ones as well, which is not something we had expected earlier.

Since text is boring, here’s a preview that shows dialog tunnelling in action in our test tool, GtkTiledViewer. The integration with Online is ready too and undergoing some final polishing. But it’s not something you’ll have to wait for too long; we are polishing a big refactor of LibreOffice core master to install the dialog infrastructure needed for the integration. Now you will be able to do pretty much all the things in Online (and in CODE version 3.0, soon to be released) that you’ve always wanted to do.

Here are the slides from the talk I delivered on the same topic in our annual LibreOffice Conference in Rome this year.

November 27, 2017

London UX Hackfest

London UX Hackfest from jimmac on Vimeo.

Thanks to the GNOME Foundation, a handful of designers and developers got together last week in London to refocus on the core element of the GNOME experience, the shell. Allan and Cassidy have already summed up everything in their well written blog posts, so I’d like to point to some pretty pictures and the video above.

Stay tuned for some higher fidelity proposals in the areas of app switching & launching and the lock/login experience.

November 26, 2017

GStreamer Rust bindings release 0.9

About 3 months, a GStreamer Conference and two bug-fix releases have passed since the GStreamer Rust bindings release 0.8.0. Today version 0.9.0 (and 0.9.1, with a small bugfix to export some forgotten types) was released, with a couple of API improvements and lots of additions and cleanups. This new version depends on the new set of releases of the gtk-rs crates (glib/etc).

The full changelog can be found here, but below is a short overview of the (in my opinion) most interesting changes.

Tutorials

The basic tutorials 1 to 8 were ported from C to Rust by various contributors. The C versions and the corresponding explanatory text can be found here, and it should be relatively easy to follow the text together with the Rust code.

This should make learning to use GStreamer from Rust much easier, in combination with the few example applications that exist in the repository.

Type-safety Improvements

Previously, querying the current playback position from a pipeline (and various other analogous things) gave you a plain 64-bit integer, just like in C. However, in Rust we can easily do better.

The main problem with just getting an integer was that there are “special” values that mean “no value known”, specifically GST_CLOCK_TIME_NONE for values in time. In C this often causes bugs, with code ignoring this special case and then doing calculations with such a value, resulting in completely wrong numbers. In the Rust bindings these are now expressed as an Option<_>, so that the special case has to be handled separately. In combination with that, for timed values there is a new type called ClockTime that implements all the arithmetic traits and others, so you can still do normal arithmetic operations on the values, while the implementation of those operations takes care of GST_CLOCK_TIME_NONE. Previously it was also easy to get a value in bytes and add it to a value in time; whenever multiple formats are possible, a new type called FormatValue is now used that combines the value itself with its format to prevent such mistakes.
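In practice that looks roughly like this (a sketch written against a later gstreamer-rs API for clarity; the exact signatures in the 0.9 release differ slightly):

    use gstreamer as gst;
    use gst::prelude::*;

    fn print_progress(pipeline: &gst::Pipeline) {
        // In C these are plain u64s with GST_CLOCK_TIME_NONE as a magic
        // "unknown" value; here the unknown case is an explicit None.
        let position = pipeline.query_position::<gst::ClockTime>();
        let duration = pipeline.query_duration::<gst::ClockTime>();

        // Arithmetic on ClockTime is only possible once the None case
        // has been handled.
        if let (Some(pos), Some(dur)) = (position, duration) {
            println!("{} / {} ({} remaining)", pos, dur, dur - pos);
        } else {
            println!("position/duration not known yet");
        }
    }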

Error Handling

Various operations in GStreamer can fail with a custom enum type: linking pads (PadLinkReturn), pushing a buffer (FlowReturn), changing an element’s state (StateChangeReturn). Previously, handling this was not as convenient as the usual Result-based error handling in Rust. With this release, all these types provide a function into_result() that converts them into a Result, splitting the enum into its good and bad cases, e.g. FlowSuccess and FlowError. Based on this, the usual Rust error handling is possible, including usage of the ?-operator. Once the Try trait is stable, it will also be possible to use the ?-operator directly on FlowReturn and the others, before conversion into a Result.
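In code, this could look roughly as follows (a sketch using the API names described above as of this release; later versions changed some of them again):

    use gstreamer as gst;
    use gst::prelude::*;

    fn link_and_push(
        src_pad: &gst::Pad,
        sink_pad: &gst::Pad,
        buffer: gst::Buffer,
    ) -> Result<(), String> {
        // PadLinkReturn -> Result<PadLinkSuccess, PadLinkError>
        src_pad
            .link(sink_pad)
            .into_result()
            .map_err(|err| format!("pad link failed: {:?}", err))?;

        // FlowReturn -> Result<FlowSuccess, FlowError>
        src_pad
            .push(buffer)
            .into_result()
            .map_err(|err| format!("pushing buffer failed: {:?}", err))?;

        Ok(())
    }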

All these enums are also marked as #[must_use] now, which causes a compiler warning if code is not specifically handling them (which could mean to explicitly ignore them), making it even harder to ignore errors caused by any failures of such operations.

In addition, all the examples and tutorials now make use of the above, and many examples were ported to the failure crate and now implement proper error handling in all situations; for example, the decodebin example.

Various New API

Apart from all of the above, a lot of new API was added. Both for writing GStreamer-based applications, and making that easier, as well as for writing GStreamer plugins in Rust. For the latter, the gst-plugin-rs repository with various crates (and plugins) was ported to the GStreamer bindings and completely rewritten, but more on that in another blog post in the next couple of days once the gst-plugin crate is released and published on crates.io.

Builder 3.27 Progress

We are a couple of months into Builder’s 3.28 development. We have fewer big-ticket features scheduled this cycle compared to 3.26; instead there is a multitude of smaller features and details. Let’s take a look at some of what has been done already.

Flatpak Improvements

Early in the cycle we merged a feature upstream in flatpak-builder to emit escape sequences to set the terminal title as we progress through the build pipeline. Users of jhbuild are probably familiar with this type of thing as it does something similar. We can now consume this information from Builder to show more detailed progress about your Flatpak as it builds.

With yesterday’s Flatpak 0.10.1 release, we got a feature we needed: access to /usr/include of the host from a Flatpak. This means Builder can more easily develop against your host platform when run from Flatpak. It’s not a common request, but one we can support now.

Also yesterday was the release of flatpak-builder 0.10.5. It has a new feature allowing us to specify --state-dir. If we detect a new enough flatpak-builder, we’ll use this to share dependency checkouts among various projects. When combined with shallow clones, I expect this to help reduce downloads for people who contribute to multiple projects.

Pseudo-Terminal Integration

We now depend on libvte directly from libide. This allows us to use a pseudo-terminal (PTY) in the build pipeline and show a terminal for the build output. This is both faster than our previous GtkTextView implementation and also adds support for colors and fixed scroll-back. If you have something other than a subprocess generating build logs, we merge those into the terminal too!

Simplified Newcomers

As seen previously, we have a simpler process for newcomers wanting to explore an existing GNOME project. Just click on the icon and hit run!

Improved To-Do

By increasing our guarantees of thread-safety, we were able to speed up our scanning for todo items. We also fixed a few bugs along the way.

Improved Editor Search

Our editor search is some of the trickiest code in Builder. This is because we have to try to emulate various systems such as Vim. We refactored quite a bit of it to make it more resilient and handle all those tricky corner cases better.

More Code Indexers

Patrick contributed a GJS code indexer which can make it easier to jump around to classes and functions in your GJS-based project. I did the same for Vala. If you’re part of either of these language communities, we could really use your help improving our support for them.

Three-Finger-Swipe

As seen previously, the editor gained three-finger-swipe support to move editor panels left or right. You need Wayland for this feature, since proper three-finger-swipe support requires it in the lower layers of the stack.

Improved Meson and CMake Integration

Both the Meson and CMake build system plugins have been ported to C to get some type safety on our side. The architecture was also changed a bit to make it easier to extract compiler flags without needlessly advancing the build pipeline.

Unit Testing

The basics of unit testing have landed. We still have lots to do here before 3.28 like running under gdb and getting failure logs.

Find-Other-File Improvements

The find-other-file plugin was improved to support using the global search to list alternate files. This can be handy when switching between source, headers, and ui files.

Compile Commands Database

Builder now has a helper for compile_commands.json-style files, made popular by Clang. This can simplify the implementation of CFLAGS extraction for build systems that support it.
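For reference, a compile_commands.json file is just a JSON array recording how each source file is compiled; a minimal, illustrative entry (paths made up) looks like this:

    [
      {
        "directory": "/home/user/myproject/build",
        "command": "cc -I../include -DNDEBUG -o src/main.o -c ../src/main.c",
        "file": "../src/main.c"
      }
    ]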

Build Target Providers

Creating an IDE that natively supports such a wide variety of project types and packaging technologies can be quite a challenge. There is often no clear abstraction for where a piece of information should be extracted from. For example, does the build system know about installed build targets and how to run them? Is it the packaging technology, or a .desktop file? How about when containers are used?

This harsh reality means that sometimes we need to be very specific about our extension points. The new build target provider allows various system components to give us information about build artifacts. This has made it easier to run applications even when the build system has limited support. Long story short, if you use flatpak, things should mostly Just Work™, even when you use less well supported build systems like CMake.

Happy hacking!

November 22, 2017

GNOME Shell UX Hackfest

GNOME Shell has made significant improvements over the years since GNOME 3.0 was first released. This has included overhauling notifications, introducing a unified system status area, and refining how window selection and application launching works. Additionally, a huge amount of work has gone into polishing the experience through many many small improvements.

At the same time, some of the core elements of the GNOME Shell experience haven’t significantly changed for some time, and I have started to feel that a round of improvements is due, both to address long-standing issues and to ensure that the shell continues to develop in ways that our users value.

GNOME is also in the fantastic position of having new partners who we are developing working relationships with, particularly around the shell. Nowadays there are a variety of derivatives who are using the shell, including Endless, Ubuntu and Pop!_OS, and there’s a real desire to collaborate over the future of the shell and share in any benefits that might result.

Last week, these twin desires coalesced as a user experience hackfest which aimed to improve the design of the shell.

The hackfest was deliberately small, in order to provide a conducive environment for design work. Participants included Robin Tafel, Cosimo Cecchi, Jakub Steiner, Tobias Bernard, Florian Müllner, Cassidy James Blaede, Mario Sanchez Prada and myself (Nick Richards also called in). These individuals had affiliations with Endless, Red Hat, System76 and elementary OS, and they included both experienced GNOME designers and fresh perspectives.

While there wasn’t anyone from Ubuntu at the hackfest, we are in contact and I’ll be working to ensure that they are included in the process that the hackfest has initiated.

Overall, I was extremely happy with the event, and we came away with some exciting plans, which we think will result in major improvements to the GNOME Shell user experience.

Turning the ideas we’ve generated into viable designs will be a lot of work, and I’ll provide more information once some of the details have been filled in. In the meantime, Cassidy has written up a detailed account of the hackfest, which includes some more specifics for those who are especially interested.

I’d like to thank the GNOME Foundation for sponsoring my attendance at the hackfest, as well as Endless and Red Hat for providing the space for the event. I’d also like to offer my heartfelt gratitude to all the attendees, every one of whom made valuable and talented contributions over the four days.

Photo courtesy of Jakub Steiner (CC-BY-SA 2.0).

November 21, 2017

Mono's TLS 1.2 Update

Just wanted to close the chapter on Mono's TLS 1.2 support which I blogged about more than a year ago.

At the time, I shared the plans that we had for upgrading the support for TLS 1.2.

We released that code in Mono 4.8.0 in February of 2017 which used the BoringSSL stack on Linux and Apple's TLS stack on Xamarin.{Mac,iOS,tvOS,watchOS}.

In Mono 5.0.0 we extracted the TLS support from the Xamarin codebase into the general Mono codebase and it became available as part of the Mono.framework distribution as well as becoming the default.
