GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

November 19, 2017

Maps Towards 3.28

So, it's been a while since my last blog post.

Some work has been done since the release of 3.26 in September. On the visual side we have adapted the routing sidebar to use a similar styling as is used in Files (Nautilus) and the GTK+ filechooser.

I also took the opportunity to improve the looks of the transit route labels when using a dark theme variant:

Now the labels get an outline in the lighter text color when the background is dark (the opposite of when using the regular light theme variant).

Another small improvement is that we now support the common --version command line option to… well, show the version number :-)
Also, the copyright date in the About dialog has been updated (since it hadn't been since 2013…).

Under the hood, and not immediately visible, I have cleaned out some cruft that generated a lot of run-time warnings about superfluous function arguments (either the JavaScript GIR bindings expected extra error “out parameters” before, or GJS has just gotten more verbose about these things).

I also started playing with some newer ES6 features, namely arrow functions (avoiding declaring an inline function using the “function” keyword and binding this to the scope, and instead doing something like () => { do_stuff_here; } when declaring signal callbacks and such), and ES6 classes (along with the new GObject syntax in GJS).
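The difference between the two callback styles can be sketched in plain JavaScript (a toy example of mine, not the actual Maps code):

```javascript
// A tiny signal emitter standing in for a GObject with signals
class Emitter {
    constructor() {
        this.handlers = [];
    }

    connect(callback) {
        this.handlers.push(callback);
    }

    emit() {
        this.handlers.forEach((cb) => cb());
    }
}

class OldStyle {
    constructor(emitter) {
        this.count = 0;
        // Pre-ES6 style: "function" gets its own `this`,
        // so we must explicitly bind the enclosing object
        emitter.connect(function () {
            this.count++;
        }.bind(this));
    }
}

class NewStyle {
    constructor(emitter) {
        this.count = 0;
        // ES6 arrow function: `this` is captured from the enclosing scope
        emitter.connect(() => this.count++);
    }
}

const emitter = new Emitter();
const oldStyle = new OldStyle(emitter);
const newStyle = new NewStyle(emitter);
emitter.emit();
console.log(oldStyle.count, newStyle.count); // both are now 1
```

Both variants behave the same; the arrow version just drops the bind() boilerplate.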

This not only makes the code nicer to read (IMO), it also cuts down on the LoC count a bit:

It's not only been hacking lately, though: I had a nice video conference with representatives from Mapbox to discuss ways we could give back to them in return for their generosity towards us. I have some ideas about features we could implement in Maps that they were interested in. This is something I will most likely bring up again in the not too distant future.

November 18, 2017

Happy birthday to Fortran!

A recent article reminded me that the Fortran programming language is now sixty years old! This is a major milestone. And while I don't really write Fortran code anymore, the article was a great reminder of my early use of Fortran.

My first compiled language was Fortran. I was an undergraduate physics student at the University of Wisconsin-River Falls, and as part of the physics program, every student had to learn Fortran programming. Since this was the very early 1990s, we used FORTRAN 77 (the new capitalization of "Fortran" would be established with Fortran 90, a few years later).

We learned FORTRAN 77 so we could do numerical analysis, or other programmatic analysis of lab data. For example, while spreadsheets of the era could calculate a linear fit to x-y data, including standard deviations, they could not fit polynomials to nonlinear data. But given a maths textbook, you could write your own program in FORTRAN 77. I wrote many programs like this throughout my undergraduate career.

As a research intern at a national lab between my junior and senior years, my mentors discovered I knew FORTRAN. So I got the assignment to port a FORTRAN IV program to FORTRAN 77 (Fortran 90 had recently been defined, and the lab didn't have the compiler yet). It was my first programming "job" and through experience I realized I wanted a career in IT rather than physics research.

I also taught myself the C programming language, and thereafter switched to C when I needed to write a program. I haven't needed to write Fortran code since then.

The last time I wrote anything in Fortran was a few years ago. At the time, I read an article about a proposed proof to the Collatz conjecture: the so-called "hailstone" sequence. From Slashdot: "A hailstone sequence starts from any positive integer n the next number in the sequence is n/2 if n is even and 3n+1 if n is odd. The conjecture is that this simple sequence always ends in 1. Simple to state but very difficult to prove and it has taken more than 60 years to get close to a solution."

I hadn't heard of the hailstone sequence before, but I thought it was an interesting problem. I could have easily written this program in C or even Bash, but I used the opportunity to return to my roots with Fortran. I created a simple FORTRAN 77 program to iterate the hailstone sequence. To celebrate Fortran's milestone birthday, I'd like to share my program with you:

      PROGRAM HAILSTN
C     A PROGRAM TO ITERATE THE HAILSTONE SEQUENCE.
C
C     THE RULE FOR A HAILSTONE SEQUENCE IS:
C
C     START AT ANY POSITIVE INTEGER, N.
C     IF N IS EVEN, THE NEXT NUMBER IS N/2. IF N IS ODD, THE NEXT NUMBER
C     IS 3N+1. ITERATE.
C
C     IN THEORY, ALL HAILSTONE SEQUENCES WILL END WITH 1.


 10   PRINT *, 'Enter starting number (any positive integer):'
      READ *, N

C      PRINT *, 'You entered: ', N

      IF (N.LT.1) THEN
         PRINT *, 'You must enter a positive integer'
         GOTO 10
      ENDIF

C     ITERATE

      PRINT *, N

      ITER = 0

 20   IF (MOD(N,2).EQ.0) THEN
         N = N / 2
      ELSE
         N = (3 * N) + 1
      ENDIF

      ITER = ITER + 1

      PRINT *, N
      IF (N.NE.1) GOTO 20

      PRINT *, 'Number of iterations: ', ITER

      END PROGRAM

This program doesn't demonstrate the best programming practices, but it does represent many FORTRAN 77 programs. To illustrate, allow me to walk you through the code:

First, FORTRAN code was originally written on punched cards. The first FORTRAN used columns to understand the program listing. FORTRAN 77 programs used the same column rules:

  • If there is a C or * in column 1, the line is a comment
  • Program labels (line numbers) are in columns 1–5
  • Program statements begin on column 7, but cannot go beyond column 72
  • Any character in column 6 will continue the line from the preceding line (not used here)

While you could (and should) declare variables to be of a certain type, FORTRAN 77 used a set of implicit rules to assign variable types: variables starting with the letters I through N are assumed INTEGER, and variables starting with other letters are assumed REAL (floating point).

My program uses only two variables, N and ITER, which are both integer variables.

FORTRAN 77 is a simple language, so you should be able to figure out what the program is doing based on those rules. I'll add a note about the code starting with line label 20. FORTRAN 77 doesn't have a do-while loop concept, so you end up constructing your own using a label, IF, and GOTO.

And that's what happens starting at label 20. The program begins a loop iteration, following the rules of the hailstone sequence: for each positive integer n the next number in the sequence is n/2 if n is even and 3n+1 if n is odd. After updating the ITER iteration count and printing the current value of N, the program continues to loop back to line label 20 (using GOTO) until N reaches 1.

When the loop is complete, the program prints the number of iterations, and exits.
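For comparison, the same iteration is easy to express in a modern language. Here is a quick JavaScript sketch of it (mine, not part of the original program):

```javascript
// Iterate the hailstone sequence starting at n; returns the
// sequence of values and the number of iterations taken.
function hailstone(n) {
    if (!Number.isInteger(n) || n < 1) {
        throw new Error('You must enter a positive integer');
    }
    const sequence = [n];
    let iterations = 0;
    while (n !== 1) {
        n = (n % 2 === 0) ? n / 2 : 3 * n + 1;
        sequence.push(n);
        iterations++;
    }
    return { sequence, iterations };
}

const result = hailstone(11);
console.log(result.sequence.join('\n'));
console.log('Number of iterations:', result.iterations); // 14, as in the run below
```

The while loop does exactly what the label-and-GOTO construction does in the FORTRAN 77 version.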

Here's a sample run:

 Enter starting number (any positive integer):
11
          11
          34
          17
          52
          26
          13
          40
          20
          10
           5
          16
           8
           4
           2
           1
 Number of iterations:           14

Happy birthday, Fortran!

November 17, 2017

Code Hospitality

Recently on the Greater than Code podcast there was an episode called "Code Hospitality", by Nadia Odunayo.

Nadia talks about thinking of how to make people comfortable in your code and in your team/organization/etc., and does it in terms of thinking about host/guest relationships. Have you ever stayed in an AirBnB where the host carefully prepares some "welcome instructions" for you, or puts little notes in their apartment to orient/guide you, or gives you basic guidance around their city's transportation system? We can think in similar ways of how to make people comfortable with code bases.

This of course hit me on so many levels, because in the past I've written about analogies between software and urbanism/architecture. Software that has the Quality Without A Name talks about Christopher Alexander's architecture/urbanism patterns in the context of software, based on Richard Gabriel's ideas, and Nikos Salingaros's formalization of the design process. Legacy Systems as Old Cities talks about how GNOME evolved parts of its user-visible software, and makes an analogy with cities that evolve over time instead of being torn down and rebuilt, based on urbanism ideas by Jane Jacobs, and architecture/construction ideas by Stewart Brand.

I definitely intend to do some thinking on Nadia's ideas for Code Hospitality and try to connect them with this.

In the meantime, I've just rewritten the README in gnome-class to make it suitable as an introduction to hacking there.

Rust+GNOME Hackfest in Berlin, 2017

Last weekend I was in Berlin for the second Rust+GNOME Hackfest, kindly hosted at the Kinvolk office. This is in a great location, half a block away from the Kottbusser Tor station, right at the entrance of the trendy Kreuzberg neighborhood — full of interesting people, incredible graffiti, and good, diverse food.

Rug of Kottbusser Tor

My goals for the hackfest

Over the past weeks I had been converting gnome-class from the old lalrpop-based parser into the new Procedural Macros framework for Rust, or proc-macro2 for short. To do this the parser for the gnome-class mini-language needs to be rewritten from being specified in a lalrpop grammar, to using Rust's syn crate.

Syn is a parser for Rust source code, written as a set of nom combinator parser macros. For gnome-class we want to extend the Rust language with a few conveniences to be able to specify GObject classes/subclasses, methods, signals, properties, interfaces, and all the goodies that GObject Introspection would expect.

During the hackfest, Alex Crichton, from the Rust core team, kindly took over my baby steps in compiler writing and made everything much more functional. It was invaluable to have him there to reason about macro hygiene (we are generating an unhygienic macro!), bugs in the quoting system, and general Rust-iness of the whole thing.

I was also able to talk to Sebastian Dröge about his work in writing GObjects in Rust by hand, for GStreamer, and what sort of things gnome-class could make easier. Sebastian knows GObject very well, and has been doing awesome work in making it easy to derive GObjects by hand in Rust, without lots of boilerplate — something with which gnome-class can certainly help.

I was also looking forward to talking again with Guillaume Gomez, one of the maintainers of gtk-rs, and who does so much work in the Rust ecosystem that I can't believe he has time for it all.

Graffiti heads

Extend the Rust language for GObject? Like Vala?

Yeah, pretty much.

Except that instead of a wholly new language, we use Rust as-is, and we just add syntactic constructs that make it easy to write GObjects without boilerplate. For example, this works right now:

#![feature(proc_macro)]

extern crate gobject_gen;

#[macro_use]
extern crate glib;
use gobject_gen::gobject_gen;

gobject_gen! {
    // Derives from GObject
    class One {
    }

    impl One {
        // non-virtual method
        pub fn one(&self) -> u32 {
            1
        }

        virtual fn get(&self) -> u32 {
            1
        }
    }

    // Inherits from our other class
    class Two: One {
    }

    impl One for Two {
        // overrides the virtual method
        // maybe we should use "override" instead of "virtual" here?
        virtual fn get(&self) -> u32 {
            2
        }
    }
}

#[test]
fn test() {
    let one = One::new();
    let two = Two::new();

    assert!(one.one() == 1);
    assert!(one.get() == 1);
    assert!(two.one() == 1);
    assert!(two.get() == 2);
}

This generates a boatload of code, including a good number of unsafe calls to GObject functions like g_type_register_static_simple(). It also creates all the traits and paraphernalia that glib-rs would create for the Rust binding of a normal GObject written in C.

From the outside world, your generated GObject classes are indistinguishable from GObjects implemented in C. The idea is to write GObject libraries in a better language than C, which can then be consumed from language bindings.

Current status of gnome-class

Up to about two weeks before the hackfest, the syntax for this mini-language was totally ad-hoc and limited. After a very productive discussion on the mailing list, we came up with a better syntax that definitely looks more Rust-like. It is also easier to implement, since the Rust parser in syn can be mostly reused as-is, or pruned down for the parts where we only support GObject-like methods, and not all the Rust bells and whistles (generics, lifetimes, trait bounds).

Gnome-class supports deriving classes directly from the basic GObject, or from other GObject subclasses in the style of glib-rs.

You can define virtual and non-virtual methods. You can override virtual methods from your superclasses.

Not all argument types are supported. In the end we should support argument types which are convertible from Rust to C types. We need to finish figuring out the annotations for ownership transfer of references.

We don't support GObject signals yet; I think that's my next task.

We don't support GObject properties yet.

We don't support defining new GType interfaces yet, but it is planned. It should be easy to support implementing existing interfaces, as it is pretty much the same as implementing a subclass.

The best way to see what works right now is probably to look at the examples, which also work as tests.

Digression on macro hygiene

Rust macros are hygienic, unlike C macros which work just through textual substitution. That is, names declared inside Rust macros will not clash with names in the calling code.

One peculiar thing about gnome-class is that the user gives us a few names, like a class name Foo and some things inside it, say, a method name bar, a signal baz, and a property qux. From there we want to generate a bunch of boilerplate for GObject registration and implementation. Some of the generated names in that boilerplate would be

Foo              // base name
FooClass         // generated name for the class struct
Foo::bar()       // A method
Foo::emit_baz()  // Generated from the signal name
Foo::set_qux()   // Generated property setter
foo_bar()        // Generated C function for a method call
foo_get_type()   // Generated C function that all GObjects have

However, if we want to actually generate those names inside our gnome-class macro and make them visible to the caller, we need to do so unhygienically. Alex started a very interesting discussion on macro hygiene, so expect some news in the Rust world soon.

TL;DR: there is a difference between a code generator, which gnome-class mostly intends to be, and a macro system which is just an aid in typing repetitive code.

Fuck wars

People for whom to be thankful

During the hackfest, Nirbheek has been porting librsvg from Autotools to the Meson build system, and dealing with Rust peculiarities along the way. This is exactly what I needed! Thanks, Nirbheek!

Sebastian answered many of my questions about GObject internals and how to use them from the Rust side.

Zeeshan took us to a bunch of good restaurants. Korean, ramen, Greek, excellent pizza... My stomach is definitely thankful.

Berlin

I love Berlin. It is a cosmopolitan, progressive, LGBTQ-friendly city, with lots of things to do, vast distances to be traveled, with good public transport and bike lanes, diverse food to be eaten along the way...

But damnit, it's also cold at this time of the year. I don't think the weather was ever above 10°C while we were there, and mostly in a constant state of not-quite-rain. This is much different from the Berlin in the summer that I knew!

Hackers at Kimchi Princess

This is my third time visiting Berlin. The first one was during the Desktop Summit in 2011, and the second one was when my family and I visited the city two years ago. It is a city that I would definitely like to know better.

Thanks to the GNOME Foundation...

... for sponsoring my travel and accommodation during the hackfest.

Sponsored by the GNOME Foundation

November 16, 2017

2017-11-16 Thursday.

  • Mail chew, catchup with Jona, partner call, lunch with J. Chat with Tom, built code left & right. opreport crashes when it fails to read kernel symbols as a user.
  • Read some memory profiles with Ash; interesting. Poked at ways of profiling for scalability bottlenecks, some nice blogs and slides on perf - but still, working out what processor events to trace in order to find contended mutexes, or contended atomic references, still puzzles me.

The Road to 3.28: Calendar and To Do

Greetings my GNOME friends!

It’s been a long time with no news. I guess work and masters are really getting in the way… good news is that I’ll finish masters in 2 months, and will have some free time to devote to this beloved project.

“Bad” news is that, after almost 6 years, I’ll finally take some time to have a real vacation. I’ll stay 3 weeks out of the loop in February, a time when I’ll be traveling to the other side of the world, watching the sunset at the beach with my wife. Without a computer. While it’s unfortunate for the community, I think this time is necessary for my mental health – I’ve gone way too many times through an almost-burned-out state recently.

But even with all of these things in our way, thanks to the help of awesome old and new contributors, Calendar and To Do received a lot of new features!

Calendar

Let’s begin with my beloved Calendar. My focus for the past weeks was rewriting the Month view. It was a hard, painful process, but I can say for sure now that, of the very few responsive widgets in GNOME, the Month view is the best one! 😛

The most substantial changes were:

  • The day numbers are at the top of each cell now. This is thanks to the hard design work of Allan Day, Jean-François and Lapo.
  • Each cell now only shows the overflow button when absolutely necessary. When implementing this new behavior, a few longstanding issues were fixed.
  • The Month view now finally has a fully working, sane code to deal with RTL languages.
  • When clicking the +N button, the cell “zooms in” and displays the list of events. This is a big design improvement over the popovers that we were using.
  • Code-wise, the Month view code that positions the events is an order of magnitude simpler and easier to read. It may sound like a purely technical matter, but it has user-visible effects too: easier, cleaner code means more features and fewer issues in the future.

Of course, no words can make people as excited as a sequence of pictures! Let’s check this out:

 

The animations were implemented using the animation framework in libdazzle, all thanks to Christian Hergert’s work on GNOME Builder. Kudos!

For the next cycle, thanks to the hard work of a new and awesome contributor, Florian Brosch, this is what’s coming next:

Weather indication in Week view

We’re on track to land the features that were proposed for this cycle. You can check out the plans at the Roadmap page of Calendar. You can also help us with these tasks with design, code, and testing!

To Do

GNOME To Do also received a lot of attention already. We’re going through a big redesign, thanks to the leading design work of Tobias Bernard, and the results are already gratifying.

The immediately noticeable change is the tasklist view:

New tasklist view in To Do 3.27

 

The rows are entirely draggable now. I’ll continue working on these features, but more importantly, I want people to take some of this work over and contribute to the project!

Talking about managing tasks, GNOME To Do was moved to GitLab! I can’t overstate how much of an improvement it is over the previous Bugzilla approach. We now have an updated and organized Kanban Board:

 

GNOME To Do in GitLab: the Kanban Board

The reason for that is to have a consolidated workflow:

  • A designer moves the task to “Design” column and works on it.
  • Once design is settled, a developer moves the task to the “Development” column and fixes/implements the task.
  • When the task is implemented, the developer moves the task to the “Code Review” column, and a maintainer will review the code.
  • Once the code is reviewed and the code landed, the task is moved to the “QA” column, where a tester will pick up and test it.
  • When all the regressions and issues of that task are fixed, the task is closed.

So far, the experience with this workflow has been outstanding. We were able to find many more bugs due to QA being a first-class citizen in the process. Filing bugs is now a breeze too! There are bug templates already available, and I took on the burden of a colossal cleanup and organization of the bug list:

GNOME To Do in GitLab: Issue Templates

I encourage everyone to not trust me and go check it out: https://gitlab.gnome.org/GNOME/gnome-todo. The downside is that I’m feeling incredibly demotivated to check Calendar bugs in Bugzilla now 😦

We’re Not Quite There Yet

While many of these changes are super exciting, this is just the first part of the cycle. There is much more to work on, and the more people get involved, the more we will accomplish. Things are moving at a fast pace, and I’m incredibly happy with the direction of these projects.

To help push community involvement, I went ahead and wrote a page describing how you can help with testing. With Flatpak, this is ridiculously easy – and yet, absolutely necessary! So, don’t hesitate to get in touch and help us shape the next GNOME version.

See you all around o/

2017-11-15 Wednesday.

  • Mail chew, built ESC stats, estimation bits. Lunch with J. Chat with Jona, sync with Eike & team on calc threading merge.

“Improving the performance of the qcow2 format” at KVM Forum 2017

I was in Prague last month for the 2017 edition of the KVM Forum. There I gave a talk about some of the work that I’ve been doing this year to improve the qcow2 file format used by QEMU for storing disk images. The focus of my work is to make qcow2 faster and to reduce its memory requirements.

The video of the talk is now available and you can get the slides here.

The KVM Forum was co-located with the Open Source Summit and the Embedded Linux Conference Europe. Igalia was sponsoring both events for one more year, and I was also there together with some of my colleagues. Juanjo Sánchez gave a talk about WPE, the WebKit port for embedded platforms that we released.

The video of his talk is also available.

QEMU and function keys (follow-up)

Since I posted my suggestion for QEMU a few weeks ago, I've learned a few things about QEMU. Thanks so much to the folks who contacted me via email to help me out.

A brief review of my issue:

I like to run FreeDOS in QEMU, on my Linux laptop. QEMU makes it really easy to boot FreeDOS or to test new installations. During our run up to the FreeDOS 1.2 release, I tested every pre-release version by installing under QEMU.

But one problem pops up occasionally when using QEMU. A lot of old DOS software uses the function keys to do various things. The most common was F1 for help, but it was common for an install program to use F10 to start the install.

And with QEMU, you can use those keys. Except some of them. Some function keys, like F10, are intercepted by the window system or desktop environment. You can get around this in QEMU by using the QEMU console (available in the menu bar or tabs) and typing a sendkey command, like sendkey f10. But that's kind of awkward, especially for new users. Nor is it very fast if you often need to use the function keys.

So I suggested that QEMU add a toolbar with the function keys.

Of course, the preferred behavior is for QEMU to grab the keyboard and intercept all the function keys, blocking window system shortcut keys like F10. QEMU wants to do this, and I understand that QEMU used to do this. It sounds like the current issue is a regression in the Wayland implementation—and I run Fedora Linux, so I'm using Wayland.

As Daniel responded via the QEMU tracker for my bug:

Recently though, there has been a regression in this area. With the switch to Wayland instead of Xorg, the standard GTK APIs for doing this keyboard grabbing / accel blocking no longer work correctly. Instead apps need to add custom code to talk to the Wayland compositor IIRC. There's work underway to address this but its a way off.

So that explains it. I'm happy to have this captured by the application. Doing the keyboard interception "live" is a much better solution (better usability) than the toolbar I suggested. Thanks!

How To Tweak Firefox’s User Interface

Firefox Quantum has made a clean break from Firefox’s legacy addons. Hooray!

A casualty of this change is the ability to have addons that fundamentally alter Firefox’s user interface. This can be a problem if you depended on this for accessibility needs. Say, you had an addon that enlarged the fonts in Firefox’s chrome.

Luckily, not all is lost. With some CSS knowledge, you can customize the Firefox user interface as much as you need. Simply drop some CSS rules into $PROFILE/chrome/userChrome.css.

Here is an example rule that employs large yellow on black text:

* {
  font-size-adjust: 0.75 !important;
  background-color: #000 !important;
  color: yellow !important;
}

The effect on Firefox will be dramatic:

Restylized user interface with yellow on black text

Note, this will break things, and it will not be perfect. Before using this kind of solution, check what accessibility features your platform provides.


November 15, 2017

vdb17

with the modest success of my last year's talk the lost medium i was reinvited by the kind folks of voxxed days belgrade to delve into this topic a bit further. vdb17 was an amazing experience – again – being one of the biggest and most inspiring technology conferences in eastern europe with excellent speakers from all over the world and about 800+ attendees.

my previous talk focused a lot on the early days of personal computing, the ingenious ideas we lost over time and the notion that we're not really thinking about how we can use the medium computer to augment our human capabilities.

after delivering this talk however i had the feeling that i left out an important question: what now? how can we improve?

this was the base for my new talk, the bullet hole misconception, in which i'm exploring how we can escape the present to invent the future and what questions we must ask if we are to amplify our human capabilities with computers.

feel free to share it and if you have questions, feedback or critique i'd love to hear from you!

We’re Organizing Flatpak Workshop

Several years ago, we organized two workshops focused on packaging software for Fedora. It was a success. Both workshops were full and several participants became package maintainers in Fedora. Packaging for Flatpak is becoming popular lately, so we decided to offer a workshop which is focused on flatpaking apps.

flatpak-logo

This time it’s not focused on the community around Fedora because you can create and run flatpaks in pretty much any Linux distribution. The workshop will again be run by experts in the field (Carlos Soriano, Felipe Borges, Jan Grulich,…). The morning part will consist of talks on Flatpak introduction, specifics for Qt and GTK+ based apps, portals, which integrate apps with the system, and at the end you will also learn to get your app to Flathub, the centralized Flatpak repository. In the afternoon you’ll have an opportunity to flatpak an application of your choice with the help of the lecturers. That’s why we’d appreciate it if you specified what app you’d like to flatpak, so that we can get ready for it. But it’s completely optional.

Date: Wed, November 29, 2017
Time: 10:00-17:00
Venue: Neptunium and Plutonium Rooms in Red Hat Czech, Purkyňova 111, 612 00 Brno
Capacity: 20 participants
Registration: please fill out this form
Language: primarily in English
Admission: free
Prerequisites: knowledge of Linux at the level of advanced users, own laptop with flatpak and flatpak-builder (>=0.8) installed; GNOME Builder is not necessary, but may come in handy.


Talking at Kieler LinuxTage 2017 in Kiel, Germany

I was invited to present GNOME at the Kieler LinuxTage in Kiel, Germany.

logo

Compared to other events, it’s a tiny happening with something between fifty and a hundred people or so. I was presenting on how I think GNOME pushes the envelope regarding making secure operating systems (slides, videos to follow). I gave three examples of how GNOME achieves its goal of providing a secure OS without compromising on usability. In fact, I claimed that the most successful security solutions must not involve the user. That sounds a bit counterintuitive to people in the infosec world, because we’re trying to protect the user, so surely they must be involved in the process. But we had better not do that. This is not to say that we shouldn’t allow the user to change preferences regarding how the solutions behave, but rather that they should work without intervention. My talk was fairly well attended, I think, and we had a great discussion. I tend to like the discussion bit better than the actual presentation, because I see it as an indicator of how much the people care. I couldn’t attend many other presentations, because I only attended the second day. That’s why I couldn’t meet with Jim :-/

But I did watch Benni talking about hosting a secure Web site (slides). He started his talk by mentioning DNS, which everybody can read, and introduced DNSSEC — which, funnily enough, everybody can also read, though he failed to mention that; at least nobody can manipulate the response. Another issue is that you leak information about your host names with negative responses, because you tell the client that there is nothing between a.example.com and b.example.com. He continued with SSH for deploying your Web site and mentioned SSHFP, a mechanism for authenticating the host key. The same mechanism exists for Web or mail servers, he said: DANE, DNS-based Authentication of Named Entities. It works via TLSA records, which encode either the certificate or the public key used. Another DNS-based mechanism is relatively young: CAA. It asserts that a certificate for a host name shall be signed by a certain entity, so you can hopefully prevent a CA that you’ve never heard of from creating a certificate for your hosts. All of these mechanisms try to make the key exchange in TLS a bit less shady.

TLS ensures a secure channel, i.e. confidentiality, non-repudiation, and integrity, which is generally useful in the Web context. TLS tends to be a bit of a minefield because of the version and configuration matrix. He recommended using at least TLS version 1.2, disabling compression due to inherent attacks on typical HTTP traffic (CRIME), and using “perfect forward secrecy” ciphers to protect individual connections even after the main key has leaked. Within TLS you use X.509 certificates for authenticating the parties — most importantly in the Web world, the server side. The certificate shall use a long enough RSA key, he said, and shall not use the CN field to indicate the host name, but rather the SAN field. Signatures should be produced with “at least SHA-256”. He then mentioned OCSP, because life happens and keys get lost or stolen.
However, with regular OCSP the clients expose the host names they visit, he said. Enter OCSP Stapling: the Web server itself gets the OCSP response and hands it over to the client. Of course, this comes with its own challenges. It may also happen that CAs issue certificates for a host name which doesn’t expect that new certificate. In that case, Certificate Transparency becomes useful. It’s composed of three components, he said: log servers which log all created certificates, monitors which pull the logs, and auditors which check the logs for host names. Again, your browser may want to check whether the given certificate is in the CT logs. This opens the same privacy issue as with OCSP, and can be somewhat countered with signed log statements from a few trusted log servers.

In any case, TLS is only useful, he said, if you are actually using it. Assuming you had a secure connection once, you can use the HTTP Strict Transport Security (HSTS) header. It tells the browser to use TLS in the future. Of course, if you didn’t have that first connection, you can have your webapp entered in the HSTS preload list, which is then baked into major browsers. Another mechanism is HTTP Public Key Pinning, an HTTP header to tell the client which certificates or CAs shall be accepted. The header value is a simple list of hashed certificates. He mentioned the risk of someone hijacking your Web presence with an injected HPKP header. A TLS connection has eventually been established successfully. Now the HTTP layer gets interesting, he said. The X-Content-Type-Options header prevents Internet Explorer from sniffing content types, which might otherwise cause an image to be executed as JavaScript. Many Cross-Site Scripting attacks, he said, originate from the page being embedded in a frame. To prevent that, you can set the X-Frame-Options header. To activate Cross-Site Scripting protection mechanisms, the X-XSS-Protection header can be set. It’s probably turned off by default for compatibility reasons, he said. If you know where exactly your data is coming from, you can make use of a Content Security Policy, which is like SELinux for your browser. It’s a bit of a complicated mechanism, though. For protecting your webapp he mentioned Sub-Resource Integrity, which is essentially the hash of the script you expect. This prevents tampering with the foreign script, malicious or not.
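To make those last few mechanisms concrete, a hardened server's response might carry headers along these lines (all values here are illustrative, not recommendations, and the HPKP hash is a placeholder):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Public-Key-Pins: pin-sha256="base64-hash-of-certificate="; max-age=5184000
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Security-Policy: default-src 'self'
```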

I think that was one of the better talks in the schedule, with many interesting details to be discovered. I enjoyed it a lot. I did not enjoy their Web sites, though, which are close to being unusable. The interface for submitting talks gives you a flashback to the late 90’s. Anyway, it seems to have worked for many years now and I hope they will have many more years to come.

Talking at PET-CON 2017.2 in Hamburg, Germany

A few weeks ago, I was fortunate enough to talk at the 7th Privacy Enhancing Techniques Conference (PET-CON 2017.2) in Hamburg, Germany. It’s a teeny tiny academic event with a dozen or so experts in the field of privacy.

The talks were quite technical, involving things like machine learning over logs or secure multi-party computation. I talked about how I think that the best technical solution does not necessarily enable people to be more private, simply because they might not be able to use the tool properly. That is a concern generally shared in the academic community, yet the methodology for creating and assessing the effectiveness of a design is not very well developed. I guess we need to invest more brain power into creating models, metrics, and tools for enabling people to do safer computing.

So I’m happy to have gone and to have had the opportunity of discussing the issues I’m seeing. Likewise, I find it very interesting to see where the field is currently headed.

November 14, 2017

Igalia is Hiring

Igalia is hiring web browser developers. If you think you’re a good candidate for one of these jobs, you’ll want to fill out the online application accompanying one of the postings. We’d love to hear from you.

We’re especially interested in hiring a browser graphics developer. We realize that not many graphics experts also have experience in web browser development, so it’s OK if you haven’t worked with web browsers before. Low-level Linux graphics experience is the more important qualification for this role.

Igalia is not just a great place to work on cool technical projects like WebKit. It’s also a political and social project: an egalitarian, worker-owned cooperative where everyone has an equal vote in company decisions and receives equal pay. It’s been around for 16 years, so it’s also not a startup. You can work remotely from wherever you happen to be, or from our office in A Coruña, Spain. You won’t have a boss, but you will be expected to work well with your colleagues. It’s not the right fit for everyone, but there’s nowhere I’d rather be.

GNOME ASIA 2017 Chongqing

This year's GNOME ASIA was held in the beautiful city of Chongqing, which is surrounded by mountains.
I traveled to Chongqing along with Daniel, and we arrived at 4 in the morning on the same day as the conference. We would have liked to travel at least one day before the event, but we could not because of work commitments. We wanted to show a working prototype of our project, but instead we decided to show a simple demo of creating a digital contract and deploying it on a blockchain. It was a last-minute decision (the day before the conference), so we ended up writing the code on the flight.




We talked about "GNOME and blockchains: why, what and how?"; the slides for the talk can be found on
gnome.asia talk.
The thing about the city of Chongqing and the GNOME.ASIA conference is the "unexpected things", aka surprises. The conference started with a fire show by local artists and ended with something special that I managed to capture on video

November 13, 2017

Gnome apps migrated to flathub

Last week I finally migrated the last app from the gnome stable application flatpak repo to flathub. The old repo is now deprecated and will not get any new builds.

In the future, all stable flatpak builds of gnome apps will be on flathub only, so make sure you add it as a remote.

There are a lot of apps on flathub now, have a look. And if your application is not there but you’re interested in adding it, please check out the flathub docs.
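Adding the remote and installing an app from it looks like this (the app ID below is just an example):

```shell
# Add Flathub as a remote for the current installation
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Then install apps from it, e.g.:
flatpak install flathub org.gnome.Maps
```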

#PeruRumboGSoC2018 – Session 1

Our first session started last Sunday at UIGV. It was quite difficult to find a laboratory with Linux computers open on a Sunday, but thanks to the UIGV for letting us use a classroom at the Faculty of Tourism. We had 7 hours of Linux basics, from 10 am to 6 pm. I started the training with GNU/Linux basics, Fedora, GNOME, Linux generalities and Linux commands. The group was made up of students from different universities, so the first part was an introduction to what they had heard about GSoC, GNOME, Fedora and Linux.

The students brought their laptops to the session, and an important detail was missing in the classroom this time: an extension cord. So we placed them in the corners where the power outlets were. I gathered the list of their blogs and git accounts as they set up their Fedora environments, and also lectured about Linux history and the basics of GNU/Linux. Some works of the students, so far -> Carlos, Giohanny 1, 2, Johan, Franz, Fiorella, Rommel, Solanch, Lizbeth 1, 2 and Cristian.

Thanks to GNOME for sponsoring our lunch, which we had around 1:00 pm. We ate “chifa”. More basic commands, how to connect to IRC (newcomers channels) and the use of VI were also covered as the light of the sun slowly descended towards 6:00 pm.

Thanks to all the attendees, to Randy as one of our trainers, and to Solanch for her help. Great job! Special thanks to GNOME, Fedora and the Linux Foundation for helping us in education! 🙂


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: #PeruRumboGSoC2018, comandos básicos Linux, fedora, GNOME, gnome 3, GSoC, Julita Inca, Julita Inca Chiroque, Lima, linux, Linux Foundation, UIGV

November 12, 2017

Exam Results and Pass List #PeruRumboGSoC2018

Early this morning, students from different universities of Lima, Peru came to UNI to take an exam to prove their knowledge of programming and GNU/Linux.

52 students had registered in the previous days, and I started looking for a lab with 50 Linux machines, which was actually very challenging. So we decided to just use two classrooms and give a written exam where we could assess the skill level related to programming on Linux. Only 17 showed up (maybe the Peruvian soccer match from last night, which determined whether we would go to the World Cup, was the reason it was so hard for them to wake up early).

According to our plan, we are looking for students that are almost ready for GSoC, because we are aware that six Sundays will not be enough to master these subjects: C, Python, JavaScript, Node.js, Linux, Git and documentation. We also noticed that people who already have these skills are working and not enrolled in university, or are retired, or simply want their Sundays to rest.

However, there are interested students that might not yet have intermediate or advanced programming skills on Linux. That is why we considered it important to get a general view of the new group through the exam, so they can compare their academic achievements at the end of the instructional period.

This is the first step in making our students confident enough to apply to the GSoC program next April 2018. Our main objective in this phase is to build a good GitHub profile with blogs and content related to programming on Linux.

Thanks to the support of the Linux Foundation, each participant who finishes this program successfully will be supported with $500 in courses. The only way to prove completion is through the documentation (Git / blog) of each session. Glad to have more women involved.

Tomorrow our sessions start at UIGV. We have authorization from 10 am to 4 pm; I am going to use two extra hours, until 6 pm, to ensure they complete their documentation.

The list of participants who passed is only a reference; we are going to help them with food expenses. But we are not turning away any student who wants to learn with us, so anyone can join, and the pacing will basically follow the level of the top 12 students. Also, constant evaluation every session will let us decide who is going to receive the sweatshirts from the Linux Foundation.


Thanks so much to LinuXatUNI, the trainers and all the people who helped us as a local group.


Filed under: FEDORA, GNOME Tagged: #PeruRumboGSoC2018, Exam Results, fedora, Fedora + GNOME community, GNOME, GSoC Peru Preparation, Julita Inca, Julita Inca Chiroque, Lima, LinuXatUNI, Pass List, Perú

November 11, 2017

Code indexing in Builder

Anoop, one of Builder’s GSoC students this past summer, put together a code-index engine built upon Builder’s fuzzy search algorithm. It shipped with support for C and C++. Shortly after the 3.27 cycle started, Patrick added support for GJS. Today I added support for Vala which was rather easy given the other code we have in Builder.

It looks something like this:

A screenshot of Builder displaying the code search results for Vala

Happy Hacking!

November 10, 2017

Aiming for C++ sorting speed with a plain C API

A well-known performance measurement result is that the C++ standard library's std::sort function is a lot faster than the C library's equivalent, qsort. When they first hear of this, most people very strongly claim that it is not possible: C is just as fast as (if not faster than) C++, it must be a measurement error, the sorting algorithms used are different, and so on. Then they run the experiment themselves and find that C++ is indeed faster.

The reason for this has nothing to do with how the sorting function is implemented but everything to do with the API. The C API for sorting, as described in the man pages looks like this:

void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *));

The interesting point here is the last argument, which is a function pointer to a comparison function. Because of this, the sort implementation cannot inline this function call but must instead always call the comparator function, which translates to an indirect jump.

In C++ the sort function is a template. Because of this the comparison function can be inlined in the implementation. This turns out to have a massive performance difference (details below). The only way to emulate this in plain C would be to ship the sort function as a preprocessor monster thingy that people could then use in their own code. This leads to awful and hard to maintain code, so this is usually not done. It would be nice to be able to provide a similar fast sort performance with a stable plain C API but due to the way shared libraries work, it's just not possible.

So let's do it by cheating.

If we know that a compiler is available during program execution, we can implement a hybrid solution that achieves this. Basically we emulate how JIT compilers work. All we need is something like this:

sorter opt_func = build_sorter("int",
    "(const int a, const int b) { return a < b; }");
(*opt_func)(num_array, NUM_INTS);

Here build_sorter is a function that takes as arguments the type of the item being sorted, and a sorting function as source code. Then the function calls into the external compiler to create an optimised sorting function and returns that via a function pointer that can be called to do the actual sort.

Full source code is available in this GitHub repo. Performance measurements for 100 million integers are as follows.

C++ is almost twice as fast as plain C. On the fly code generation is only 0.3 seconds slower than C++, which is the amount of time it takes to compile the optimised version. Codegen uses the C++ sorting function internally, so this result is expected.

Thus we find that it is possible to provide C++ level performance with a plain C API, but it requires the ability to generate code at runtime.

Extra bonus

During testing it was discovered that for whatever reason the C++ compiler (I used GCC) is not able to inline free functions as well as lambdas. That is, this declaration:

std::sort(begin, end, sorting_function);

generates slower code than this one:

std::sort(begin, end, [](sorting_lambda_here));

even though the contents of both comparison functions are exactly the same (basically return a < b). This is the reason the source code for the sort function above is missing the function preamble.

Using BuildStream through Docker

BuildStream isn’t packaged in any distributions yet, and it’s not entirely trivial to install it yourself from source. BuildStream itself is just Python, but it depends on a modern version of OSTree (2017.8 or newer at time of writing), with the GObject introspection bindings, which is a little annoying to have to build yourself1.

So we have put some work into making it convenient to run BuildStream inside a Docker container. We publish an image to the Docker Hub with the necessary dependencies, and we provide a helper script named bst-here that sets up a container with the current working directory mounted at /src and then runs a BuildStream command or an interactive shell inside it. Just download the script, read it through and run it: all going well you’ll be rewarded with an interactive Bash session where you can run the bst command. This allows users on any distro that supports Docker to run BuildStream builds in a pretty transparent way and without any major performance limitations. It even works on Mac OS X!
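Conceptually, the wrapper boils down to a docker run invocation like the following sketch (the image name and options here are illustrative and the real script does more, so read it rather than relying on this; the function echoes the command instead of executing it so Docker is not needed to inspect it):

```shell
# Sketch of what a bst-here-style wrapper does: mount the current
# directory at /src inside a container with BuildStream installed,
# then run the given bst command there.
bst_here() {
    echo docker run --interactive --tty --rm \
        --volume "$PWD":/src \
        --workdir /src \
        buildstream-image:latest \
        "$@"
}

bst_here bst --help
```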

In order to run builds inside a sandbox, BuildStream uses Bubblewrap. This requires certain kernel features, in particular CONFIG_USER_NS which right now is not enabled by default in Arch and possibly in other distros. Docker containers run against the kernel of the host OS so it doesn’t help with this issue.

The Docker images we provide are based off Fedora and are built by GitLab CI from this repo. After a commit to that repo’s ‘master’ branch, a new image wends its way across hyperspace from GitLab to the Docker hub. These images are then pulled when the bst-here wrapper script calls docker run. (We also use these images for running the BuildStream CI pipelines).

More importantly, we now have a mascot! Let me introduce the BuildStream beaver:

Beavers, of course, are known for building things in streams, and are also native to Canada. This isn’t going to be the final logo; he’s just been brought in because we got tired of the project being represented by a capital letter B in a grey circle. If anyone can contribute a better one then please get in touch!

So what can you build with BuildStream now that you have it running in Docker? As recently announced, you can build GNOME! Follow this modified version of the newcomer’s guide to get started. Soon you will also be able to build Flatpak runtimes using the rewritten Freedesktop SDK; or build VM images using Baserock; and of course you can create pipelines for your own projects (although if you only have a few dependencies, using Meson subprojects might be quicker).

After one year of development, we are just a few blockers away from releasing BuildStream 1.0. So it is a great time to get involved in the project!

[1]. Installing modern OSTree from source is not impossible — my advice if you want to avoid Docker and your distro doesn’t provide a new enough OSTree would be to build the latest tagged release of OSTree from Git, and configure it to install into /opt/ostree. Then put something like export GI_TYPELIB_PATH=/opt/ostree/lib/girepository-1.0/ in your shell’s startup file. Make sure you have all the necessary build dependencies installed first.


Simplifying contributions

Every release of both GNOME and Builder, we try to lower the barrier a bit more for new contributions. Bastian mentioned to me at GUADEC that we could make things even simpler from the Builder side of things. After a few mockups, I finally found some time to start implementing it.

With the upcoming Nightly build of Builder, you’ll be able to warp right through cloning and building of an application that is ready for newcomer contributions. Just open Builder and click on the application’s icon.

The greeter now shows a grid of icons so newcomers can simply click on the given icon to clone and build.

There is still more to do here, like adding a language emblem and such. Of course, if you want to work on that, do get in touch.

Understanding Go panic output

My code has a bug. 😭

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x751ba4]
goroutine 58 [running]:
github.com/joeshaw/example.UpdateResponse(0xad3c60, 0xc420257300, 0xc4201f4200, 0x16, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/joeshaw/example/resp.go:108 +0x144
github.com/joeshaw/example.PrefetchLoop(0xacfd60, 0xc420395480, 0x13a52453c000, 0xad3c60, 0xc420257300)
        /go/src/github.com/joeshaw/example/resp.go:82 +0xc00
created by main.runServer
        /go/src/github.com/joeshaw/example/cmd/server/server.go:100 +0x7e0

This panic is caused by dereferencing a nil pointer, as indicated by the first line of the output. These types of errors are much less common in Go than in other languages like C or Java thanks to Go’s idioms around error handling.

If a function could fail, the function must return an error as its last return value. The caller should immediately check for errors from that function.

    // val is a pointer, err is an error interface value
    val, err := somethingThatCouldFail()
    if err != nil {
        // Deal with the error, probably pushing it up the call stack
        return err
    }

    // By convention, nearly all the time, val is guaranteed to not be
    // nil here.

However, there must be a bug somewhere that is violating this implicit API contract.

Before I go any further, a caveat: this is architecture- and operating system-dependent stuff, and I am only running this on amd64 Linux and macOS systems. Other systems can and will do things differently.

Line two of the panic output gives information about the UNIX signal that triggered the panic:

[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x751ba4]

A segmentation fault (SIGSEGV) occurred because of the nil pointer dereference. The code field maps to the UNIX siginfo.si_code field, and a value of 0x1 is SEGV_MAPERR (“address not mapped to object”) in Linux’s siginfo.h file.

addr maps to siginfo.si_addr and is 0x30, which isn’t a valid memory address.

pc is the program counter, and we could use it to figure out where the program crashed, but we conveniently don’t need to because a goroutine trace follows.

goroutine 58 [running]:
github.com/joeshaw/example.UpdateResponse(0xad3c60, 0xc420257300, 0xc4201f4200, 0x16, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /go/src/github.com/joeshaw/example/resp.go:108 +0x144
github.com/joeshaw/example.PrefetchLoop(0xacfd60, 0xc420395480, 0x13a52453c000, 0xad3c60, 0xc420257300)
        /go/src/github.com/joeshaw/example/resp.go:82 +0xc00
created by main.runServer
        /go/src/github.com/joeshaw/example/cmd/server/server.go:100 +0x7e0

The deepest stack frame, the one where the panic happened, is listed first. In this case, resp.go line 108.

What catches my eye in this goroutine backtrace are the arguments to the UpdateResponse and PrefetchLoop functions, because their number doesn’t match the function signatures.

func UpdateResponse(c Client, id string, version int, resp *Response, data []byte) error
func PrefetchLoop(ctx context.Context, interval time.Duration, c Client)

UpdateResponse takes 5 arguments, but the panic shows that it takes more than 10. PrefetchLoop takes 3, but the panic shows 5. What’s going on?

To understand the argument values, we have to understand a little bit about the data structures underlying Go types. Russ Cox has two great blog posts on this, one on basic types, structs and pointers, strings, and slices and another on interfaces which describe how these are laid out in memory. Both posts are essential reading for Go programmers, but to summarize:

  • Strings are two words (a pointer to string data and a length)
  • Slices are three words (a pointer to a backing array, a length, and a capacity)
  • Interfaces are two words (a pointer to the type and a pointer to the value)

When a panic happens, the arguments we see in the output include the “exploded” values of strings, slices, and interfaces. In addition, the return values of a function are added onto the end of the argument list.

To go back to our UpdateResponse function, the Client type is an interface, which is 2 values. id is a string, which is 2 values (4 total). version is an int, 1 value (5). resp is a pointer, 1 value (6). data is a slice, 3 values (9). The error return value is an interface, so add 2 more for a total of 11. The panic output limits the number to 10, so the last value is truncated from the output.

Here is an annotated UpdateResponse stack frame:

github.com/joeshaw/example.UpdateResponse(
    0xad3c60,      // c Client interface, type pointer
    0xc420257300,  // c Client interface, value pointer
    0xc4201f4200,  // id string, data pointer
    0x16,          // id string, length (0x16 = 22)
    0x1,           // version int (1)
    0x0,           // resp pointer (nil!)
    0x0,           // data slice, backing array pointer (nil)
    0x0,           // data slice, length (0)
    0x0,           // data slice, capacity (0)
    0x0,           // error interface (return value), type pointer
    ...            // truncated; would have been error interface value pointer
)

This helps confirm what the source suggested, which is that resp was nil and being dereferenced.

Moving up one stack frame to PrefetchLoop: ctx context.Context is an interface value, interval is a time.Duration (which is just an int64), and Client again is an interface.

PrefetchLoop annotated:

github.com/joeshaw/example.PrefetchLoop(
    0xacfd60,       // ctx context.Context interface, type pointer
    0xc420395480,   // ctx context.Context interface, value pointer
    0x13a52453c000, // interval time.Duration (6h0m)
    0xad3c60,       // c Client interface, type pointer
    0xc420257300,   // c Client interface, value pointer
)

As I mentioned earlier, it should not have been possible for resp to be nil, because that should only happen when the returned error is not nil. The culprit was in code which was erroneously using the github.com/pkg/errors Wrapf() function instead of Errorf().

// Function returns (*Response, []byte, error)

if resp.StatusCode != http.StatusOK {
    return nil, nil, errors.Wrapf(err, "got status code %d fetching response %s", resp.StatusCode, url)
}

Wrapf() returns nil if the error passed into it is nil. This function erroneously returned nil, nil, nil when the HTTP status code was not http.StatusOK, because a non-200 status code is not an error and thus err was nil. Replacing the errors.Wrapf() call with errors.Errorf() fixed the bug.

Understanding and contextualizing panic output can make tracking down errors much easier! Hopefully this information will come in handy for you in the future.

Thanks to Peter Teichman, Damian Gryski, and Travis Bischel who all helped me decode the panic output argument lists.

November 09, 2017

Welcome To The (Ubuntu) Bionic Age: A new ubuntu default theme, call for participation!

Call for participation: an ubuntu default theme led by the community?

As part of our Unity 7 to GNOME Shell transition in 17.10, we had a Fit and Finish sprint at the London office last August to get the Shell feeling more like Ubuntu, and we added some tweaks to our default GTK theme.

The outcome can be seen in the following posts:

Some more refinements came in afterward and the final 17.10 has a slightly modified look, but the general feedback we got from the community is that the ubuntu GNOME Shell session really looks and feels like ubuntu.

So I guess, objective completed (next level, attribute your earned point to person’s skills… :p).

Default ubuntu 17.10 desktop

All done?

However, as in any good RPG, this isn’t the real end of the story (there is a next boss!): we also heard some people (on the community hub, in blog post comments) asking for a more drastic refresh of our theme, and we generally agree with this. It would be a good idea to rebase and refresh our theme with the help of the community!

For any themes, there are multiple parts:

  • The Shell theme itself (css).
  • The GTK 3 and GTK 2 themes. The first one uses CSS, the second is some C code.
  • An icon theme.

Here is thus a call for participation if you are interested in joining us on that journey. The idea is to have a few people (I think 2-3 people, plus Alan Pope and I) who have already contributed to popular Shell or GTK themes leading this project. That way, we can define which changes to the theme feel like “ubuntu” and which don’t. We will coordinate all the work on the community hub to ensure that every decision is public and explained.

We will sync regularly with the Canonical design team (we already have one meeting at the end of the month) to check progress and get advice. Once the Shell and GTK themes are ready, we’ll switch the ubuntu default to them. That may or may not happen for the LTS, depending on progress. If it’s not ready to be switched by default, we will give instructions for our advanced users to get a taste of what’s currently cooking. :)

How do we get that going?

The idea is to restart from scratch, based on upstream (GNOME) work. Indeed, the current ubuntu theme is pure CSS, while upstream uses Sass. The Shell theme itself didn’t deviate much, but the general idea is:

  • For GTK 3 apps, start from Adwaita (the default GNOME theme) and change constants and slight behaviors in the Sass files. That way, we don’t deviate too much and it will be easy to rebase on the numerous theme changes every new GTK release brings.
  • For the Shell, use their Sass files and tweak them, for the same reasons.
  • For the icon theme, we might start from our Unity 8 Suru icon set.

Also, even if we hope that contributors from popular themes like United and others will come along, starting back from upstream’s playground provides a nice, neutral and clean ground. That doesn’t prevent us from cherry-picking from what already exists. :)

Come with us!

Anyone can contribute (preferably via pull requests on the projects we will create). However, in design it’s always easier to have ideas than to come up with concrete technical help ;). This is why this call exists: to get an idea of the number of people willing to spend some time on this project. The needed skills are either CSS (we’ll use Sass, more on that later) or C GTK theming. Also, icon designers are more than welcome. :)

Excited? Join us! If you are interested (in either leading or just contributing), please post your intents and ideas on this community hub topic. Also, please mention your technical skills so that we can get an idea of the amount of awesome help we’ll get!

We’ll of course post more info in the same desktop section once we are ready to kick off the project! We hope you are as thrilled by this project as we all are. ;)

November 08, 2017

Software Freedom Law Center and Conservancy

Before I start, I would like to make it clear that the below is entirely my personal view, and not necessarily that of the GNOME Foundation, the Debian Project, or anyone else.

There’s been quite a bit of interest recently about the petition by Software Freedom Law Center to cancel the Software Freedom Conservancy’s trademark. A number of people have asked my views on it, so I thought I’d write up a quick blog on my experience with SFLC and Conservancy both during my time as Debian Project Leader, and since.

It’s clear to me that for some time, there’s been quite a bit of animosity between SFLC and Conservancy, which for me started to become apparent around the time of the large debate over ZFS on Linux. I talked about this in my DebConf 16 talk, which fortunately was recorded (ZFS bit from 8:05 to 17:30).

 

This culminated in SFLC publishing a statement, and Conservancy also publishing their statement, backed up by the FSF. These obviously came to different conclusions, and it seems bizarre to me that SFLC who were acting as Debian’s legal counsel published a position that was contrary to the position taken by Debian. Additionally, Conservancy and FSF who were not acting as counsel mirrored the position of the project.

Then, I hear of an even more confusing move – that SFLC has filed legal action against Conservancy, despite being the organisation they helped set up. This happened on the 22nd September, the day after SFLC announced corporate and support services for Free Software projects.

SFLC has also published a follow-up, in which they say that the act “is not an attack, let alone a ‘bizarre’ attack”, and that the response from Conservancy, who view it as such, “was like reading a declaration of war issued in response to a parking ticket”. Then, while SFLC somehow regard the threat of having your trademark taken away as something other than an attack, they also state: “Any project working with the Conservancy that feels in any way at risk should contact us. We will immediately work with them to put in place measures fully ensuring that they face no costs and no risks in this situation.”, which I read as a direct pitch to try and pull projects away from Conservancy and over to SFLC.

Now, even if there is a valid claim here, despite the objections that were filed by a trademark lawyer who I have a great deal of respect for (disclosure: Pam also provides pro-bono trademark advice to my employer, the GNOME Foundation), the optics are pretty terrible. We have a case of one FOSS organisation taking another one to court, after many years of them being aware of the issue, and when wishing to promote a competing service. At best, this is a distraction from the supposed goals of Free Software organisations, and at worst is a direct attempt to interrupt the workings of an established and successful umbrella organisation which lots of projects rely on.

I truly hope that this case is simply dropped, and if I was advising SFLC, that’s exactly what I would suggest, along with an apology for the distress. Put it this way – if SFLC win, then they’re simply displaying what would be viewed as an aggressive move to hold the term “software freedom” exclusively to themselves. If they lose, then it shows that they’re willing to do so to another 501(c)3 without actually having a case.

Before I took on the DPL role, I was under the naive impression that although there were differences in approach, at least we were coming to try and work together to promote software freedoms for the end user. Unfortunately, since then, I’ve now become a lot more jaded about exactly who, and which organisations hold our best interests at heart.

(Featured image by  Nick Youngson – CC-BY-SA-3.0 – http://nyphotographic.com/)

GtkSourceView fundraising – September/October report

Two months ago I launched a fundraiser for the GtkSourceView library. I intend to write a report every two months, so that you can follow what’s going on in that project, and at the same time I can explain in more detail some facets of the fundraising.

Only one maintainer (me)

For most of the code in GtkSourceView, there is basically only one remaining maintainer: myself. Tobias Schönberg helps to maintain and review some *.lang files (for the support of syntax highlighting), especially in the area of web development. But that’s it.

Here are the top 5 contributors, in terms of number of commits:

$ git shortlog -sn | head -5
1372 Sébastien Wilmet
531 Paolo Borelli
289 Ignacio Casal Quinteiro
200 Jesse van den Kieboom
149 Yevgen Muntyan

So you can see that I’m the top contributor. All the other developers also contributed during their free time, and they no longer have enough free time (they have unrelated full-time jobs, etc.).

So in some way, the future of GtkSourceView rests in my hands.

File loading and saving

These past weeks my focus was on file loading and saving in Tepl (the incubator for GtkSourceView).

There are several layers for the file loading and saving:

  • The backend/toolkit part, i.e. the low-level API; it’s what I added to GtkSourceView in 2014, but it needs improvements.
  • The high-level API taking care of the frontend, part of the Tepl framework.
  • Some features that are built on top of the high-level API, for example an action to save all documents, or the auto-save to automatically save a document periodically.

For the first layer, the backend/toolkit part, Tepl provides two new things: file metadata (for example to save the cursor position) and a new file loader based on uchardet, to improve character encoding auto-detection. This past month I’ve improved the new file loader; it is now in good enough shape for most applications (but it still lacks some features compared to the old file loader, so some work remains before the old implementation can be deprecated).
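The general idea behind encoding auto-detection can be illustrated with a toy fallback chain. This is a simplification for illustration only – the real loader is C code built on uchardet's statistical detection, not this Python sketch:

```python
# Toy illustration of character-encoding auto-detection with a fallback
# chain. This is NOT the Tepl/uchardet implementation; it only shows the
# general idea: try strict candidate encodings first, and fall back to a
# permissive one that always succeeds.

CANDIDATE_ENCODINGS = ["utf-8", "utf-16", "latin-1"]  # illustrative list

def detect_and_decode(raw):
    """Return (text, encoding) for the first candidate encoding that
    decodes the byte sequence without errors."""
    for encoding in CANDIDATE_ENCODINGS:
        try:
            return raw.decode(encoding), encoding
        except UnicodeDecodeError:
            continue
    # latin-1 never fails, so this is only reached with an empty list
    raise ValueError("no candidate encoding matched")

if __name__ == "__main__":
    print(detect_and_decode("héllo".encode("utf-8"))[1])   # utf-8
    print(detect_and_decode(b"h\xe9llo")[1])               # latin-1
```

A real detector also has to handle bytes that are *valid* in several encodings, which is where uchardet's statistics come in; a strict-first fallback chain alone cannot distinguish those cases.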

For the second layer, I’ve started to create the high-level API. Creating the API is not the most difficult part; the bulk of the work will be to improve what the implementation does internally (creating infobars, handling errors, etc.).

The third layer has not yet started.

File loading and saving was not the only thing I did these past two months; a lot of other smaller things have been done as well. For more details, see the NEWS files.

Conclusion

Even if GtkSourceView already provides a lot of features, it is far from sufficient for creating even a basic text editor (with an implementation of good quality). To give a concrete example, the core of gedit – if we remove all the plugins – is currently made of 40,000 lines of code! That’s a lot of work for a developer who wants to create a specialized text editor or a new IDE.

So my goal with GtkSourceView and Tepl is to make more code re-usable.

See the GtkSourceView fundraising on Liberapay. Thanks for your support!

November 07, 2017

EXCLUSIVE: Texas Massacre Hero, Stephen Willeford, Describes Stopping Gunman



To donate to the Sutherland Springs Baptist Church to help them recover from this tragedy, check out this GoFundMe campaign.


Hardware CI Tests in fwupd

Near the end of the process of getting a vendor onto the LVFS, I normally ask them to send me hardware for the tests. Once we’ve got a pretty good idea that the hardware update process is going to work with fwupd (i.e. they’re not insisting on some statically linked ELF to be run…) and when they’ve got legal approval to upload the firmware to the LVFS (without an eye-wateringly long EULA), we start thinking about how to test the hardware. Once we say “Product Foo from Vendor Bar is supported in Linux” we’d better make damn sure it doesn’t regress when something in the kernel changes or when someone refactors a plugin to support a different variant of a protocol.

To make this task a little more manageable, we have a little Python script that helps automate testing of the devices that can be persuaded to enter DFU mode by themselves. To avoid chaos, I also have a little cardboard tray under a little HP Microserver with two 10-port USB hubs, with everything organised. Who knew paper-craft would be such an important skill at Red Hat…
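The script itself isn't shown here, but the shape of such a harness is roughly the following. This is a hypothetical sketch: the device names, GUIDs, and the exact fwupdmgr invocation are invented for illustration and the real script certainly differs:

```python
# Hypothetical sketch of a hardware-test harness in the spirit of the
# one described above: iterate over known devices, build the command
# that would exercise each one, and collect the results. The device
# list and command line are invented for illustration only.

import subprocess

DEVICES = [
    {"name": "Example DFU board A", "guid": "00000000-0000-0000-0000-0000000000aa"},
    {"name": "Example DFU board B", "guid": "00000000-0000-0000-0000-0000000000bb"},
]

def build_test_command(device):
    """Return the command line that would update the device
    (illustrative; not the real script's invocation)."""
    return ["fwupdmgr", "update", device["guid"]]

def run_all(dry_run=True):
    """In dry-run mode just report the commands; otherwise run them."""
    results = {}
    for device in DEVICES:
        cmd = build_test_command(device)
        if dry_run:
            results[device["name"]] = " ".join(cmd)
        else:
            results[device["name"]] = subprocess.call(cmd)
    return results

if __name__ == "__main__":
    for name, outcome in run_all().items():
        print(name, "->", outcome)
```

The dry-run mode is the useful design choice here: the harness can be exercised end-to-end on a machine with no test hardware attached at all.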

As the astute might notice, much of the hardware is a bare PCB. I don’t actually need the complete device for testing, and much of the donated hardware is actually a user return, has a cosmetic defect, or is just a pre-release PCB without the actual hardware attached. This is fine, and actually preferable to the entire device – I only have a small office!

As much of the hardware needs special handling to put it in update mode, we can’t 100% automate this task, and sometimes it really is just me sitting in front of the laptop pressing and holding buttons for 30 minutes before uploading a tarball, but it sure is comforting to know that firmware updates are tested like this. As usual, thanks should be directed to Red Hat for letting me work on this kind of stuff; they really are a marvelous company to work for.

Linux on the T470s, suspend and fan noise


Ever since we (Red Hat's Desktop Hardware Enablement Team) received the 2017 models from Lenovo for testing (e.g. the T470s), we experienced an issue (rhbz#1480844) where sometimes the fan would run at 100% after resuming from suspend. A warm reboot alone would not make the fan go back to normal; a hard reboot was required. The behavior seems to be the result of a firmware bug combined with kernel ACPI changes. Patches for 4.13 reduced the likelihood of the noisy issue appearing. Additionally, we have been working together with Lenovo to fix the firmware side, and I am happy to report that for the T470s Lenovo recently released a new firmware that should completely fix the issue. Since Lenovo is not yet(!) part of the Linux Vendor Firmware Service, updating the BIOS is currently not super straightforward. Thankfully, Jeff has provided detailed instructions on how to do this from GNU/Linux only.

summing up 92

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

Crapularity Hermeneutics, by Florian Cramer

The problem of computational analytics is not only in the semantic bias of the data set, but also in the design of the algorithm that treats the data as unbiased fact, and finally in the users of the computer program who believe in its scientific objectivity.

From capturing to reading data, interpretation and hermeneutics thus creep into all levels of analytics. Biases and discrimination are only the extreme cases that make this mechanism most clearly visible. Interpretation thus becomes a bug, a perceived system failure, rather than a feature or virtue. As such, it exposes the fragility and vulnerabilities of data analytics. 

The paradox of big data is that it both affirms and denies this “interpretative nature of knowledge”. Just like the Oracle of Delphi, it is dependent on interpretation. But unlike the oracle priests, its interpretative capability is limited by algorithmics – so that the limitations of the tool (and, ultimately, of using mathematics to process meaning) end up defining the limits of interpretation. 

we're talking a lot about the advancement of computational analytics and artificial intelligence, but little about their shortcomings and effects on society. one of those effects is that for our technology to work perfectly, society has to dumb itself down to level the playing field between humans and computers. a very long essay, but definitely one of the best i read this year.

Resisting the Habits of the Algorithmic Mind, by Michael Sacasas

Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

our reliance on machines to make decisions for us leads us to displace the most important human elements in favor of cheaper and faster technology. in doing so, however, we outsource meaning-making, moral judgement and feeling – which is what a human being is – to machines.

Your Data is Being Manipulated, by Danah Boyd

The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

we're past the point where developing fancy new technologies is a fun project for college kids. our technologies have real implications for the world, for our culture and society. nevertheless we seem to lack a moral framework for how technology is allowed to alter society.

November 06, 2017

Phoropter: A Vision Simulator

After porting Aaron’s NoCoffee extension to Firefox, I thought it would be neat to make a camera version of that. Something you can carry around with you, and take snapshots of websites, signs, or print material. You can then easily share the issues you see around you.

I’m calling it Phoropter, and you can see it here (best viewed with Chrome or Firefox on Android).

I could imagine this is what Pokémon Go is like if instead of creatures you collected mediocre designs.

Say you are looking at a London Underground map, and you notice the legend is completely color reliant. Looking through Phoropter you will see what the legend would look like to someone with protanopia, red-green color blindness.


You can then grab a snapshot with the camera icon and get a side-by-side photo that shows the difference in perception. You can now alert the transit authorities, or at least shame them on Twitter.

A side-by-side snapshot of the London Tube's legend with typical vision on the left and protanopia on the right

Once you get into it, it’s quite addictive. No design is above scrutiny.

A page from a workbook displayed side-by-side with typical and green-red blindness.

I started this project thinking I could pull it off with CSS filters on a video element, but it turns out that is way too slow. So I ended up using WebGL via glfx.js. I tried to make it as progressive as possible; you can add it to your home screen. I won’t bore you with the details – check out the source when you have a chance.
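For the curious, the core of such a filter is just a per-pixel color matrix. The sketch below uses one commonly circulated linear approximation of protanopia; Phoropter itself does this work on the GPU in a WebGL shader, not in Python:

```python
# Rough illustration of simulating protanopia as a 3x3 matrix applied
# to each RGB pixel. The matrix is one commonly circulated linear
# approximation; real simulators (and Phoropter's WebGL shaders) may
# use more sophisticated models.

PROTANOPIA = [
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
]

def simulate(rgb, matrix=PROTANOPIA):
    """Apply a 3x3 color matrix to one (r, g, b) pixel, 0-255 channels."""
    r, g, b = rgb
    return tuple(
        min(255, round(row[0] * r + row[1] * g + row[2] * b))
        for row in matrix
    )

if __name__ == "__main__":
    # Pure red and pure green both collapse toward muddy yellow-greens,
    # which is exactly why red/green map legends become ambiguous.
    print(simulate((255, 0, 0)))
    print(simulate((0, 255, 0)))
```

Each matrix row sums to 1, so greys (including white) pass through unchanged – only hue information is lost.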

There are still many more filters I can add later. In the meantime, open this in your mobile browser and,

Collect Them All!


FOSDEM 2018 Hardware Enablement Devroom Call for Participation

FOSDEM 2018 is approaching fast. There will be a Hardware Enablement Devroom, among many other very interesting ones. We invite everybody to come and participate:

Important dates

  • Conference date: 3 & 4 February 2018 in Brussels, Belgium
  • Devroom date: Sunday 4 February 2018
  • Submission deadline: Sunday 26 November 2017
  • Speakers notified: Sunday 10 December 2017

About

In this devroom we want to discuss topics surrounding hardware enablement. Subjects can range from the firmware running on the bare metal machine, drivers and plumbing all the way to the user interface.
We welcome a broad range of presentations, including but not limited to technical talks, state-of-the-union summaries, and discussions that facilitate collaboration between community members, software vendors and OEMs. A particular emphasis will be given to talks covering a significant part of the software stack involved in hardware enablement, with an obvious focus on using open source throughout the whole stack.

Visit https://fosdem.org for general information about FOSDEM.

Talk Format

  • To cover the wide range of topics we will prefer short talks (about 15-25 minutes). Please include at least 5 minutes for discussions and questions.
  • Presentations will be recorded and streamed. Sending your proposal implies giving permission to be recorded. Exceptions may be possible.
  • Proposals need to be submitted via pentabarf  (see "Submission" below for details)

Topics & Examples

  • UX design to enable users to use their HW effectively
  • Firmware:
    • coreboot
    • flashrom
    • UEFI EDK2 (Tianocore)
    • Security
    • Lockdown of platform using firmware
    • Updating
  • Secure Boot
  • Hardware testing / certification
  • Thunderbolt 3 security modes
  • Gaming input devices (keyboards, mice, Piper)
  • Biometric authentication
  • Miracast or controlling remote devices
  • Why vendors should facilitate upstream development

Submission

The FOSDEM Pentabarf is used for submission and scheduling:

  • Please reuse an existing account; otherwise register a new one.
  • Please provide your full name and email address. If you provide a bio, it will be publicly visible.
  • Create a new event:
    • Select "Hardware Enablement devroom" as the track
    • Provide a descriptive title
    • Provide a public abstract for your talk
    • Add any further information for paper review into the submission notes (e.g. outline, why this devroom)

Sunset

Edit: Since writing this post, I’ve been re-hired by another UX team at Oracle. I apologise to anyone who thinks this makes me a Bad Person.

Sun badge photo, August 2000

Seventeen years and a few months after I joined Sun, today is my last day at Oracle.

I was already a grizzled 7-year usability veteran when I moved to Ireland in 2000 to work on GNOME for Solaris, and by extension, try to help the GNOME community figure out how to focus on and deal with usability issues. While it’s been a handful of years since I last actively did that, I’m posting this from our latest build of GNOME 3 on Solaris, so I guess I didn’t completely break everything.

Presenting Sun's GNOME usability study results, GUADEC 2001

I’ve no idea what I’ll be doing next, but I do know it’s been a privilege to work with some of the smartest people in tech, not least in the GNOME community and the related open source projects that I worked on over the years. So to all of you reading this, thanks for that.

If anyone’s looking for a UX designer based just far enough outside Dublin that he’d prefer not to have to commute there every day, you can find me on LinkedIn, or any of the other places on the interwebs you’d expect. (As an emeritus foundation member, I even still have a gnome.org email address… but don’t use that as I haven’t updated my .forward file yet!)

Experiments with crosvm

Last week I played a bit with crosvm, a KVM monitor used within Chromium OS for application isolation. My goal is to learn more about the current limits of virtualization for isolating applications in mainline. Two of crosvm's defining characteristics are that it's written in Rust for increased security, and that it uses namespaces extensively to reduce the attack surface of the monitor itself.

It was quite easy to get it running outside Chromium OS (I have been testing with Fedora 26), with the only complication being that minijail isn't widely packaged in distros. In the instructions below we hack around the issue with linker environment variables so we don't have to install it properly. Instructions are in the form of shell commands, for illustrative purposes only.

Build kernel:
$ cd ~/src
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ cd linux
$ git checkout v4.12
$ make x86_64_defconfig
$ make bzImage
$ cd ..
Build minijail:
$ git clone https://android.googlesource.com/platform/external/minijail
$ cd minijail
$ make
$ cd ..
Build crosvm:
$ git clone https://chromium.googlesource.com/a/chromiumos/platform/crosvm
$ cd crosvm
$ LIBRARY_PATH=~/src/minijail cargo build
Generate rootfs:
$ cd ~/src/crosvm
$ dd if=/dev/zero of=rootfs.ext4 bs=1K count=1M
$ mkfs.ext4 rootfs.ext4
$ mkdir rootfs/
$ sudo mount rootfs.ext4 rootfs/
$ sudo debootstrap testing rootfs/
$ sudo umount rootfs/
Run crosvm:
$ LD_LIBRARY_PATH=~/src/minijail ./target/debug/crosvm run -r rootfs.ext4 --seccomp-policy-dir=./seccomp/x86_64/ ~/src/linux/arch/x86/boot/compressed/vmlinux.bin
The work ahead includes figuring out the best way for Wayland clients in the guest to interact with the compositor in the host, and also for guests to make efficient use of the GPU.

November 04, 2017

GNOME Asia 2017

Finally, I got the opportunity to write about my first, and awesome, GNOME Asia 2017. This year is a special year for GNOME, as it's the 20th anniversary of GNOME and the 10th anniversary of the GNOME Asia conference.

GNOME Asia was hosted at Chongqing University in Chongqing this year, a city known as a 3D city built on and around mountains. It was also my first visit to China. I was excited.

The conference organization committee demonstrated top-quality management of the event and participants. Be it the awesome welcome dinner or arranging for people like me to be picked up at the airport, I am happy with and applaud everyone who was involved in giving a more-than-smooth experience overall. Some special mentions I am very grateful for:

  • Airport Pickup
  • I had booked an Airbnb apartment near the Liyuan hotel, so one of the members helped me find my apartment, talked to the owner (who speaks only Mandarin) and got things sorted for me
  • Day trip (that was a great idea)

The conference had so many educational talks. Nuritzi's keynote was fantastic; Flatpaks by Matthias Clasen, Endless OS by Cosimo, and Lennart Poettering's fully protected desktop talk got a bit technical for me, but I was content that I understood the main highlights. There were many more that were good to watch and engage in Q&A with. I was pretty intrigued by the rpm+OSTree talk by David King. I also managed to give a lightning talk on the Linux graphics stack. Overall, it was fun, and I was glad that I got the opportunity to meet all these people in person this year, as I missed GUADEC this year :-(

Again, I would like to thank all the members and volunteers who made GNOME Asia a huge success. I am now friends with a few of them, hopefully remember their names, and am even in touch with some. I am hoping to meet you all next year o/

Some captures:


Thank you GNOME Foundation for sponsoring my travel and accomodation.


November 03, 2017

GNOME.Asia 2017

GNOME.Asia 2017 was held in ChongQing, China. It was my first time in ChongQing, and I liked it very much in many aspects. The city is built around mountains, so there are lots of roads that are not straight, which is completely different from the roads in Beijing. There are lots of ups and downs, too. That's why you can barely see anyone riding a bike there; it would be dangerous and tiring. Besides, there are lots of overpasses, which makes the city more 3D. The city is also built along the Yangtze River, so you can see many bridges (like London, I think). Here are some photos of the city:

Back to the conference. I really liked the keynote titled “The Future of GNOME is You” given by Nuritzi. It told the students that they can shape the future of GNOME by starting to contribute. That's also a goal of GNOME.Asia: to get more people involved in GNOME and open source.

I gave a lightning talk on the first afternoon. It was about Google Summer of Code. In China, not many university students know about this program. I applied in 2015 and I learned a lot from the whole process. It was an amazing experience, so I hope more students hear about it and apply.

The conference was really a success, and I have to thank the local organizers, volunteers and everyone who made it happen.

Here are some random photos from this trip. Enjoy.

Thanks GNOME Foundation for supporting my trip and my employer SUSE for my time and this trip.

November 02, 2017

Quirks in fwupd as key files

In my previous blog post I hinted that you just have to add one line to a data file to add support for new AVR32 microcontrollers; this blog entry gives a few more details.

A few minutes ago I merged a PR that moves the database of supported and quirked devices out of the C code and into files loaded at runtime. When fwupd is installed in long-term-support distros it’s very hard to backport new versions as new hardware is released. The idea with this functionality is that the end user can drop an additional file (or replace an existing one) in a .d directory, using a simple format, and the hardware will magically start working. This assumes no new quirks are required, as that would obviously need code changes, but it allows us to get most existing devices working in an easy way without the user compiling anything.

The quirk files themselves are simple key files and are documented in the fwupd gtk-doc documentation.
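To give a flavour of the format: a quirk file is an INI-style key file. The group and key names below are invented for illustration – the real ones are in the fwupd gtk-doc documentation – but the parsing model is the same:

```python
# Sketch of the general idea behind runtime-loaded quirk files: quirk
# data lives in GKeyFile-style (INI-like) text files, so supporting a
# new device means dropping in a file rather than patching C code.
# The group and key names here are HYPOTHETICAL, not fwupd's real ones.

import configparser

QUIRK_FILE = """
[USB\\VID_03EB&PID_2FF4]
DeviceName = AT32UC3A3 (hypothetical entry)
Flags = needs-runtime-detach
"""

def load_quirks(text):
    """Parse key-file text into {group: {key: value}}."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keep key case, as GKeyFile does
    parser.read_string(text)
    return {section: dict(parser[section]) for section in parser.sections()}

if __name__ == "__main__":
    quirks = load_quirks(QUIRK_FILE)
    print(quirks["USB\\VID_03EB&PID_2FF4"]["Flags"])
```

In fwupd itself the parsing is of course done in C with GKeyFile; this sketch just shows why a one-line addition to such a file is enough to describe a new device.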

November 01, 2017

scikit-survival 0.4 released and presented at PyCon UK 2017

I'm pleased to announce that scikit-survival version 0.4 has been released.

This release adds CoxnetSurvivalAnalysis, which implements an efficient algorithm to fit Cox’s proportional hazards model with LASSO, ridge, and elastic net penalties. This allows fitting a Cox model to high-dimensional data and performing feature selection. Moreover, the release includes support for Windows with Python 3.5 and later by making the cvxopt package optional.
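For intuition, the elastic net combines the LASSO's l1 penalty with ridge's l2 penalty. The sketch below is not the Coxnet algorithm (which optimizes the penalized Cox partial likelihood); it only illustrates the coordinate-descent update that the combined penalty induces on a single standardized coordinate:

```python
# Toy illustration of the elastic-net coordinate update:
# soft-thresholding (the LASSO part) followed by shrinkage (the ridge
# part). This is NOT the Coxnet algorithm itself, which applies these
# mechanics to the Cox partial likelihood rather than least squares.

def soft_threshold(z, threshold):
    """S(z, t) = sign(z) * max(|z| - t, 0) -- the LASSO operator."""
    if z > threshold:
        return z - threshold
    if z < -threshold:
        return z + threshold
    return 0.0

def elastic_net_update(z, lam, alpha):
    """One coordinate update: soft-threshold by the l1 part
    (lam * alpha), then shrink by the l2 part (lam * (1 - alpha))."""
    return soft_threshold(z, lam * alpha) / (1.0 + lam * (1.0 - alpha))

if __name__ == "__main__":
    # alpha=1 is pure LASSO: small coefficients are zeroed out entirely,
    # which is what performs the feature selection.
    print(elastic_net_update(0.3, 0.5, 1.0))   # 0.0
    # alpha<1 mixes in ridge: surviving coefficients are also shrunk.
    print(elastic_net_update(2.0, 0.5, 0.5))   # 1.4
```

The alpha parameter thus trades off sparsity (LASSO) against the grouping/stability behaviour of ridge, which is why elastic net is attractive for correlated high-dimensional features.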

Download

You can install the latest version via Anaconda (OSX and Linux):

conda install scikit-survival

or via pip (all platforms):

pip install -U scikit-survival

PyCon UK

Last week, I presented an Introduction to Survival Analysis with scikit-survival at PyCon UK in Cardiff, in front of a packed audience of genuinely interested people. I hope some of them will give scikit-survival a try and use it in their work.

The slides of my presentation are available at https://k-d-w.org/pyconuk-2017/.

GNOME Bug squash month

I feel like I have failed as a maintainer of GNOME modules: I have been busy lately with other tasks and could not really keep up with my maintainer duties and bugfixing. But it is November again, Bug Squash Month for GNOME. I will do my best to take up the challenge and do the 5-a-day (5 bugs triaged per day) for GNOME this month.


Today I had a couple of comments and fixes on System Monitor and Calculator, and I will probably continue on these two tomorrow, then jump to the games afterwards. If you have any annoyances, or would like me to prioritize certain bugs (preferably from libgtop, system-monitor, gnome-calculator, swell-foop, lightsoff, five-or-more, atomix or gnome-mines), just let me know and I will do my best.

Feeds