GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

February 06, 2016

Refocus

This is a more personal blog post. Sometimes all those GNOME programming projects drive me crazy. These last few weeks I needed to rest (that’s why I attended only one afternoon of the Developer Experience hackfest, and why I didn’t go to FOSDEM this year, despite living nearby). Yeah, it can happen.

It was time to shift my focus a little.

Over the years I created or got involved in more and more projects. When we want to go in too many directions, we make little progress in each. So it’s better to focus only on one or a few projects, and try to make significant progress in those.

Projects that I will no longer actively develop or maintain (at least not during my free time):

Project that I want to “finish” this development cycle:

Project that I’m excited about for the next development cycle:

Other projects that I care about and that I want to do in the future, without any guarantees:

  • Improving file printing in gedit, to be like in Firefox
  • Writing a new chapter in my GLib/GTK+ tutorial

Note that for that last item, I would use LaTeXila of course, so if there are regressions due to library changes (you know perfectly well which library in particular I mean), I’ll probably become aware of them and fix them. Without any guarantees, I repeat. So if someone wants to take over LaTeXila maintenance, I would be more than happy – on the condition that I can still recognize my old pet project afterwards and that it remains mostly bug-free.

Cat non-fooding

Why such a change of focus? Devhelp was initially a side project for me, and making the gedit source code more re-usable was my main focus during these last years (though along the way that also involved fixing bugs, improving code readability and generally making the codebase less of a mess).

To explain the refocus, some facts/timeline:

  • I’ve almost always used Vim to program in C.
  • For LaTeX I also used Vim at first, but at some point I discovered Kile (a Qt application), because for LaTeX I found that a GUI application has some convenient features.
  • During the summer of 2009, I started to write LaTeXila – a new LaTeX editor based on GtkSourceView – to have a LaTeX editor following the GNOME philosophy and based on GTK+.
  • I discovered Vala, and there was a nice gedit plugin for Vala auto-completion, so I ended up using gedit to develop LaTeXila.
  • At some point I started to contribute to GtkSourceView to fix some bugs, and since it is written in C, I used Vim.
  • I got hooked on GtkSourceView development and became a co-maintainer, with the goal of making more of gedit’s code re-usable, to share more code between gedit and LaTeXila.
  • Then my dislike of Vala grew and grew.
  • I’m no longer a student, so I now rarely use LaTeX.

Basically, I now use gedit mainly for non-programming text editing tasks, e.g. drafting a blog post. In fact, spell checking is more convenient to use in a GUI application than in Vim. My other use-case for gedit is printing a text file. That’s it.

And cat-fooding is important (I’m more of a cat person, so I prefer to talk about cat-fooding, or the lack thereof).

For programming, I’m now used to my tmux session with a ‘git’ tab, a ‘build’ tab and several files opened in Vim in the following tabs. I can do everything with the keyboard; I can program for an hour without touching the mouse. I think I would be much less productive if I went back to a terminal + gedit.

But I’m convinced that specialized text editors are easier to use and to configure for the first time than a general-purpose text editor with plugins, tons of options, lengthy configuration files and the like. And developing libraries to ease the creation of new text editors was an interesting technical project for me.

Anyway, Devhelp is an application that I use heavily, so that’s why I want to improve it. And it’s still in the Developer Experience field. And with the Vim mode of GNOME Builder, maybe one day I’ll come back to a GUI text editor for programming.

Petula Clark – Downtown

February 05, 2016

FOSDEM & Dev-x Hackfest 2016

Last week I travelled to Brussels to attend and present at FOSDEM. Since I was going there anyway, I decided to also join the GNOME Developer Experience Hackfest for 2.5 days.

Travelling to Brussels is usually pretty easy and fast thanks to Eurostar, but I turned it into a bit of a nightmare this time. I had completely forgotten how London public transport is a total disaster at peak hours, and hence ended up arriving too late at the station. Not a big deal, they put me on the next train for free. I decided to go through security already, and that's when I realized that I had forgotten my laptop at home. :( Fortunately my nephew (who is also my flatmate) was still at home and was going to travel to the city centre anyway, so I asked him to bring it with him. After two hours of anxious waiting, he managed to arrive just in time for the train staff to let in the very last late-arriving passenger. Phew!

While I didn't have a particular agenda for the hackfest, I had a discussion with Alexander Larsson about sandboxing in xdg-app and how we will implement per-app authorization to location information from Geoclue. The main problem has always been that we had no means of reliably identifying apps, and it turns out that xdg-app has already solved that problem: each xdg-app has its ID (i.e. the name of its desktop file without the .desktop suffix) in its /proc/PID/cgroup file, and the app cannot change that.
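
To make this concrete, here is a small Python sketch of how a service could extract the app ID from that file. The exact layout of the cgroup path is an assumption on my part (I use a made-up scope name of the form xdg-app-<ID>-<PID>.scope), so treat the parsing as illustrative rather than as the real Geoclue code:

def xdg_app_id(pid):
    # Illustrative only: assumes the systemd scope name embeds the app
    # ID, e.g. a path segment like "xdg-app-org.gnome.Maps-1234.scope".
    with open("/proc/%d/cgroup" % pid) as f:
        for line in f:
            # Each line looks like "N:controllers:/path/of/the/cgroup".
            path = line.strip().split(":", 2)[2]
            for segment in path.split("/"):
                if segment.startswith("xdg-app-") and segment.endswith(".scope"):
                    name = segment[len("xdg-app-"):-len(".scope")]
                    return name.rsplit("-", 1)[0]  # drop the trailing PID
    return None  # not a sandboxed app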

So I sat down and started working on this. I was able to finish the Geoclue part of the solution before the hackfest ended, and I am now working on the gnome-shell part (currently the only Geoclue app-authorizing agent). Once done, I'll add settings to gnome-control-center so users can change their mind about whether or not they want an app to be able to access their location. Other than that, I helped test a few xdg-app bundles.

It's important to keep in mind that this solution will still involve trusting system (non-xdg-app) applications, as there is no way to reliably identify those. I.e. if you download a random script from the internet and run it, we cannot possibly guarantee that it won't access your location without your consent. Let's hope that xdg-app becomes ubiquitous and turns into the de-facto standard for distributing Linux apps in the near future.

FOSDEM was a fun weekend as usual. I didn't attend a lot of talks, but I met many interesting people and we chatted about various open source technologies. I was glad to hear that a project I started as a simple proof-of-concept for GUPnP is nowadays used in automobiles.

My own talk about geospatial technologies in GNOME went fine, except that I ran out of time towards the end and my Raspberry Pi demo didn't work because I forgot to plug in the WiFi adaptor. :( Still, I was able to cover most of the topics, and the Maps demo worked pretty smoothly (I hit a weird libchamplain bug, but it wasn't critical at all).

While I came back home pumped with a lot of motivation, unfortunately I managed to catch the infamous FOSDEM flu. I've been resting most of the week and today I started to feel better so I'm writing this late blog post as the first thing, before I completely forget what happened last week.

Oh, and last but not least, many thanks to the GNOME Foundation for sponsoring my train tickets.


Developer Experience Hackfest

Hi! Thanks to the sponsoring of the GNOME Foundation as well as the hosts (Betacowork) and volunteers organizing the Developer Experience Hackfest I had the privilege to attend it.

Putting the most important thing for you at the front: we have created a presentation templates repository at https://git.gnome.org/browse/presentation-templates, holding some GNOME/GUADEC-branded presentation templates. We have seen several other repositories popping up on services like GitHub holding GNOME-branded presentation templates, and decided that having a “more official” repository might get people to move their templates into a central location, so there is a single place to go for people who want to give presentations or create new templates. (I have also filed bugs against existing repositories to move into the GNOME template repository. If no bug has popped up yet for yours, please also try to move your slides.)

Among other things, I was happy to help Bastian bring some movement into the developer.gnome.org discussion (including spending a few hours on a hello-world internationalizable Flask application with documentation testing and generation) and to watch Christian write some pseudo-code for a coala plugin that integrates deeply into GNOME Builder. The latter may well result in a GSoC project – one way or another, we (the coala developers) look forward to the promised tutorials on plugins for Builder.

Cheers!

 

Storyboarding (1)

Today, let’s talk about storyboarding. Often considered optional on feature films, in animation bypassing this very important step would likely be self-destruction (unless you go for random or glitch animation; there are a few concept movies in festivals that may have been done without clear storyboarding, I suppose).

Once your scenario is done, the director can set up the stage. The scenario tells you that “in this scene, character A does this, while B does that other thing”. But the director will decide where the camera will be, how it will move, where A will appear from, whether B is perhaps already in the camera field from the start, where each item and background element is, and so on.

In feature films, the director’s choices are often bound by the filming set, and by the cameras as well. You cannot set your camera(s) just anywhere (at least not easily; prepping cameras in difficult conditions can sometimes take hours). This is probably why storyboards are often forsaken (unless 3D compositing is planned, in which case more preparation is needed, I guess): what you imagined at your desk may not be possible once you arrive on the set. Limitations aside, the layout of the location can also give you cool ideas that you couldn’t have had before. For instance you spot a tree with a shape so strange that you think “I have to get this tree in the camera field”. Just because it is there.
Another reason is that many directors love the freedom of “last minute changes”. I guess it is part of the creative process. In other words, flexibility matters more to them than preparation.
Note: my experience is limited, and only on French productions. On big Hollywood movies with a lot of 3D, the preparation steps would be a lot more detailed.

In animation films, on the other hand, flexibility is the root of all evil. Changing the script back and forth is no problem. But when you ask your animators and painters to draw detailed scenes, 24 frames per second, you cannot just tell them a month later: “oh, finally let’s have the character come from the right rather than the left, and have the camera pan differently”. Hey, that’s only 2 minutes of movie (2880 frames) wasted!
Also, animation cameras have absolutely no limitations other than the storyboarder’s and director’s minds. You can have a fixed camera, but also a camera falling from the sky if you want to. And you can have whatever you want on the screen: if you think a strangely-shaped tree is cool, just draw it in the background. So these cannot be reasons for last-minute changes, as they are in feature films.
These are the reasons why you need to stop being flexible at some point and “freeze” your choices. This happens in the passage from script to storyboard, which is the last stage where flexibility is allowed.

Contents of a storyboard page

Storyboard – scene 2 page 2

Here is an example from our storyboard for ZeMarmot. It has actually been modified since then – this is an older version – but it will do for the sake of explanation.

Header

The header is important: it contains the scene number and a page number, in order to keep your documents organized. As you can guess, Aryeom keeps these pages all well ordered in a ring binder.

Timeline

The timeline below contains key frames on the left, with all main elements (main characters and important items) in their expected rough position and direction, performing the scripted action.
This gives a very good idea of what will appear on screen, while not being perfectly detailed and measured yet. You can notice that some images extend beyond the frame format (like the 3rd frame on the above page), with a small “Pan” arrow indicating that a camera panning will be done here. The final background will therefore be drawn bigger than the frame.
This is an indication for the painter, the animator and the editor.

The right part contains various indications that matter for the script, like specific moods, action details of the character which may not be clear on a static image, specific sound effects, rhythm sync (when you need perfect sync with the music)… anything which needs precision. Indications are also spread over the images themselves on the left side, with arrows or various markers when that makes understanding easier.

Actually it is not uncommon to have a vertical panning drawn across several boxes of the storyboard, or a horizontal panning eating into the “comments” part. In the example below, the panning was as big as 3 frames, so Aryeom stuck an additional piece of paper to the left with scotch tape and drew over the comment box to the right:

Storyboard – Scene 2, page 3

This is where we see that perfectly organized boxes and fields are not always suited to every situation, and that it is good to improvise. :-)

Paper and Digital

Well, we are also quite technological people, right? Aryeom uses both mediums. She often draws the storyboard on paper when not in the studio, or when she wants the analog feel. But she also uses the computer to prepare the storyboard.

It is much easier to “add” boxes, or change some layout ideas, with a tablet pen, right? Here is a sneak peek of the storyboard in progress by Aryeom, drawn in GIMP, as usual:

Storyboard – scene 2 WIP

Well I hope you enjoyed the update and the storyboarding information. We will probably post more on this topic, so here again, I add a small (1) in the title. :-)


Note: as often, all the photographs and images in this blog post are works by Aryeom, under Creative Commons BY-SA 4.0 International.

DX Hackfest & FOSDEM

This is one of those back-to-work posts you intend to write and then kick yourself for forgetting… after a few starts this week, I finally managed to squeeze in the time to finish it.

Last week, thanks to Codethink, I was able to travel to Brussels and attend the DX Hackfest followed by FOSDEM. What follows is a rundown of things we did there.

Day 0

The Hackfest started on the 27th, so I arrived in Brussels on the 26th, bright and early after around 16 hours of travel including the layover. Feeling hungry, I stumbled out of my hotel room, which was downtown by Sainte-Catherine square, to fetch a kebab sandwich. I was thoroughly enjoying my messy pita and fries at a small kebab shack beside the church when, by coincidence, Juan Pablo came moseying by, admiring the view and taking pictures of the church. With a healthy streak of spicy mayonnaise dripping down my face, I called out his name so as not to miss him.

Juan and I had a bit of a chance to talk about what things we could accomplish in Glade in our short time in Brussels.

Of course, property bindings came up, which is something that we have wanted for a long time and that Denis Washington had attempted before as his GSoC project.

No, we did not implement that, but here are a few reasons why we determined it was a no-go for a few days of intense hacking:

Property Sensitivity

Glade has a notion of object properties having a sensitive or insensitive state, which is determined and driven by the widget adaptor of the object type owning a given property. This is typically used in cases where it makes no sense to set a given property; for instance, we make GtkLabel’s wrap-mode property insensitive when the label is not set to wrap.

When considering that a property can be set up as a binding target, it stands to reason that the bound property editor should also be insensitive, as it makes no sense to give it a value if its value is driven by another property. Further, it may also make no sense to allow binding of a property at all if the given target property is meaningless in the current widget’s configuration. So, for instance, when setting a GtkButton to use custom content instead of the icon name & label, we would have to undoably clear the binding state of the icon-name property as well as its value.

Cut, Copy & Paste

When we cut, copy and paste in Glade we do so with branches of an object hierarchy. Some interesting new cases we would have to handle include:

  • When we paste a hierarchy which contains a property source/target pair, we should have the new target property re-routed to the copied source object property.
  • When we paste a hierarchy which contains a bound property for which the source object is outside of the pasted hierarchy, we should maintain that binding in the pasted hierarchy so that it continues to reference the same out-of-hierarchy source.
  • When we paste a hierarchy which contains a bound property for which the source object is outside of the pasted hierarchy, but paste it in a separate project / glade file, the originally bound property should be cleared as it refers to a source property that is now in a different project.

So, after having considered some of the complexities of this, we decided not to be over ambitious and set our sights on lower hanging fruit.

Day 1

On day one we met up relatively bright and early at the Betacowork space where the hackfest took place. Some of the morning was spent looking at the agenda and seeing if there were specific things people wanted to talk about; however, as Glade has a huge todo list, it makes little sense to think too far ahead about bright and shiny desirable features, so I did not add anything to the agenda.

Juan and I had decided that we could absolutely support Glade files which do not always specify the ID field, which GtkBuilder has not required for some time now. The benefit of adding this seemingly mundane feature to Glade is mostly better support for Glade files in the wild: since the ID field is no longer required by GtkBuilder, it turns out that many hand-written files in the wild can no longer be loaded in Glade.
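
For example, here is a minimal hand-written GtkBuilder file of the kind we want to load – note that none of the objects carries an id attribute (the snippet is my own illustration, not from any particular project):

<interface>
  <object class="GtkBox">
    <property name="orientation">vertical</property>
    <child>
      <object class="GtkLabel">
        <property name="label">No ID needed here</property>
      </object>
    </child>
  </object>
</interface>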

We spent around an hour discussing what issues we might face, and decided the path of least resistance would be to always have an ID internally, under a special prefix __glade_unnamed_, so we simply avoid serializing the IDs of unnamed objects and invent them as we load files that omit the ID.

Further, we ensure at all times that if an object is referred to by a property of another object, it always has an explicit name. We achieve the rollover in the object selection dialog: if any object is selected as a property of another object, the referred object is undoably given a traditional name, like label1, while assigning that reference.

By the end of the day this was working pretty well…

Day 2

By now we thought we had pretty much everything covered for the ID’less widgets, and then we encountered the <action-widgets> of GtkDialog and GtkInfoBar.

These have the unfortunate history of being implemented in an odd way, and I’m not sure how far back this dates, but historically you would denote an action widget by giving it a Response ID integer property and placing the widget in the action area. Since some version of GTK+ 3.x (or possibly even 3.0?), we need to refer to these action widgets by their ID in the Glade file and serialize an <action-widgets> node containing those references.

This should ideally be changed in Glade so that the dialog & infobar have actual references to the action widgets (consequently forcing them to have an ID), and probably have another object selection dialog allowing one to select widgets inside of the GtkDialog / GtkInfoBar hierarchy as action widgets. If however the <action-widgets> semantic is newer than GTK+ 3.0 then it gets quite tricky to make this switch and still support the older semantics of adding buttons with response IDs into the action area.

In any case, we settled on simply forcing the action widgets to have an ID at save time, without any undo support, for the singular case of GtkDialog/GtkInfoBar action widgets. Disturbingly this also includes autosave, and it annoyingly modifies the Glade data model without any undoable user interaction, but it’s a corner-case hack.

After this roadblock, and after ironing out a few other obstacles (like serializing the IDs even if they don’t exist when launching the preview, which requires an ID to preview)… we were able to nail this feature by the end of Day 2.

I also closed this bug by ensuring we don’t handle scroll events in the already-scrolling property editor, something we probably should have done many years ago.

Also, Juan Pablo revived the old-school logo (for those who recall the flaming globe logo) in Glade’s workspace, so the workspace is a little more fancy. This tribute to the older logo has in fact been present for years in the loading screen. Unfortunately… only a small number of users work on projects which contain thousands of widgets, so most of you have been missing out on the awesome old logo tribute, which will now appear in its full glory in the background of Glade’s workspace.

Day 3

By now we were getting a bit tired; this post hasn’t covered the more gory details, but as we were in Brussels, of course we had the responsibility of sampling every kind of beer. By around 4 pm I was falling asleep at my desk, but before that I was able to make a pass through the GTK+ widget catalog and update it with new deprecations and newly added properties and signals, in some cases updating the custom editors to integrate the new properties nicely. For instance GtkLabel now has a “lines” property which is only sensitive and relevant if ellipsizing and word wrapping are enabled simultaneously.

We also fixed a few more bugs.

FOSDEM

And then there was FOSDEM, my first time attending this conference. I was planning on sleeping in but managed to arrive around 10am.

I enjoyed hanging around the booths and mingling mostly, which led to a productive conversation with Andre Klapper about various bug tracking and workflow solutions. I attended some talks in the distros devroom; Sam Thursfield gave his talk about the benefits of using declarative and structured data to represent build and integration instructions in build systems. I also enjoyed some LibreOffice talks.

By the end of the second day, and just in the nick of time, I was informed that “if I had not gotten a waffle from a proper waffle van at the venue, then I had not really been to FOSDEM”. I hurried along and was lucky enough to catch one of the last waffles off of a closing van, which was indeed the most delicious waffle I’ve ever tasted.

I guess the conclusion is that waffles are not what FOSDEM is all about, and that’s a good thing – I’d rather be eating a waffle at a conference about free software, than writing free software at a conference about waffles.

 

abipkgdiff

I just wrote an article about How to review ABI changes in RPM and Debian packages. Check it out :-)

programs won’t start

So recently I got pointed to an aging blocker bug that needed attention, since it negatively affected some rawhide users: they weren’t able to launch certain applications. Three known broken applications were gnome-terminal, nautilus, and gedit. Other applications worked, and even these 3 applications worked in wayland, but not Xorg. The applications failed with messages like:

Gtk-WARNING **: cannot open display:

and

org.gnome.Terminal[2246]: Failed to parse arguments: Cannot open display:

left in the log. These messages mean that the programs are unable to create a connection to the X server. There are only a few reasons this error message could get displayed:

    — The socket associated with the X server has become unavailable. In the old days this could happen if, for instance, the socket file got deleted from /tmp. Adam Jackson fixed the X server a number of years ago to also listen on abstract sockets to avoid that problem. This could also happen if SELinux was blocking access to the socket, but users reported seeing the problem even with SELinux put in permissive mode.
    — The X server isn’t running. In our case, clearly the X server is running since the user can see their desktop and launch other programs.
    — The X server doesn’t allow the user to connect because that user wasn’t given access, or that user isn’t providing credentials. These programs are getting run as the same user who started the session, so that user definitely has access. And GDM doesn’t require users to provide separate credentials to use the X server, so that’s not it either.
    — $DISPLAY isn’t set, so the client doesn’t know which X server to connect to. This is the only likely cause of the problem. Somehow $DISPLAY isn’t getting put in the environment of these programs.

So the next question is, what makes these applications “special”? Why isn’t $DISPLAY set for them, but other applications work fine? Every application has a .desktop file associated with it, which is a small config file giving information about the application (name, icon, how to run it, etc). When a program is run by gnome-shell, gnome-shell uses the desktop file of that program to figure out how to run it. Most of the malfunctioning programs have this in their desktop files:


DBusActivatable=true

That means that the shell shouldn’t try to run the program directly; instead it should ask the dbus-daemon to run the program on the shell’s behalf. Incidentally, the dbus-daemon then asks systemd to run the program on the dbus-daemon’s behalf. That has lots of nice advantages, like automatically integrating program output with the journal, and putting each service in its own cgroup for resource management. More and more programs are becoming D-Bus activatable because it’s an important step toward integrating systemd’s session management features into the desktop (though we’re not fully there yet; that initiative should become a priority at some point in the near-to-mid future). So clearly the issue is that the dbus-daemon doesn’t have $DISPLAY in its activation environment, and so programs that rely on D-Bus activation aren’t able to open a display connection to the X server. But why?
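
For reference, a D-Bus activatable desktop file looks roughly like this (a minimal sketch rather than gnome-terminal’s actual file; note that for D-Bus activation the file must be named after the application’s D-Bus name, e.g. org.gnome.Terminal.desktop):

[Desktop Entry]
Type=Application
Name=Terminal
Exec=gnome-terminal
DBusActivatable=true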

When a user logs in, GDM will start a dbus-daemon for that user before it starts the user session. It explicitly makes sure that DISPLAY is in the environment when it starts the dbus-daemon, so things should be square. They’re obviously not, though, so I decided to try to reproduce the problem. I turned off my wayland session and instead started up an Xorg session (actually I used a livecd, since I knew for sure the livecd could reproduce the problem) and then looked at a process listing for the dbus-daemon:


/usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation

This wasn’t run by GDM! GDM uses different command line arguments than these when it starts the dbus-daemon. Okay, so if it wasn’t getting started by GDM, it had to be getting started by systemd during the PAM conversation right before GDM starts the session. I knew this because there isn’t really anything other than systemd that runs between the user hitting enter at the login screen and GDM starting the user’s session. Also, the command line arguments above in the dbus-daemon instance say --systemd-activation, which is pretty telling. Furthermore, if a dbus-daemon is already running, GDM will avoid starting a second one, so this all adds up. I was surprised that we were using the so-called “user bus” instead of the session bus already in rawhide. But, indeed, running


$ systemctl --user status dbus.service
● dbus.service - D-Bus User Message Bus
Loaded: loaded (/usr/lib/systemd/user/dbus.service; static; vendor preset: enabled)
Active: active (running) since Tue 2016-02-02 15:04:41 EST; 2 days ago

shows we’re clearly starting the dbus-daemon before GDM starts the session. Of course, this is the problem: the dbus-daemon can’t possibly have DISPLAY set in its environment if it’s started before the X server. Even if it “wanted” to set DISPLAY, it couldn’t know what value to use, since there’s no X server running yet to tell us the DISPLAY!

So what’s the solution? Many years ago I added a feature to D-Bus to allow a client to change the environment of future programs started by the dbus-daemon. This D-Bus method call, UpdateActivationEnvironment, takes a list of key-value pairs that are just environment variables, which get put in the environment of programs before they’re activated. So the fix is simple: GDM just needs to update the bus activation environment to include DISPLAY as soon as it has a DISPLAY to include.
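
For the curious, here is a minimal sketch of what such a call looks like from Python with GDBus (this is just an illustration, not GDM’s actual code, and the ":0" value is a placeholder for whatever DISPLAY the session ends up with):

from gi.repository import Gio, GLib

bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
# UpdateActivationEnvironment takes a dict of environment variables
# (signature a{ss}) to merge into the dbus-daemon's activation environment.
bus.call_sync("org.freedesktop.DBus",
              "/org/freedesktop/DBus",
              "org.freedesktop.DBus",
              "UpdateActivationEnvironment",
              GLib.Variant("(a{ss})", ({"DISPLAY": ":0"},)),
              None, Gio.DBusCallFlags.NONE, -1, None)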

Special thanks to Sebastian Keller, who figured out the problem before I got around to investigating the issue.

EggColumnLayout

The widget behind the new preferences implementation in Builder was pretty fun to write. Some of the details were tricky, so I thought I’d make the widget reusable in case others would like to use it. I’m sure you can find uses for something like this.

The widget allocates space for children based on their priority. However, instead of simply growing in one direction, we allocate vertically but spill over into an adjacent column as necessary. This gives the feeling of something akin to GtkFlowBox, but without the rigidity of aligned rows and columns.

This widget is meant to be used within a GtkScrolledWindow (well GtkViewport really) where the GtkScrolledWindow:hscrollbar-policy is set to GTK_POLICY_NEVER.

The resulting work is called EggColumnLayout and you can find it with all the other Egg hacks.

EggColumnLayout in Builder preferences

A short description of the tunables:

EggColumnLayout:column-width tunes the width of the columns. This is uniform among all columns, so you should set it to something reasonable. The default is 500px. As an aside, dynamically discovering a column width uniform to all children would probably not look great, nor be straightforward.

EggColumnLayout:column-spacing is the spacing between columns. Had we used child containers, we could probably have done this in CSS, but the property is “good enough” in my opinion.

EggColumnLayout:row-spacing is the analog of :column-spacing, but for the space between the bottom of one child and the top of the subsequent child.

That is pretty much it. Just add your children to this and put the whole thing in a GtkScrolledWindow.
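
To make that concrete, here is a hypothetical usage sketch in Python. It assumes EggColumnLayout is available through introspection (in reality it ships as C sources in the Egg repository), and the spacing values here are arbitrary:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Egg  # Egg binding assumed for illustration

scroller = Gtk.ScrolledWindow()
# Columns spill sideways, so horizontal scrolling must stay disabled.
scroller.set_policy(Gtk.PolicyType.NEVER, Gtk.PolicyType.AUTOMATIC)

layout = Egg.ColumnLayout(column_width=500,   # the default
                          column_spacing=24,
                          row_spacing=12)
for title in ("Appearance", "Editor", "Keyboard", "Plugins"):
    layout.add(Gtk.Label(label=title))  # your preference groups here

scroller.add(layout)  # GtkScrolledWindow wraps it in a GtkViewport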

Happy Hacking.

February 04, 2016

guile compiler tasks

Hey! We released Guile 2.1.2, including the unboxing work, and we fixed the slow bootstrap problem by shipping pre-built bootstraps in tarballs. A pretty OK solution in my opinion; check it out!

future work

At this point I think I'm happy with Guile's compiler and VM, enough for now. There is a lot more work to do but it's a good point at which to release a stable series. There will probably be a number of additional pre-releases, but not any more significant compiler/VM work that must be done before a release.

However, I was talking with Guilers at FOSDEM last weekend and we realized that although we do a pretty good job at communicating the haps in compiler-land, we don't do a good job at sharing a roadmap or making it possible for other folks to join the hack. And indeed, it's been difficult to do so while things were changing so much: I had to get things right in my head before joining in the confusion of other people's heads.

In that spirit I'd like to share a list of improvements that it would be nice to make at some point. If you take one of these tasks, be my guest: find me on IRC (wingo on freenode) and let me know, and I'll help as I am able. You need to be somewhat independent; I'm not offering proper mentoring or anything, more like office hours or something, where you come with the problem you are having and I commiserate and give context/background/advice as I am able.

So with that out of the way, here's a huge list of stuff! Following this, more details on each one.

  1. stripping binaries

  2. full source in binaries

  3. cps in binaries

  4. linking multiple modules together

  5. linking a single executable

  6. instruction explosion

  7. elisp optimizations

  8. prompt removal

  9. basic register allocation

  10. optimal register allocation

  11. unboxed record fields

  12. textual CPS

  13. avoiding arity checks

  14. unboxed calls and returns

  15. module-level inlining

  16. cross-module inlining

As a bonus, in the end I'll give some notes on native compilation. But first, the hacks!

stripping binaries

Guile uses ELF as its object file format, and currently includes source location information as DWARF data. On space-constrained devices this might be too much. Your task: add a hack to the linker that can strip existing binaries. Read Ian Lance Taylor's linker articles for more background, if you don't know things about linkers yet.

full source in binaries

Wouldn't it be nice if the ELF files that Guile generates actually included the source as well as the line numbers? We could do that, in a separate strippable ELF section. This point is like the reverse of the previous point :)

cps in binaries

We could also include the CPS IR in ELF files too. This would enable some kinds of link-time optimization and cross-module inlining. You'd need to define a binary format for CPS, like LLVM bitcode or so. Neat stuff :)

linking multiple modules together

Currently in Guile, just about every module is a separate .go file. Loading a module will cause a few stat calls and some seeks and reads and all that. Wouldn't it be nice if you could link together all the .go files that were commonly used into one object? Again this is a linker hack, but it needs support from the run-time as well: when the run-time goes to load a file, it should first check in a registry if that file has been logically provided by some other file. We'd be able to de-duplicate constant data from various modules. However there is an initialization phase when loading a .go file which effectively performs all the relocations needed by constants that need a fix-up at load-time; see the ELF article I linked to above for more. For some uses, it would be OK to produce one relocation/initialization procedure. For others, if you expected to only load a fraction of the modules in a .go file, it would be a lose on startup time, so you would probably need to support lazy relocation when a module is first loaded.

Anyway, your task would be to write a linker hack that loads a bunch of .go files, finds the relocations in them, de-duplicates the constants, and writes out a combined .go file that includes a table of files contained in it. Good luck :) This hack would work great for Emacs, where it's effectively a form of unexec that doesn't actually rely on unexec.

linking a single executable

In the previous task, you could end up with the small guile binary that links to libguile (or your binary linking to libguile), and then a .go file containing all the modules you are interested in. It sure would be nice to be able to link those together into just one binary, or at least to link the .go into the Guile binary. If Guile is statically linked itself, you would have a statically linked application. If it's dynamically linked, it would remain dynamically linked. Again, a linker hack, but one that could provide a nicer way to distribute Guile binaries.

instruction explosion

Now we get more to the compiler side of things. Currently in Guile's VM there are instructions like vector-ref. This is a little silly: there are also instructions to branch on the type of an object (br-if-tc7 in this case), to get the vector's length, and to do a branching integer comparison. Really we should replace vector-ref with a combination of these test-and-branches, with real control flow in the function, and then the actual ref should use some more primitive unchecked memory reference instruction. Optimization could end up hoisting everything but the primitive unchecked memory reference, while preserving safety, which would be a win. But probably in most cases optimization wouldn't manage to do this, which would be a lose overall because you have more instruction dispatch.

Well, this transformation is something we need for native compilation anyway. I would accept a patch to do this kind of transformation on the master branch, after version 2.2.0 has forked. In theory this would remove most all high level instructions from the VM, making the bytecode closer to a virtual CPU, and likewise making it easier for the compiler to emit native code as it's working at a lower level.

elisp optimizations

Guile implements Emacs Lisp, and does so well. However it hasn't been the focus of a lot of optimization. Emacs has a lot of stuff going on on its side, and so do we, so we haven't managed to replace the Elisp interpreter in Emacs with one written in Guile, though Robin Templeton has brought us a long way forward. We need someone to do both the integration work and to poke the compiler and make sure it's a clear win.

prompt removal

It's pretty natural to use delimited continuations when compiling some kind of construct that includes a break statement to Guile, whether that compiler is part of Elisp or just implemented as a Scheme macro. But, many instances of prompts can be contified, resulting in no overhead at run-time. Read up on contification and contify the hell out of some prompts!

basic register allocation

Guile usually tries its best to be safe-for-space: only the data which might be used in the future of a program is kept alive, and the rest is available for garbage collection. Notably, this applies to function arguments, temporaries, and lexical variables: if a value is dead, the GC can collect it and re-use its space. However this isn't always what you want. Sometimes you might want to have all variables that are in scope to be available, for better debugging. Your task would be to implement a "slot allocator" (which is really register allocation) that keeps values alive in the parts of the programs that they dominate.

optimal register allocation

On the other hand, our slot allocator -- which is basically register allocation, but for stack slots -- isn't so great. It does OK but you can often end up shuffling values in a loop, which is the worst. Your task would be to implement a proper register allocator: puzzle-solving, graph-coloring, iterative coalescing, something that really tries to do a good job. Good luck!

unboxed record fields

Guile's "structs", on which records are implemented, support unboxed values, but these values are untyped, not really integrated with the record layer, and always boxed in the VM. Your task would be to design a language facility that allows us to declare records with typed fields, and to store unboxed values in those fields, and to cause access to their values to emit boxing/unboxing instructions around them. The optimizer will get rid of those boxing/unboxing instructions if it can. Good luck!

textual CPS

The CPS language is key to all compiler work in Guile, but it doesn't have a nice textual form like LLVM IR does. Design one, and implement a parser and an unparser!

avoiding arity checks

If you know the procedure you are calling, like if it's lexically visible, then if you are calling it with the right number of arguments you can skip past the argument check and instead do a call-label directly into the body. Would be pretty neat!

unboxed calls and returns

Likewise if a function's callers are all known, it might be able to unbox its arguments or return value, if that's a good idea. Tricky! You could start with a type inference pass or so, and maybe that could produce some good debugging feedback too.

module-level inlining

Guile currently doesn't inline anything that's not lexically visible. Unfortunately this restriction extends to top-level definitions in a module: they are treated as mutable and so never inlined/optimized/etc. Probably we need to change the semantics here such that a module can be compiled as a unit, and all values which are never mutated can be assumed to be constant. Probably you also want a knob to turn off this behavior, but really you can always re-compile and re-load a module as a whole if re-loading a function at run-time doesn't work because it was inlined. Anyway. Some semantic work here, but some peval work as well. Be careful!

cross-module inlining

Likewise Guile currently doesn't inline definitions from other modules. However for small functions this really hurts. Guile should probably serialize tree-il for small definitions in .go files, and allow peval to speculatively inline imported definitions. This is related to the previous point and has some semantic implications.

bobobobobobonus! native compilation

Thinking realistically, native compilation is the next step. We have the object file format, cool. We will need the ability to call out from machine code in .go files to run-time functions, so we need to enhance the linker, possibly even with things like PLT/GOT sections to avoid dirtying too many pages. We need to lower the CPS even further, to get closer to some kind of machine model, then go specific, with an assembler for each architecture. The priority in the beginning will be simplicity and minimal complexity; good codegen will come later. This is obviously the most attractive thing but it's also the most tricky, design-wise. I want to do at least part of this, so though you can't have it all, you are welcome to help :)

That's it for now. I'll amend the post with more things as and when I think of them. Comments welcome too, as always. Happy hacking!

Rio Design Hackfest

Rio hackfest final photo

A couple of weeks ago, I had the pleasure of attending a design hackfest in Rio de Janeiro, which was hosted by the good people at Endless. The main purpose of the event was to foster a closer relationship between the design teams at GNOME and Endless. Those of us on the GNOME side also wanted to learn more about Endless users, so that we can support them better.

The first two days of the event were taken up with field visits: first at a favela in Rio itself, and second in a more rural setting about an hour and a half’s drive out of town. In both cases we got to meet Endless field testers and ask them questions about their lives and computer usage.

After the field trips, it was time to hit the office for three days of intensive design discussions. We started from a high level, discussing the background of GNOME 3, and looking at the similarities and differences between Endless’s OS and GNOME 3. Then, over the course of three days, we focused on specific areas where we have a mutual interest, like the shell, search, Software and app stores, and content apps like Photos, Music and Videos.

All in all, the event was a big success. Everyone at Endless was really friendly and easy to work with, and we had lots of similar concerns and aspirations. We’ve started on a process of working closer together, and I expect there to be more joint design initiatives in the future.

I’d like to give a big thank you to Endless for hosting, and for sponsoring the event. I’d also like to thank the GNOME Foundation for providing travel sponsorship.

Thu 2016/Feb/04

We've opened a few positions for developers in the fields of multimedia, networking, and compilers. I could say a lot about why working at Igalia is very different from working at your average tech company or start-up, but I think the way it's summarized in the announcements is pretty good. Have a look at them if you are curious, and don't hesitate to apply!

FOSDEM 2016

It’s the beginning of the year and, surprise, FOSDEM happened :-) This year I even managed to see some talks and to meet people! Still not as many as I would have liked, but I’m getting there…

Lenny talked about systemd and what is going to be added in the near future. Among many things, he made DNSSEC stand out. I’m not sure yet whether I like it or not. On the one hand, you might get more confidence in your DNS results, although, as he said, the benefits are small, as authentication of your bank happens on a different layer.

Giovanni talked about the importance of FOSS in the surveillance era. He began by mentioning that France declared a state of emergency after the Paris attacks. That, however, is not in line with democratic thinking, he said; it’s a tool from a few dozen years ago. With that emergency state, the government tries to weaken encryption and to ban any technology that may be used by so-called terrorists. That may very well include black Seat cars like the ones used by the Paris attackers. But you cannot ban simple tools like that, he said. He said that we should make our tools much more accessible by using standard FLOSS licenses. He alluded to OpenSSL’s weird license being the reason Heartbleed was not found earlier. He also urged the audience to develop simpler and better tools, and complained about GnuPG being too cumbersome to use. I think the talk was a mixed bag and got lost among the many topics at hand. Anyway, he concluded with an interesting interpretation of Franklin’s quote: if you sacrifice software freedom for security, you deserve neither. I fully agree.

In a terrible Frenglish, Ludovic presented on Python’s async and await keywords. He said you must not confuse asynchronous and parallel execution. With asynchronous execution, all tasks are started but only one task finishes at a time. With parallel execution, however, tasks can also finish at the same time. I don’t know yet whether that description convinces me. Anyway, you should use async, he said, when dealing with sending or receiving data over a (mobile) network. Compared to (p)threads, you work cooperatively on the scheduling as opposed to preemptive scheduling (compare time.sleep vs. asyncio.sleep).
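
To make the time.sleep vs. asyncio.sleep point concrete, here is a tiny example of my own (not from the talk): both coroutines are started, the event loop interleaves them cooperatively at each await, and they finish one at a time:

import asyncio

async def download(name, seconds):
    # asyncio.sleep yields to the event loop (cooperative scheduling);
    # time.sleep here would block every other task instead.
    await asyncio.sleep(seconds)
    print(name, "finished")

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(download("a", 1.0),
                                       download("b", 1.0)))
loop.close()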

Aleksander talked about the Tizen security model. I knew that they were using SMACK, but they also use a classic DAC system by simply separating users. Cynara is the new kid on the block: a userspace privilege checker. A service, like GPS, when accessed via some form of RPC, sends the credentials it received from the client to Cynara, which then decides whether access is allowed or not. So it seems to be an “inside out” broker: instead of having something like a reference monitor which dispatches requests to a server only if you are allowed to, the server needs to check itself. He went on to talk about how applications integrate with Cynara, like where to store files and how to label them. The credentials passed around are a SMACK label to identify the application, the user ID the application runs as, and a privilege string representing the requested privilege. I suppose the Cynara system only makes sense once you can safely identify an application, which, I think, you can only do properly when you are using something like SMACK to assign labels during installation.
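
As I understood it, the flow is roughly the following. This Python sketch is purely illustrative: all names in it are made up, not the real Cynara API:

def cynara_check(smack_label, uid, privilege):
    # Stand-in for the real policy lookup: Cynara consults its policy
    # database keyed on (application label, user, privilege).
    allowed = {("User::App::org.example.maps", "5001",
                "http://tizen.org/privilege/location")}
    return (smack_label, uid, privilege) in allowed

def handle_get_location(smack_label, uid):
    # The RPC layer hands the service the caller's credentials (SMACK
    # label and user ID); the service itself asks Cynara for a verdict.
    if not cynara_check(smack_label, uid,
                        "http://tizen.org/privilege/location"):
        raise PermissionError("location access denied")
    return (48.85, 2.35)  # dummy position

print(handle_get_location("User::App::org.example.maps", "5001"))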

Daniel then talked about his USBGuard project. It’s basically a firewall for USB devices. I found that particularly interesting, because I have a history with USB security and I know that random USB devices pose a problem. We are also working on integrating USB blocking capabilities into GNOME, so I was keen on meeting Daniel. He presented his program, what it does, and how to use it. I think it’s a good initiative and we should certainly continue exploring the realm of blocking USB devices. It’s unfortunate, though, that he has made some weird technological choices, like using C++ and a weird IPC system. If it were using D-Bus, we could make use of it easily :-/ The talk was followed by Krzysztof, whom I reported on last time and who builds USB devices in software. As I always wanted to do that, I approached him and complained about my problems doing so ;-)

Chris from wolfSSL explained how they do testing for their TLS implementation. wolfSSL is 10 years old and secures over 1 billion endpoints, he said. Most interestingly, they have interoperability testing with other TLS implementations. He said they want to be the best-tested TLS library available, which I think is a very good goal! He was a very good speaker and I really enjoyed learning about their different testing strategies.

I didn’t really follow what Pam said about implicit trademark and patent licenses. It seems to be an open question whether patents and trademarks are treated similarly when it comes to granting someone the right to use “the software”. I didn’t really understand why it would be a question, though, because I haven’t heard of a case in which it was argued that the rights to the name of the software had also been transferred. But then again, I am not a lawyer and I don’t want to become one…

Jeremiah presented on safety-critical FOSS. Safety-critical, he said, means functional safety: your device must limp back home in a lower gear if anything goes wrong. He mentioned several standards, like IEC 61508, ISO 26262, and others. Some of these standards define “Safety Integrity Levels” which describe how likely risks are. Some GNU/Linux systems have gone through that certification process, he said. But I didn’t really understand what copylefted software has to do with it. The automotive industry seems to be an entirely different animal…

If you’ve missed this year’s FOSDEM, you may want to have a look at the recordings. It’s not VoCCC type quality like with the CCCongress, but still good. Also, you can look forward to next year’s FOSDEM! Brussels is nice, although they could improve the weather ;-) See you next year!

Johnson Street Bridge at night

Johnson Street Bridge at night. Victoria, British Columbia, Canada.

It has been a while since I first wanted to take this picture. Tonight there was a light rain, I was in the mood for long-exposure pictures, and I had a tripod with me.

Although iconic, this bridge is going to be replaced by a new one in 2017. Half of the new bridge will be dedicated to pedestrians and cyclists.

Change of Pace

The UI changes I made to GNOME Maps have landed, so you can now load GeoJSON layer files, and the animations when zooming and panning around the map should be more fluid. Since then, Jonas and Andreas have iterated on the UI so you can also toggle the visibility of the custom layers you have loaded.

The past 2.5 weeks have been a bit of a mental whirlwind due to stress in my personal life and a feeling of being unmoored.

I took a small step back from coding before the last official segment of my Outreachy project, which is implementing a backend for KML layer files. Instead I’ve been pulled toward examining my life and trying to figure out what kind of future I want to see for myself in the world. This has involved a lot of reading, a lot of time thinking, and a bit of writing (though I’m reluctant to say not on this blog…yet).

A small summary, then, of my recent highlights:

  • Read the outstandingly eye-opening Decolonization is not a Metaphor
  • Read bell hooks’ stunning The Will to Change: Men, Masculinity, and Love
  • Read all the links from Sumana’s mega post on disparities and unfairness in the economics of Free Software development.
  • Read the Telekommunist Manifesto on copyfarleft.
  • Decided to return, after Outreachy ends, to my part-time $dayjob as a bicycle mechanic, where efforts to organize workers democratically are making progress.
  • Ordered a new computer: a ThinkPad Yoga 12. I installed Fedora 23 and it works amazingly well (I’m considering switching from Debian)!! Thanks Bastian Nocera for iio-sensor-proxy and Carlos Garnacho for touch :)
  • Caught up on reading my two absolute favorite ongoing comics series: Saga and Monstress.
  • Figured out that my blog kept being unresponsive because of a DoS attack.

And now to synthesizing all these disparate experiences into a [hopefully semi-]cohesive thoughtstream.



I feel guilty that I accepted the opportunity to do Outreachy as a mostly male-passing genderqueer person, but so far I haven’t done anything to rock the boat and shake things up for those who will come after me and those who have fewer privileges! That needs to change.

Is there a word that is kind of like navel gazing and kind of like treading water? Something to describe the Sisyphean task of spending a lot of material resources (i.e. time, energy, and money) implementing unsustainable and therefore flawed tools? Yes! It appears there is a perfect word for this, especially in the context of computers: GIGO – Garbage In, Garbage Out! Yes, GIGO to me means that Free Software has some flawed premises. Absolutely this isn’t to disparage the amazing work that people have done to make Free Software what it is today (I love GNOME and friends!); rather, it is a subjective appraisal of how well we as a community are meeting the radical goals of Free Software, namely freedom. I find that having specifically stated goals affords a great deal of clarity, so the following list is to help tease out what the limits of our imaginations and business models are, and what our valued goals are.

We want “freedom”, but how can there be freedom when:

  0. we are not free to use tools (e.g. when they are proprietary)?
  1. we are not free to build tools (proprietary)?
  2. we do not have the material resources (2a: education, 2b: money, 2c: energy) to build the tools sustainably? (e.g. the ivory tower of academia is exclusive; Yorba couldn’t pay the bills; many leave from harassment and thanklessness)
  3. most jobs concentrate rather than redistribute wealth/power? (i.e. most companies are not democratic worker-owned)
  4. the biggest impact so far of free tools is to make proprietary tools cheaper to manufacture? (OS X, Android, Facebook)
  5. others aren’t free? (people are coerced into poverty, illness, violence, fear)
  6. what we build uses people as a consumable input resource? (the economy continues to be propped up by present and historical genocide, colonization, and slavery)
  7. we don’t know history?
  8. we don’t see how we are our histories? (what conditions lead to stereotypes, normalcy, and the status quo?)
  9. we don’t know what freedom could be?
  • Free licenses solve 0, 1 and cause 4 (permissive licenses more so than copyleft)
  • Outreachy solves 2, 3 (for a selective and temporary 3-month span).
  • GNOME Love helps to solve 2a
  • Codes of Conduct solve 2c
  • Igalia locally solves 3 (http://wingolog.org/archives/2013/06/25/time-for-money)
  • RedHat solves 2 and exacerbates 3, 6 (by selling to the imperialist US government/military and having offices on Indigenous land in Turtle Island/USA and Palestine/Israel)
  • Copyfarleft is working to solve 4

Most people regard me as impractically obsessive, either because I adhere to a Free Software ideology or because I hold an anti-white-supremacist-capitalist-[cis-]patriarchy ideology.
A nonzero number of folks regard me as practical. I’ll push the boundary then:

black/indigenous/women of color working against the white supremacist capitalist [cis-]patriarchy have provided working theory to solve 2, 3, 5, 6, 7, 8, 9

So that’s where I am. I hope soon to figure out how I might be able to get paid to sustain myself and others to work on GNOME and other mechanisms for freedom. Look forward to some concrete suggestions or criticisms.

Also, I got preliminary KML layer support working in Maps (not yet merged). 😉

KML file loaded in GNOME Maps

February 03, 2016

How’s my hacking routine at Endless

As some of you may know, I’ve been working at Endless for some months now. It has probably been a culture shock, but they have exceeded everything I imagined. If you, dear reader, are not aware of what Endless is and does, I strongly suggest you go to endlessm.com and check it out. Believe me, it’s worth the time.

I usually have to deal with some daily things: martial arts training, chores, master’s research, my wife, among other less important stuff. My first surprise: in terms of work, this doesn’t matter. As long as I’m able to handle my tasks, I’m not required to work from X a.m. to Y p.m., nor is my workflow enforced.

Take a second for some cultural curiosities: here in Brazil, it’s absolutely normal for an employer to set the exact time an employee must arrive, leave, break, talk, breathe, and do anything else besides working. Also, one should expect a daily 2-hour commute to the workplace, usually accompanied by the 3rd (Rio de Janeiro) or 7th (São Paulo) worst traffic worldwide.

After adapting to my new routine, a typical day runs like this: wake up, do my chores, turn on my laptop and work for some hours. After lunch, some more work, study and research, train hard, work some more, and sleep.

But why am I writing this? Because I’m so, so happy with this routine. I just arrived home after having the last bits of a surgery removed, and I can work. And it’s something I love. I’d be doing this work even without payment.

That said, let’s return to our lives :)

On Subresource Certificate Validation

Ryan Castellucci has a quick read on subresource certificate validation. It is accurate; I fixed this shortly after joining Igalia. (Update: This was actually in response to a bug report from him.) Run his test to see if your browser is vulnerable.

Epiphany, Xombrero, Opera Mini and Midori […] were loading subresources, such as scripts, from HTTPS servers without doing proper certificate validation. […] Unfortunately Xombrero and Midori are still vulnerable. Xombrero seems to be dead, and I’ve gotten no response from them. I’ve been in touch with Midori, but they say they don’t have the resources to fix it, since it would require rewriting large portions of the code base in order to be able to use the fixed webkit.

I reported this to the Midori developers in late 2014 (private bug). It’s hard to overstate how bad this is: it makes HTTPS completely worthless, because an attacker can silently modify JavaScript loaded via subresources.

This is actually a unique case in that it’s a security problem that was fixed only thanks to the great API break, which has otherwise been the cause of many security problems. Thanks to the API break, we were able to make the new API secure by default without breaking any existing applications. (But this does no good for applications unable to upgrade.)

(A note to folks who read Ryan’s post: most mainstream browsers do silently block invalid certificates, but Safari will warn instead. I’m not sure which behavior I prefer.)

Incoming! Fleet Commander 0.7

We’ve just released the 0.7 series, which should be the first version that is somewhat stable to use (think of it as an alpha), and as we speak it is under review for inclusion in Fedora 24.

For the last year I have been massaging the prototype we had at GUADEC in Strasbourg into a reliable product, and recently Oliver Gutierrez has joined the team to help with the web development side of things. I would like to summarize some of my work here so that you all know what it’s about and what the future plans are.

For those unfamiliar with the project, Fleet Commander intends to be a centralized configuration management interface for system administrators managing GNOME deployments. Think about schools, universities or an office (either small or big). The idea is to reduce the amount of work needed to centralize the customization of the user experience. These days most sysadmins use a bunch of scripts, packaging or manual configuration machine-after-machine to achieve this goal.


Fleet Commander in action

The screenshot above shows the main profile editor; those who remember the Sabayon tool might be familiar with the concept. What you see is a virtual session running on libvirt, basically picking up a VM I created with Boxes as a template. The idea is that the sysadmin replicates, inside a VM, the “base image” that the users have on the network, and then makes the configuration changes that he needs.

After that the profile is stored as a JSON file, and you can control which users and groups it applies to from the admin UI. These files are served by an HTTP endpoint that is consumed by a host daemon, which retrieves all the profile data, turns each profile into a dconf database, and creates a dconf profile that aggregates all those databases at login time.
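
For the curious, a dconf profile is just a small text file (under /etc/dconf/profile/) listing databases in priority order; the aggregated profile would look something like this sketch (the database names here are illustrative, not necessarily what Fleet Commander generates):

    user-db:user
    system-db:fleet-commander-profile1
    system-db:fleet-commander-profile2

Settings are looked up in the user database first and then in each system database in turn, which is how several centrally-managed profiles can be layered underneath the user’s own settings.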

Right now this will work with anything that uses dconf (except for some potential issues where people use relocatable schemas). Stephan Berg, of LibreOffice fame, has written dconf support upstream, which will make things magically work there as well.

We’re quite proud of the release, but the setup is a bit more complicated than I’d like it to be (we need to exchange an SSH key to access the libvirt host). Future plans include FreeIPA/SSSD integration and migrating the UI into a Cockpit plugin, which will make our codebase a lot leaner to maintain.

We’re working on an updated wiki page to explain how to set it up once it hits Fedora 24, stay tuned!

Moved to wordpress

I’ve decided to move my blog to wordpress.com. Originally I started using Typepad because it was where my brother was working at the time, but these days the service feels quite dated and it doesn’t have the kind of community of users that WordPress does. I’ve come to realize that the thought of using Typepad’s editor has discouraged me from writing a post more than a few times.

On top of that, I’d rather use a platform that openly supports free software. I have to say, as I write this post, I’m quite impressed at how advanced the WordPress editor is; it feels like driving a new car.

So that’s that. I’ve migrated all my history here, in case people want to read old posts on the new site.

coala 0.4 – eucalyptus

Hi! I’m happy to announce coala 0.4.0 eucalyptus – the pre-FOSDEM release of the code analysis framework that truly works for any language.

Take a look at our awesome release ASCII art, this time made by Christian Witt!

    88        .o88Oo._
   8 |8      d8P         .ooOO8bo._
  8  | 8     88                  '*Y8bo.
  8\ | /8    YA                      '*Y8b   __
 8  \|/ 8     YA                        68o68**8Oo.
 8\  Y  8      "8D                       *"'    "Y8o
 8 \ | /8       Y8     'YB                       .8D
8   \|/ /8     '8               d8'             8D
8\   Y / 8       8       d8888b          d      AY
8 \ / /  8       Y,     d888888         d'  _.oP"
8  \|/  8         q.    Y8888P'        d8
8   Y   8          "q.  `Y88P'       d8"
 8  |  8             Y           ,o8P
  8 | 8                    oooo888P"

Of course, this release features more than a neat graphic. Apart from lots of internal changes and bug fixes, we polished the automatic application of results a bit. We also added the ability to check the commit message of the HEAD commit. Although this release is quite small, with only about a hundred commits, we had six new contributors with accepted patches, while some other newcomers already have something in the making. Thank you all for your interest and support!

For more information and the changelog, check out https://github.com/coala-analyzer/coala/releases. To get live updates on what we’re doing, we’ve been using Twitter for a few weeks now: https://twitter.com/coala_analyzer.

February 02, 2016

7 months update

First of all, happy new year to you all (yes, I know we are already in February)!

Long time no post; I’ve been very busy with work, new projects, new clients, new technologies, preparing the move to a new home, the second child, and a lot more on the personal side.
Handling all of the above at the same time severely reduced the amount of my open-source contributions, so I haven’t been able to do anything beyond code reviews and minor fixes, plus the releases of the GNOME modules I am responsible for (GNOME Games rule!).
During the winter break, between Christmas and New Year’s Eve, I managed to work a bit on app menu integration for Atomix (which is not completely ready, as the app menu is not displayed, in spite of being there, when checking with GtkInspector).
In the meantime lots of good things have happened, e.g. the Fedora 23 release, which is (again) the best Fedora release of all time, thanks to everyone contributing.

All in all, I just wanted to share that I’m not dead yet, just very busy, and hoping that I can get back to normal life with a couple more contributions to open source, and share some more experiences with gadgets, e.g. the Android+Lubuntu dual-boot open-source TV box I got for Christmas.

February 01, 2016

On WebKit Security Updates

Linux distributions have a problem with WebKit security.

Major desktop browsers push automatic security updates directly to users on a regular basis, so most users don’t have to worry about security updates. But Linux users are dependent on their distributions to release updates. Apple fixed over 100 vulnerabilities in WebKit last year, so getting updates out to users is critical.

This is the story of how that process has gone wrong for WebKit.

Before we get started, a few disclaimers. I want to be crystal clear about these points:

  1. This post does not apply to WebKit as used in Apple products. Apple products receive regular security updates.
  2. WebKitGTK+ releases regular security updates upstream. It is safe to use so long as you apply the updates.
  3. The opinions expressed in this post are my own, not my employer’s, and not the WebKit project’s.

Browser Security in a Nutshell

Web engines are full of security vulnerabilities, like buffer overflows, null pointer dereferences, and use-after-frees. The details don’t matter; what’s important is that skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted HTML to gain total control of your user account on your computer (or your phone). They can then install malware, read all the files in your home directory, use your computer in a botnet to attack websites, and do basically whatever they want with it.

If the web engine is sandboxed, then a second type of attack, called a sandbox escape, is needed. This makes it dramatically more difficult to exploit vulnerabilities. Chromium has a top-class Linux sandbox. WebKit does have a Linux sandbox, but it’s not any good, so it’s (rightly) disabled by default. Firefox does not have a sandbox due to major architectural limitations (which Mozilla is working on).

For this blog post, it’s enough to know that attackers use crafted input to exploit vulnerabilities to gain control of your computer. This is why it’s not a good idea to browse to dodgy web pages. It also explains how a malicious email can gain control of your computer. Modern email clients render HTML mail using web engines, so malicious emails exploit many of the same vulnerabilities that a malicious web page might. This is one reason why good email clients block all images by default: image rendering, like HTML rendering, is full of security vulnerabilities. (Another reason is that images hosted remotely can be used to determine when you read the email, violating your privacy.)

WebKit Ports

To understand WebKit security, you have to understand the concept of WebKit ports, because different ports handle security updates differently.

While most code in WebKit is cross-platform, there’s a large amount of platform-specific code as well, to improve the user and developer experience in different environments. Different “ports” run different platform-specific code. This is why two WebKit-based browsers, say, Safari and Epiphany (GNOME Web), can display the same page slightly differently: they’re using different WebKit ports.

Currently, the WebKit project consists of six different ports: one for Mac, one for iOS, two for Windows (Apple Windows and WinCairo), and two for Linux (WebKitGTK+ and WebKitEFL). There are some downstream ports as well; unlike the aforementioned ports, downstream ports are, well, downstream, and not part of the WebKit project. The only one that matters for Linux users is QtWebKit.

If you use Safari, you’re using the Mac or iOS port. These ports get frequent security updates from Apple to plug vulnerabilities, which users receive via regular updates.

Everything else is broken.

Since WebKit is not a system library on Windows, Windows applications must bundle WebKit, so each application using WebKit must be updated individually, and updates are completely dependent on the application developers. iTunes, which uses the Apple Windows port, does get regular updates from Apple, but beyond that, I suspect most applications never get any security updates. This is a predictable result, the natural consequence of environments that require bundling libraries.

(This explains why iOS developers are required to use the system WebKit rather than bundling their own: Apple knows that app developers will not provide security updates on their own, so this policy ensures every iOS application rendering HTML gets regular WebKit security updates. Even Firefox and Chrome on iOS are required to use the system WebKit; they’re hardly really Firefox or Chrome at all.)

The same scenario applies to the WinCairo port, except this port does not have releases or security updates. Whereas the Apple ports have stable branches with security updates, with WinCairo, companies take a snapshot of WebKit trunk, make their own changes, and ship products with that. Who’s using WinCairo? Probably lots of companies; the biggest one I’m aware of uses a WinCairo-based port in its AAA video games. It’s safe to assume few to no companies are handling security backports for their downstream WinCairo branches.

Now, on to the Linux ports. WebKitEFL is the WebKit port for the Enlightenment Foundation Libraries. It’s not going to be found in mainstream Linux distributions; it’s mostly used in embedded devices produced by one major vendor. If you know anything at all about the internet of things, you know these devices never get security updates, or if they do, the updates are superficial (updating only some vulnerable components and not others), or end a couple months after the product is purchased. WebKitEFL does not bother with pretense here: like WinCairo, it has never had security updates. And again, it’s safe to assume few to no companies are handling security backports for their downstream branches.

None of the above ports matter for most Linux users. The ports available in mainstream Linux distributions are QtWebKit and WebKitGTK+. Most of this blog post will focus on WebKitGTK+, since that’s the port I work on, and the port that matters most to most of the people reading this, but QtWebKit is widely used and deserves some attention first.

It’s broken, too.

QtWebKit

QtWebKit is the WebKit port used by Qt software, most notably KDE. Some cherry-picked examples of popular applications using QtWebKit are Amarok, Calligra, KDevelop, KMail, Kontact, KTorrent, Quassel, Rekonq, and Tomahawk. QtWebKit provides an excellent Qt API, so in the past it’s been the clear best web engine to use for Qt applications.

After Google forked WebKit, the QtWebKit developers announced they were switching to work on QtWebEngine, which is based on Chromium, instead. This quickly led to the removal of QtWebKit from the WebKit project. This was good for the developers of other WebKit ports, since lots of Qt-specific code was removed, but it was terrible for KDE and other QtWebKit users. QtWebKit is still maintained in Qt and is getting some backports, but from a quick check of their git repository it’s obvious that it’s not receiving many security updates. This is hardly unexpected; QtWebKit is now years behind upstream, so providing security updates would be very difficult. There’s not much hope left for QtWebKit; these applications have hundreds of known vulnerabilities that will never be fixed. Applications should port to QtWebEngine, but for many applications this may not be easy or even possible.

Update: As pointed out in the comments, there is some effort to update QtWebKit. I was aware of this and in retrospect should have mentioned this in the original version of this article, because it is relevant. Keep an eye out for this; I am not confident it will make its way into upstream Qt, but if it does, this problem could be solved.

WebKitGTK+

WebKitGTK+ is the port used by GTK+ software. It’s most strongly associated with its flagship browser, Epiphany, but it’s also used in other places. Some of the more notable users include Anjuta, Banshee, Bijiben (GNOME Notes), Devhelp, Empathy, Evolution, Geany, Geary, GIMP, gitg, GNOME Builder, GNOME Documents, GNOME Initial Setup, GNOME Online Accounts, GnuCash, gThumb, Liferea, Midori, Rhythmbox, Shotwell, Sushi, and Yelp (GNOME Help). In short, it’s kind of important, not only for GNOME but also for Ubuntu and Elementary. Just as QtWebKit used to be the web engine of choice for Qt applications, WebKitGTK+ is the clear choice for GTK+ applications due to its nice GObject APIs.

Historically, WebKitGTK+ has not had security updates. Of course, we released updates with security fixes, but not with CVE identifiers, which are how software developers track security issues; as far as distributors are concerned, without a CVE identifier there is no security issue, and so, with a few exceptions, distributions did not release our updates to users. For many applications this is not so bad, but for high-risk applications like web browsers and email clients, it’s a huge problem.

So, we’re trying to improve. Early last year, my colleagues put together our first real security advisory with CVE identifiers; the hope was that this would encourage distributors to take our updates. This required data provided by Apple to WebKit security team members on which bugs correspond to which CVEs, allowing the correlation of Bugzilla IDs to Subversion revisions to determine in which WebKitGTK+ release an issue has been fixed. That data is critical, because without it, there’s no way to know if an issue has been fixed in a particular release or not. After we released this first advisory, Apple stopped providing the data; this was probably just a coincidence due to some unrelated internal changes at Apple, but it certainly threw a wrench in our plans for further security advisories.

This changed in November, when I had the pleasure of attending the WebKit Contributors Meeting at Apple’s headquarters, where I was finally able to meet many of the developers I had interacted with online. At the event, I gave a presentation on our predicament, and asked Apple to give us information on which Bugzilla bugs correspond to which CVEs. Apple kindly provided the necessary data a few weeks later.

During the Web Engines Hackfest, a yearly event that occurs at Igalia’s office in A Coruña, my colleagues used this data to put together WebKitGTK+ Security Advisory WSA-2015-0002, a list of over 130 vulnerabilities disclosed since the first advisory. (The Web Engines Hackfest was sponsored by Igalia, my employer, and by our friends at Collabora. I’m supposed to include their logos here to advertise how cool it is that they support the hackfest, but given all the doom and gloom in this post, I decided they would perhaps prefer not to have their logos attached to it.)

Note that 130 vulnerabilities is an overcount, as it includes some issues that are specific to the Apple ports. (In the future, we’ll try to filter these out.) Only one of the issues — a serious error in the networking backend shared by WebKitGTK+ and WebKitEFL — resided in platform-specific code; the rest of the issues affecting WebKitGTK+ were all cross-platform issues. This is probably partly because the trickiest code is cross-platform code, and partly because security researchers focus on Apple’s ports.

Anyway, we posted WSA-2015-0002 to the oss-security mailing list to make sure distributors would notice, crossed our fingers, and hoped that distributors would take the advisory seriously. That was one month ago.

Distribution Updates

There are basically three different approaches distributions can take to software updates. The first approach is to update to the latest stable upstream version as soon as, or shortly after, it’s released. This is the strategy employed by Arch Linux. Arch does not provide any security support per se; it’s not necessary, so long as upstream projects release real updates for security problems and not simply patches. Accordingly, Arch almost always has the latest version of WebKitGTK+.

The second main approach, used by Fedora, is to provide only stable release updates. This is more cautious, reflecting that big updates can break things, so they should only occur when upgrading to a new version of the operating system. For instance, Fedora 22 shipped with WebKitGTK+ 2.8, so it would release updates to new 2.8.x versions, but not to WebKitGTK+ 2.10.x versions.

The third approach, followed by most distributions, is to take version upgrades only rarely, or not at all. For smaller distributions this may be an issue of manpower, but for major distributions it’s a matter of avoiding regressions in stable releases. Holding back on version updates actually works well for most software. When security problems arise, distribution maintainers for major distributions backport fixes and release updates. The problem is that this is not feasible for web engines; due to the huge volume of vulnerabilities that need fixing, security issues can only practically be handled upstream.

So what’s happened since WSA-2015-0002 was released? Did it convince distributions to take WebKitGTK+ security seriously? Hardly. Fedora is the only distribution that has made any changes in response to WSA-2015-0002, and that’s because I’m one of the Fedora maintainers. (I’m pleased to announce that we have a 2.10.7 update headed to both Fedora 23 and Fedora 22 right now. In the future, we plan to release the latest stable version of WebKitGTK+ as an update to all supported versions of Fedora shortly after it’s released upstream.)

Ubuntu

Ubuntu releases WebKitGTK+ updates somewhat inconsistently. For instance, Ubuntu 14.04 came with WebKitGTK+ 2.4.0. 2.4.8 is available via updates, but even though 2.4.9 was released upstream over eight months ago, it has not yet been released as an update for Ubuntu 14.04.

By comparison, Ubuntu 15.10 (the latest release) shipped with WebKitGTK+ 2.8.5, which has never been updated; it’s affected by about 40 vulnerabilities fixed in the latest upstream release. Ubuntu organizes its software into various repositories, and provides security support only to software in the main repository. This version of WebKitGTK+ is in Ubuntu’s “universe” repository, not in main, so it is excluded from security support. Ubuntu users might be surprised to learn that a large portion of Ubuntu software is in universe and therefore excluded from security support; this is in contrast to almost all other distributions, which typically provide security updates for all the software they ship.

I’m calling out Ubuntu here not because it is especially negligent, but simply because it is our biggest distributor. It’s not doing any worse than most of our other distributors.

Debian

Debian provides WebKit updates to users running unstable, and to testing except during freeze periods, but not to released versions of Debian. Debian is unique in that it has a formal policy on WebKit updates. Here it is, reproduced in full:

Debian 8 includes several browser engines which are affected by a steady stream of security vulnerabilities. The high rate of vulnerabilities and partial lack of upstream support in the form of long term branches make it very difficult to support these browsers with backported security fixes. Additionally, library interdependencies make it impossible to update to newer upstream releases. Therefore, browsers built upon the webkit, qtwebkit and khtml engines are included in Jessie, but not covered by security support. These browsers should not be used against untrusted websites.

For general web browser use we recommend Iceweasel or Chromium.

Chromium – while built upon the Webkit codebase – is a leaf package, which will be kept up-to-date by rebuilding the current Chromium releases for stable. Iceweasel and Icedove will also be kept up-to-date by rebuilding the current ESR releases for stable.

(Iceweasel and Icedove are Debian’s de-branded versions of Firefox and Thunderbird, the product of an old trademark spat with Mozilla.)

Debian is correct that we do not provide long term support branches, as it would be very difficult to backport security fixes. But it is not correct that “library interdependencies make it impossible to update to newer upstream releases.” This might have been true in the past, but for several years now, we have avoided requiring new versions of libraries whenever it would cause problems for distributions, and — with one big exception that I will discuss below — we ensure that each release maintains both API and ABI compatibility. (Distribution maintainers should feel free to get in touch if we accidentally introduce some compatibility issue for your distribution; if you’re having trouble taking our updates, we want to help. I recently worked with openSUSE to make sure WebKitGTK+ can still be compiled with GCC 4.8, for example.)

The risk in releasing updates is that WebKitGTK+ is not a leaf package: a bad update could break some application. This seems to me like a good reason for application maintainers to carefully test the updates, rather than a reason to withhold security updates from users, but it’s true there is some risk here. One possible solution would be to have two different WebKitGTK+ packages, say, webkitgtk-secure, which would receive updates and be used by high-risk software like web browsers and email clients, and a second webkitgtk-stable package that would not receive updates to reduce regression potential.

Recommended Distributions

We regularly receive bug reports from users with very old versions of WebKit, who trust their distributors to handle security for them and might not even realize they are running ancient, unsafe versions of WebKit. I strongly recommend using a distribution that releases WebKitGTK+ updates shortly after they’re released upstream. That is currently only Arch and Fedora. (You can also safely use WebKitGTK+ in Debian testing — except during its long freeze periods — and Debian unstable, and maybe also in openSUSE Tumbleweed. Just be aware that the stable releases of these distributions are currently not receiving our security updates.) I would like to add more distributions to this list, but I’m currently not aware of any more that qualify.

The Great API Break

So, if only distributions would ship the latest release of WebKitGTK+, then everything would be good, right? Nope, because of a large API change that occurred two and a half years ago, called WebKit2.

WebKit (an API layer within the WebKit project) and WebKit2 are two separate APIs around WebCore. WebCore is the portion of the WebKit project that Google forked into Blink; it’s too low-level to be used directly by applications, so it’s wrapped by the nicer WebKit and WebKit2 APIs. The difference between the WebKit and WebKit2 APIs is that WebKit2 splits work into multiple secondary processes. Aside from the UI process, an application will have one or many separate web processes (for the actual page rendering), possibly a separate network process, and possibly a database process for IndexedDB. This is good for security, because it allows the secondary processes to be sandboxed: the web process is the one that’s likely to be compromised first, so it should not have the ability to access the filesystem or the network. (Remember, though, that there is no Linux sandbox yet, so this is currently only a theoretical benefit.) The other main benefit is robustness. If a web site crashes the renderer, only a single web process crashes (corresponding to one tab in Epiphany), not the entire browser. UI process crashes are comparatively rare.
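
As a concrete illustration, the WebKit2 API of WebKitGTK+ exposes this process split directly. A minimal sketch, assuming the webkit2gtk-4.0 API of the time (the process model must be set before any web views are created; the callback wiring is illustrative):

    #include <webkit2/webkit2.h>

    static gboolean
    on_web_process_crashed (WebKitWebView *web_view, gpointer user_data)
    {
      /* Only this view's web process died; the UI process keeps running. */
      g_warning ("web process for %s crashed", webkit_web_view_get_uri (web_view));
      return FALSE;
    }

    static void
    setup_multiprocess (WebKitWebView *web_view)
    {
      /* One web process per view: a crashing page takes down one tab,
       * not the whole browser. */
      webkit_web_context_set_process_model (
          webkit_web_context_get_default (),
          WEBKIT_PROCESS_MODEL_MULTIPLE_SECONDARY_PROCESSES);

      g_signal_connect (web_view, "web-process-crashed",
                        G_CALLBACK (on_web_process_crashed), NULL);
    }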

Intermission: Certificate Verification

Another advantage provided by the API change is the opportunity to handle HTTPS connections more securely. In the original WebKitGTK+ API, applications must handle certificate verification on their own. This was a serious mistake; predictably, applications performed no verification at all, or did so improperly. For instance, take this Shotwell bug, which is not fixed in any released version of Shotwell, or this Banshee bug, which is still open. Probably many more applications are affected, because I have not done a comprehensive check. The new API is secure by default; applications can ignore verification errors, but only if they go out of their way to do so.
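
To make the “go out of their way” part concrete, here is a sketch of the relevant WebKit2 knob (again assuming the webkit2gtk-4.0 API); the failing policy is the default, so an application only ever needs to touch this to make itself less safe:

    #include <webkit2/webkit2.h>

    static void
    configure_tls_policy (void)
    {
      WebKitWebContext *context = webkit_web_context_get_default ();

      /* The secure default in modern WebKitGTK+: page loads fail on
       * certificate errors. */
      webkit_web_context_set_tls_errors_policy (context,
                                                WEBKIT_TLS_ERRORS_POLICY_FAIL);

      /* An application would have to explicitly opt in to the old,
       * insecure behavior:
       *   webkit_web_context_set_tls_errors_policy (context,
       *       WEBKIT_TLS_ERRORS_POLICY_IGNORE);
       */
    }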

Remember that even though WebKitGTK+ 2.4.9 was released upstream over eight months ago, Ubuntu 14.04 is still on 2.4.8? It’s worth mentioning that 2.4.9 contains the fix for that serious networking backend issue I mentioned earlier (CVE-2015-2330). The bug is that TLS certificate verification was not performed until an HTTP response was received from the server; it’s supposed to be performed before sending an HTTP request, to prevent secure cookies from leaking. This is a disaster, as attackers can easily use it to get your session cookie and then control your user account on most websites. (Credit to Ross Lagerwall for reporting that issue.) We reported this separately to oss-security due to its severity, but that was not enough to convince distributions to update. But most applications in Ubuntu 14.04, including Epiphany and Midori, would not even benefit from this fix, because the change only affects WebKit2; remember, there’s no certificate verification in the original WebKitGTK+ API. (Modern versions of Epiphany do use WebKit2, but not the old version included in Ubuntu 14.04.) Old versions of Epiphany and Midori load pages even if certificate verification fails; the verification result is only used to change the status of a security indicator, basically giving up your session cookies to attackers.

Removing WebKit1

WebKit2 has been around for Mac and iOS for longer, but the first stable release for WebKitGTK+ was the appropriately-versioned WebKitGTK+ 2.0, in March 2013. This release actually contained three different APIs: webkitgtk-1.0, webkitgtk-3.0, and webkit2gtk-3.0. webkitgtk-1.0 was the original API, used by GTK+ 2 applications. webkitgtk-3.0 was the same thing for GTK+ 3 applications, and webkit2gtk-3.0 was the new WebKit2 API, available only for GTK+ 3 applications.

Maybe it should have remained that way.

But, since the original API was a maintenance burden and not as stable or robust as WebKit2, it was deleted after the WebKitGTK+ 2.4 release in March 2014. Applications had had a full year to upgrade; surely that was long enough, right? The original WebKit API layer is still maintained for the Mac, iOS, and Windows ports, but the GTK+ API for it is long gone. WebKitGTK+ 2.6 (September 2014) was released with only one API, webkit2gtk-4.0, which was basically the same as webkit2gtk-3.0 except for a couple small fixes; most applications were able to upgrade by simply changing the version number. Since then, we have maintained API and ABI compatibility for webkit2gtk-4.0, and intend to do so indefinitely, hopefully until GTK+ 4.0.

A lot of good that does for applications using the API that was removed.

WebKit2 Adoption

While upgrading to the WebKit2 API will be easy for most applications (it took me ten minutes to upgrade GNOME Initial Setup), for many others it will be a significant challenge. Since rendering occurs out of process in WebKit2, the DOM API can only be accessed by means of a shared object injected into the web process. For applications that perform only a small amount of DOM manipulation, this is a minor inconvenience compared to the old API. For applications that use extensive DOM manipulation — the email clients Evolution and Geary, for instance — it’s not just an inconvenience, but a major undertaking to upgrade to the new API. Worse, some applications (including both Geary and Evolution) placed GTK+ widgets inside the web view; this is no longer possible, so such widgets need to be rewritten using HTML5. To say nothing of applications like GIMP and Geany that are stuck on GTK+ 2: they first have to upgrade to GTK+ 3 before they can consider upgrading to modern WebKitGTK+. GIMP is working on a GTK+ 3 port anyway (GIMP uses WebKitGTK+ for its help browser), but many applications like Geany (the IDE, not to be confused with Geary) are content to remain on GTK+ 2 forever. Such applications are out of luck.
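
For reference, the shared object injected into the web process is a small module exporting a well-known entry point. A minimal sketch of such a web extension, under the webkit2gtk web extension API (the signal wiring is illustrative):

    #include <webkit2/webkit-web-extension.h>

    static void
    document_loaded_cb (WebKitWebPage *web_page, gpointer user_data)
    {
      /* DOM access now happens here, inside the web process. */
      WebKitDOMDocument *document = webkit_web_page_get_dom_document (web_page);
      (void) document;  /* a real extension would manipulate the DOM here */
      g_message ("loaded %s", webkit_web_page_get_uri (web_page));
    }

    static void
    page_created_cb (WebKitWebExtension *extension,
                     WebKitWebPage      *web_page,
                     gpointer            user_data)
    {
      g_signal_connect (web_page, "document-loaded",
                        G_CALLBACK (document_loaded_cb), NULL);
    }

    /* WebKit looks up this symbol when it loads the extension. */
    G_MODULE_EXPORT void
    webkit_web_extension_initialize (WebKitWebExtension *extension)
    {
      g_signal_connect (extension, "page-created",
                        G_CALLBACK (page_created_cb), NULL);
    }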

As you might expect, most applications are still using the old API. How does this work if it was already deleted? Distributions maintain separate packages, one for old WebKitGTK+ 2.4, and one for modern WebKitGTK+. WebKitGTK+ 2.4 has not had any updates since last May, and the last real comprehensive security update was over one year ago. Since then, almost 130 vulnerabilities have been fixed in newer versions of WebKitGTK+. But since distributions continue to ship the old version, few applications are even thinking about upgrading. In the case of the email clients, the Evolution developers are hoping to upgrade later this year, but Geary is completely dead upstream and probably will never be upgraded. How comfortable are you with using an email client that has now had no security updates for a year?

(It’s possible there might be a further 2.4 release, because WebKitGTK+ 2.4 is incompatible with GTK+ 3.20, but maybe not, and if there is, it certainly will not include many security fixes.)

Fixing Things

How do we fix this? Well, for applications using modern WebKitGTK+, it’s a simple problem: distributions simply have to start taking our security updates.

For applications stuck on WebKitGTK+ 2.4, I see a few different options:

  1. We could attempt to provide security backports to WebKitGTK+ 2.4. This would be very time consuming and therefore very expensive, so count this out.
  2. We could resurrect the original webkitgtk-1.0 and webkitgtk-3.0 APIs. Again, this is not likely to happen; it would be a lot of work to restore them, and they were removed to reduce maintenance burden in the first place. (I can’t help but feel that removing them may have been a mistake, but my colleagues reasonably disagree.)
  3. Major distributions could remove the old WebKitGTK+ compatibility packages. That will force applications to upgrade, but many will not have the manpower to do so: good applications will be lost. This is probably the only realistic way to fix the security problem, but it’s a very unfortunate one. (But don’t forget about QtWebKit. QtWebKit is based on an even older version of WebKit than WebKitGTK+ 2.4. It doesn’t make much sense to allow one insecure version of WebKit but not another.)

Or, a far more likely possibility: we could do nothing, and keep using insecure software.

leaking buffers in wayland

So in my last blog post I mentioned Matthias was getting SIGBUS when using wayland for a while. You may remember that I guessed the problem was that his /tmp was filling up, and so I produced a patch to stop using /tmp and use memfd_create instead. This resolved the SIGBUS problem for him, but there was something gnawing at me: why was his /tmp filling up? I know gnome-terminal stores its unlimited scrollback buffer in an unlinked file in /tmp, so that was one theory. I have also seen, in some cases, Firefox downloading files to /tmp. Neither explanation sat well with me. Scrollback buffers don’t get that large very quickly, and Matthias was seeing the problem several times a day. I also doubted he was downloading large files in Firefox several times a day. Nonetheless, I shrugged and moved on to other things…

…until Thursday. Kevin Fenzi mentioned on IRC that he was experiencing a 12GB leak in gnome-shell. That piqued my interest and seemed pretty serious, so I started to troubleshoot with him. My first question was “Are you using the proprietary nvidia driver?”. I asked this because I know the nvidia driver has in the past had issues with leaking memory and gnome-shell. When Kevin responded that he was on intel hardware, I asked him to post the output of /proc/$(pidof gnome-shell)/maps so we could see the makeup of the lost memory. Was it the heap? Or some other memory-mapped regions? To my surprise, it was the memfd_create’d shared memory segments from my last post! So window pixel data was getting leaked. This explains why /tmp was getting filled up for Matthias before, too. Previously, the shared memory segments resided in /tmp after all, so it wouldn’t have taken long for them to use up /tmp.
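
For background, memfd_create gives you anonymous shared memory that lives entirely in kernel memory instead of as a file under /tmp. A rough sketch of how such a pool is set up (glibc had no wrapper at the time, hence the raw syscall; this assumes kernel headers new enough to provide SYS_memfd_create and MFD_CLOEXEC, and error handling is trimmed):

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <linux/memfd.h>   /* MFD_CLOEXEC */

    static void *
    create_shm_pool (size_t size, int *fd_out)
    {
      int fd = syscall (SYS_memfd_create, "gdk-wayland", MFD_CLOEXEC);
      if (fd < 0 || ftruncate (fd, size) < 0)
        return NULL;

      /* The fd is shared with the compositor over the wayland socket;
       * the segment is freed once every reference to it is closed —
       * unless, as it turned out, references get leaked. */
      *fd_out = fd;
      return mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    }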

Of course, the compositor doesn’t create the leaked segments, the clients do, and then those clients share them with the compositor. So we probed a little deeper and found the origin of the leaking segments; they were coming from gnome-terminal. My next thought was to try to reproduce. After a few minutes I found out that typing:


$ while true; do echo; done

into my terminal and then switching focus to and from the terminal window made it leak a segment every time focus changed. So I had a reproducer and just needed to spend some time to debug it. Unfortunately, it was the end of the day and I had to get my daughter from daycare, so I shelved it for the evening. I did notice before I left, though, one oddity in the gtk+ wayland code: it was calling a function named _gdk_wayland_shm_surface_set_busy that contained a call to cairo_surface_reference. You would expect a function called set_something to be idempotent. That is to say, if you call it multiple times it shouldn’t add a new reference to a cairo surface each time. Could it be the surface was getting set “busy” when it was already set busy, causing it to leak a reference to the cairo surface associated with the shared memory, keeping it from getting cleaned up later?

I found out the next day that indeed, that was the case. That’s when I came up with a patch to make sure we never call set_busy when the surface was already busy. Sure enough, it fixed the leak. I wasn’t fully confident in it, though. I didn’t have a full big-picture understanding of the whole workflow between the compositor and gtk+, and it wasn’t clear to me if set_busy was ever supposed to get called when the surface was busy. I got in contact with the original author of the code, Jasper St. Pierre, to get his take. He thought the patch was okay (modulo some small style changes), but also said that part of the existing code needed to be redone.
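
The gist of the fix is simply making the function idempotent; roughly this (the struct and field names here are illustrative stand-ins, not the actual gtk+ code):

    #include <cairo.h>
    #include <glib.h>

    typedef struct {
      cairo_surface_t *surface;
      gboolean         busy;
    } ShmSurfaceData;   /* illustrative stand-in for the real gtk+ struct */

    static void
    shm_surface_set_busy (ShmSurfaceData *data)
    {
      if (data->busy)
        return;          /* already busy: don't take another reference */

      data->busy = TRUE;
      cairo_surface_reference (data->surface);
    }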

The point of the busy flag was to mark a shared memory region as currently being read by the compositor. If the buffer was busy, then gtk+ couldn’t draw to it without risking stepping on the compositor’s toes. If gtk+ needed to draw to a busy surface, it instead allocated a temporary buffer to do the drawing and then composited that temporary buffer back to the shared buffer at a later time. The problem was, as written, the “later time” wasn’t necessarily when the shared buffer was available again. The temporary buffer was created right before the toolkit staged some pixel updates, and copied back to the shared buffer after the toolkit was done with that one draw operation. The temporary buffer was scoped to the drawing operation, but the shared buffer wouldn’t be available for new contents until the next frame event some milliseconds later.

So my plan, after conferring with Matthias, was to change the code to not rely on getting the shared buffer back. We’d allocate a “staging” buffer, do all draw operations on it, hand it off to the compositor when we’re done doing updates, and forget about it. If we needed to do new drawing we’d allocate a new staging buffer, and so on. One downside of this approach is that the new staging buffer has to be initialized with the contents of the previously handed-off buffer. This is because the next drawing operation may only update a small part of the window (say, to blink a cursor), and we need the rest of the window to be drawn properly too. This read-back operation isn’t ideal, since it means copying around megabytes of pixel data. Thankfully, the wayland protocol has a mechanism in place to avoid the costly copy in most cases:


“If a client receives a release event before the frame callback requested in the same wl_surface.commit that attaches this wl_buffer to a surface, then the client is immediately free to re-use the buffer and its backing storage, and does not need a second buffer for the next surface content update.”

So that’s our out. If we get a release event on the buffer before the next frame event, the compositor is giving us the buffer back and we can reuse it as the next staging buffer directly. We would only need to allocate a new staging buffer if the compositor was tardy in returning the buffer to us. Alright, I had a plan and hammered out a patch on Friday. It didn’t leak, and from playing with the machine for a while, everything seemed to function, but there was one hiccup: I set a breakpoint in gdb to see if the buffer release event was coming in, and it wasn’t. That meant we were always doing the expensive copy operation. Again, I had to go, so I posted the patch to bugzilla and didn’t look at it again until the weekend. That’s when I discovered mutter wasn’t sending the release event for one buffer until it got replaced by another. I fixed mutter to send the release event as soon as it uploaded the pixel data to the GPU, and then everything started working great, so I posted the finalized version of the gtk+ patch with a proper commit message, etc.
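
In rough code, the staging-buffer dance looks something like the following (all of the types and helpers here are illustrative stand-ins for the real gdk/wayland machinery, declared just to make the sketch self-contained):

    /* Illustrative sketch only. */
    typedef struct Buffer Buffer;

    typedef struct {
      Buffer *released_buffer;   /* set when the compositor sends a release */
      Buffer *committed_buffer;  /* the last buffer we attached and committed */
      int     width, height;
    } WindowState;

    Buffer *buffer_new (int width, int height);
    void    buffer_copy_contents (Buffer *dest, const Buffer *src);

    static Buffer *
    get_staging_buffer (WindowState *state)
    {
      /* Cheap path: the compositor released the previous buffer before
       * the next frame event, so we can draw straight into it again. */
      if (state->released_buffer != NULL)
        {
          Buffer *reused = state->released_buffer;
          state->released_buffer = NULL;
          return reused;
        }

      /* Slow path: allocate a fresh buffer and seed it with the previous
       * contents, since the next draw may only touch a small region
       * (e.g. a cursor blink). */
      Buffer *staging = buffer_new (state->width, state->height);
      buffer_copy_contents (staging, state->committed_buffer);
      return staging;
    }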

There’s still some optimization that could be done for compositors that don’t handle early buffer release. Rather than initializing the staging buffer using cairo, we could get away with a lone memcpy() call. We know the buffer is linear and each row is right next to the previous in memory, so memcpy might be faster than going through all the cairo/pixman machinery. Alternatively, rather than initializing the staging buffer up front with the contents of the old buffer, we could wait until drawing is complete, and then only draw the parts of the buffer that haven’t been overwritten. It’s hard to say what the right way to go is without profiling, but both weston on GL and mutter support the early release feature now, so maybe it’s not worth spending too much time on anyway.

DX hackfest 2016 aftermath

The DX hackfest, and FOSDEM, are over. Thanks everyone for coming — and thanks to betacowork, ICAB, the GNOME Foundation, and the various companies who allowed people to come along. Thanks to Collabora for sending me along and sponsoring snacks and dinner one evening.

What did we do?

Web Engines Hackfest according to me

And once again, in December we celebrated the hackfest. This year it happened between December 7-9 at the Igalia premises, and the scope was much broader than WebKitGTK+, which is why it was renamed the Web Engines Hackfest. We wanted to gather people working on all open source web engines, and we succeeded: we had people working on WebKit, Chromium/Blink and Servo.

At the edition before this one I was working with Youenn Fablet (from Canon) on the Streams API implementation in WebKit, and we spent our time on the same thing again. We have to say that things are much more mature now. During the hackfest we spent our time fixing the JavaScriptCore built-ins inside WebCore, and we advanced on the automatic import of the specification’s web platform tests, which are based on our prior test implementation. Since they are managed there now, it does not make sense to maintain them inside WebKit too; we just import them. I must say that our implementation is fairly complete, since we support the current version of the spec and have almost all tests passing, including ReadableStream, WritableStream and the built-in strategy classes. What is missing now is making Streams work together with other APIs, such as Media Source Extensions, Fetch or XMLHttpRequest.

There were some talks during the hackfest, and we did not want to be left out, so we gave our own about Streams. You can enjoy it here:

You can see all the hackfest talks in this YouTube playlist. The ones I liked most were the one by Michael Catanzaro about HTTP security, which is always interesting given the current clumsy political movements against cryptography, and the one by Dominik Röttsches about font rendering. It is really amazing what a browser has to do just to get some letters painted on the screen (and look good).

As usual, the environment was amazing and we had a great time, including the traditional Street Fighter match, where Gustavo found a worthy challenger in Changseok :)

Of course, I would like to thank Collabora and Igalia for sponsoring the event!

And by the way, quite shortly after that, I became a WebKit reviewer!

I'm joining Igalia!

On Monday, February 1st, 2016, I have the pleasure of joining Igalia’s WebKit team as an intern for the next five months. I will probably work on WebKitGTK+ and/or GNOME Web, but what’s sure is that it will be fun and interesting! =D

January 31, 2016

Belgian Vacation

On the 26th I finished the semester exams, and the day afterwards I went on a trip.

It kicked off with me getting around Brussels with guidance from my print-screened map mashups – I’m looking forward to being able to use Amisha’s print support in GNOME Maps for this in the future, as I have no internet on my phone. :)

Wednesday was spent at the Developer Experience hackfest, discussing the future of the GNOME Developer Center with kat, lasse, afranke, matthieu and fredp. Matthieu demoed his tool hotdoc, and ptomato showed off his efforts on getting GJS documentation online. My understanding of where we ended up is that there was some consensus to try to unify the hand-written docs so they are written in a single language (e.g. Mallard) and make them play well with hotdoc, which can integrate that with source code documentation. Hotdoc also allows a lot of cool things, such as being able to change the language of the code examples, and online editing.

I spent another part of the hackfest creating a CSS style that can make it easier to use our TemplateFancy for applications which want to be newcomer friendly. I partly succeeded in creating a navigation bar – I’m having some issues finding a non-hacky way of center-aligning the li elements relative to the width of the ul element, though, so I set that effort aside after a while. Beyond that, what remains now is to create CSS classes for title, subtitle and front-page content.

My small laptop in a large room with a dozen hackers. The location was provided by betacowork – they are awesome.

The rest of my productive time went into iterating on some designs for planned features of Polari:

  • IRC commands auto-completion
  • Undoing connection removal (landed)
  • NickServ handling
  • Improvements for changing nicks (landed)
  • Contextual Popovers
Snippets of small UI mocks I’ve been working on (some of it is still WIP).

Then FOSDEM happened. This year GNOME had a booth too, although not as big as in previous years. I had designed merchandise and kat printed it – we had hoodies, t-shirts, mugs, stickers and a demo computer with the latest stable version of GNOME.


I watched Christian Hergert’s demo of Builder, which was quite insightful and something I can recommend you watch when a video appears online. Beyond that I was mainly standing at the booth, selling wear and talking to GNOME users. Some visitors asked how we (the people at the booth) were involved with the project. Some were even interested in contributing, so I showed them our newcomers page. When they appear on IRC, make sure to show them we don’t bite (most of the time)!

Today is the 31st of January, which marks the end of the vacation. The last semester of my bachelor’s degree starts tomorrow; exciting times ahead.

What’s going on with GNOME To Do

Aye folks! Over the last few weeks, GNOME To Do has seen quite a big number of changes. As some of you may not be strict git followers, a good review of the latest changes may come in handy. Let’s go!

A new list view

Yeah, that’s it – GNOME To Do now has grid and list views for task lists. A list view is very useful when you have many task lists. Check this out:

The traditional grid view
The new list view

I’m expecting some input on it, as I strongly believe this is not the ideal UI for the list view. For example, it’d be nice to display the number of tasks each list has. Anyway, a small new feature was added as well – task headers.

Task headers on Scheduled panel

Since I started using Todoist, I have absolutely fallen in love with its organization of tasks. Seriously, Todoist designers, you’re really skillful. Congratulations! So I decided that To Do also needed something similar, and here’s the result:

Not perfect yet, but at least working.

Hope this will be useful! Now, the single most important thing that this release has to offer. Ladies and gentlemen, please say hello to the new plugin system.

The plugin system

There are many things we can do with a personal task manager. We can have integration with different sources of tasks like Google Tasks, Remember the Milk and Todoist, among others. We can also have new features like user statistics, Bugzilla integration, different handling of recurrence, et cetera. Imagination is the limit.

Since I’m pretty sure I alone am not enough to implement all the crazy ideas, I decided to make To Do handle plugins, so third-party developers can make their own extensions. This work is obviously inspired by the awesome work Christian Hergert did with GNOME Builder.

A sneak peek:

Oops! Look, you’re not seeing that I’m working on Todoist integration, ok?

I want to take this paragraph just to say a big thank you to Patrick Griffis – he gave invaluable help with fixups on the plugin system. Please, dudes, a round of applause for him!

The plugin system itself is built on top of libpeas, which happens to have a really mature API. I took special care to reimplement the plugin manager window to better fit the GNOME HIG, in the hope that it’ll serve as an inspiration for the libpeas-gtk widgets to be updated.
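
For the curious, a libpeas plugin module boils down to a shared library exporting one registration hook. A sketch of what that tends to look like (GTD_TYPE_ACTIVATABLE and MY_TYPE_TODOIST_PLUGIN are assumptions on my part; the real To Do extension interface will be covered in the follow-up posts):

    #include <libpeas/peas.h>

    /* libpeas calls this entry point when it loads the plugin module. */
    G_MODULE_EXPORT void
    peas_register_types (PeasObjectModule *module)
    {
      /* Register our plugin class as an implementation of the extension
       * point To Do exposes (interface and plugin type names assumed,
       * provided elsewhere in the plugin). */
      peas_object_module_register_extension_type (module,
                                                  GTD_TYPE_ACTIVATABLE,
                                                  MY_TYPE_TODOIST_PLUGIN);
    }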

And yeah, you’re not being tricked by your eyes. I am working on Todoist integration – you can check it here.

The next posts will explain the new To Do API with an example fictional plugin. Stay tuned!

Show me the way

If you need further proof that OpenStreetMap is a great project, here’s a very nice near real-time animation of the most recent edits: https://osmlab.github.io/show-me-the-way/


Seen today at FOSDEM, at the stand of the Humanitarian OpenStreetMap team which also deserves attention: https://hotosm.org



2016-01-31 Sunday.

  • Up early; breakfast meeting near the ULB. Sat in a corner writing slides at great length - a terrible use of good time at FOSDEM - when I should have been chatting to people. Frustrating to see them passing and have no time; made some progress at least with Kendy's help.
  • Gave my talk; slides as hybrid-PDF below:
    Slides of Scaling and Securing LibreOffice on-line: hybrid PDF

January 30, 2016

2016-01-30 Saturday.

  • Up; mail chew, breakfast with Kendy; to the venue ... caught up with lots of old friends; exciting times. Announced our exciting partnership with Kolab - really looking forward to working closely together.
  • Out for LibreOffice dinner, not improved by transient dancer - but improved by good company. Back to the hotel late - up until 4:30 am or so working on my talk.

We are getting there...

Hey pals,

"We are getting there". Yes, these were the words said by my mentor Jonas when I the attached the latest patch, which made me extremely happy and jumping. :)

In the last two weeks, I worked on adding minimaps, enabling different layouts for long and short routes, and refactoring the code. After discussion with the design team, it was confirmed that minimaps were to be added for the starting and finishing points only.

To make it happen, I learnt the concepts of abstract classes and the factory method. The print layout class is made abstract and acts as a tool-box, or rather a sort of library, which provides its sub-classes with the methods used to get the different surfaces (MapView, Instruction, Header, etc.).

Depending on the distance, we differentiate between long and short routes, and so between the layouts for each. As shown in the mockup, the long route layout contains minimap surfaces and the shorter one doesn’t, as the route is already clear in the complete MapView itself. Following are screenshots of both types.

Delhi to Mumbai has been my favorite route for testing purposes. :p




For the shorter route I took Infocity to DA-IICT, a route I follow very frequently. :)


The next step is to refine the design to make it more user-friendly. Some more code refactoring is also required.

I will be back pretty soon. Till then, stay tuned. :)

Cheers,
Amisha

Main character design (1)

Today let’s discuss character design! This is a complex topic, and I can foresee several other blog posts on it. What we call character design is often called a “study”, because this is what it is: the “study” of a character. It is more than just a graphics style, as you would use for a standalone artwork. And as such, it involves many attempts and experiments.

About character sheets

With Aryeom, we have discussed and reviewed Marmot’s design a lot. This is the main character, after all. So she did and redid him a few times. You may remember some of the early designs, which we posted exactly a year ago, and you will likely recognize the design we used in our teaser.

ZeMarmot, first version ZeMarmot, second version

These types of design sheets are called “turnaround” sheets, since they show the character from various angles, usually at least the front, back and one side (or both sides if the character or their clothing is not really symmetric). Sometimes you may have more angles, or even specific sheets with smaller angle rotations, focusing on a particular side of the body (like a few angles of the back, etc.).

There were actually a lot of other versions of Marmot; for instance, one of the early concepts was very cute, and I still like it very much.

Marmot research

As you can see, this is a different style of character sheet. You often find these under the name of “rough” character sheets, since they are quick sketches (which is what a rough is) of the character in various poses or situations. These different kinds of character sheets are complementary, and in the end you have many of them for the same design. Rough ones usually come earlier in the concept design, though it may depend on the artist.

So what are these for? They have multiple purposes. Of course a main goal is collaborative work: if several artists have to draw the same character, one has to make sure everyone doesn’t draw a personal variation. Consistency is important. But even if you were to work alone, you need consistency with yourself! With time, with only memory and your previous work as reference, you lose the chosen style and the character’s attributes. You cannot even use your own scenes as reference, because you would slowly diverge from scene to scene. Maybe the character in scene 30 looks like the one in scene 29, and the same goes for scenes 29 and 28. But what if you were making slight changes while barely noticing, or letting them slip since they may be seen as minor glitches? They would add up, so that the styles of scene 1 and scene 30 end up quite different in a very noticeable way.
This is why you need a stable reference, fixed in time, always the same. So you prepare your reference for various cases, from all angles, and plan a whole bunch of possible poses and actions to refer to later. These references are the character sheets.

Note that even (or especially) for 3D films, you need character sheets as a detailed reference for the 3D modeler.

Finally, another good purpose is to build a solid character. When you draw and redraw, you get to understand him/her, to grasp one’s purpose and behavior, one’s particular attributes. Why is he wearing this? How did one get this scar? Should one have a birthmark here? What kind of action is one likely to do? Is it a character who runs? Who cries often? Who is happy? Without this, you end up building the character at the same time as you produce the film, and create irregularities, or changes which don’t add up. How many times have I read webcomics where a character would be right-handed or left-handed in different scenes, or where attributes as important as scars would sometimes be forgotten. Why? Likely because these attributes had no real meaning for the creator. (Of course mistakes happen to anyone, with or without character design. This is not a perfect fail-safe tool. But it helps.)
Once again, this is about consistency, but also about a deep character that people will love or hate, and in any case one they will believe in.

Note also that all this is just “common knowledge”, and there are styles for which detailed character sheets (or even character sheets at all) might not be necessary, or perhaps even undesirable (that would be rare, though). Rules are there to be broken, as long as you break them on purpose and know what it entails. In the end, use your common sense.

Marmot’s current design

Something we heard a few times about our teaser was: «this cat is so cute!». Now, we don’t want to make a scientific documentary, but we still want people to recognize which animal we are drawing. (OK, it wasn’t that bad: many people did recognize a marmot right away. It also turns out that in many countries marmots are quite unknown animals, which explains some of the failures to recognize one.) On the other hand, we don’t want too much anthropomorphism in our movie; then again, this is an animation about a marmot traveling with a swag on its shoulder, so there is a good share of anthropomorphism anyway, right? That makes it OK to allow some scientific aberrations for the purpose of a fun movie. Let’s just strike a good compromise between scientific rigor and fun.

After seeing and photographing a lot of marmots in the Alps, Aryeom tried a whole new bunch of designs. For reference, here is a real Alpine marmot:

Alpine Marmot in Saint Veran (2015-09-21)

Alpine Marmot in Saint Veran (2015-09-21)

To cut to the chase, here is the current design for our main character.

Marmot: character design 3

Marmot: character design 3

Note that we still allow ourselves to change the design before actual production (i.e. final drawing) begins, but we thought it worth updating you with this new design, which you may already have noticed in our new year drawing.

What to say about it? First, it obviously looks a lot more like a marmot while keeping its share of anthropomorphism. In particular, the feet look shorter than in most of our other designs, the snout is closer to the real deal, the eyes are on the sides rather than the front, and the ears are lower than in the teaser version (the ears in particular were probably the attribute that made people believe it was a cat). The fingers also properly follow marmot anatomy, with four fingers on the arms and five on the feet.

The color scheme has also been updated: brighter and more brownish. The grey color made the character look too much like a mouse (even though many real marmots are quite grey). On this topic, notice how Aryeom places the character’s color palette directly on the design sheet, to easily pick colors from it later (a colorized character sheet is not only about drawing consistency but also color consistency).

I think this is all for now. Soon more on character design and other topics!


Note: as often, all the photographs and images in this blog post are works by Aryeom, under Creative Commons by-sa 4.0 international.

January 29, 2016

Crack from the Gnome hackfest

Screenshot from 2016-01-29 17-58-22

After clicking the button:

Screenshot from 2016-01-29 17-58-35

Developer Experience Hackfest 2016

I’m happy to attend the Developer Experience hackfest, once again in Brussels thanks to our kind hosts at Betacowork.

My focus has been primarily on xdg-app; you can find more coverage on Alex’s blog. I’ve been helping out with the creation of new application manifests, and I’ve been able to add Documents, Weather and Clocks. I’ve also improved the nightly SDK build manifests with a few missing libraries along the way, and added a patch to GeoClue to allow building without the service backend.

I hope to see most of the GNOME applications gaining an xdg-app manifest this cycle, so that we’ll be able to install them as bundles in time for the 3.20 release, now that gnome-software can manage them!

Today, I’m looking forward to spending time with Jonas and Mattias to work on a plan for offline support in GNOME Maps.

I also want to thank the GNOME Foundation for sponsoring my travel, Betacowork again for hosting the hackfest, Collabora for sponsoring food and snacks and my employer, Endless, for giving me the chance to attend.

sponsored-badge-shadow

Instrumenting the GLib main loop with Dunfell

tl;dr: Visualise your main context and sources using Dunfell. Feedback and ideas welcome.

At the DX hackfest, I’ve been working on a new tool for instrumenting and visualising the behaviour of the GLib main context (or main contexts) in your program.

Screenshot from 2016-01-29 11-17-35

It’s called Dunfell (because I’m a sucker for hills) and at a high level it works by using SystemTap to record various GMainContext interactions in your program, saving them to a log file. The log file can then be examined using a viewer program.
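To give an idea of what gets recorded, here is a minimal sketch (my own illustration, not code shipped with Dunfell) of the kind of program it can trace; the creation, attachment and dispatch of the source below would each show up as events in the log:

/* Minimal GLib program whose main context activity could be traced. */
#include <glib.h>

static gboolean
on_timeout (gpointer user_data)
{
  GMainLoop *loop = user_data;

  /* Each dispatch of this source appears as an event in the trace. */
  g_message ("timeout dispatched");
  g_main_loop_quit (loop);

  return G_SOURCE_REMOVE;
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  /* Creating and attaching this source are exactly the kinds of
   * GMainContext interactions the SystemTap probes record. */
  g_timeout_add_seconds (1, on_timeout, loop);

  g_main_loop_run (loop);
  g_main_loop_unref (loop);

  return 0;
}

Running something like this under the SystemTap script should, assuming the patched GLib mentioned below, produce a log with one source creation and one dispatch.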

The source is available on GitLab or GitHub because I still haven’t decided which is better.

In the screenshot above, each vertical line is a thread, each blue box is one dispatch phase of the main context which is currently running on that thread, each orange blob is a new GSource being created, and the green blob is a GSource which has been selected for closer inspection.

At the moment, it requires a couple of GLib patches to add some more SystemTap probe points, and it also requires a recent version of GTK+. It needs SystemTap, and I’ve only tested it on Fedora, so it might need some patching to work with the SystemTap installed on other distributions.

Screenshot from 2016-01-29 11-57-39

This screenshot is of a trace of the buffered-input-stream test from GIO, showing I/O callbacks being made across threads as idle source callbacks.

More visualisation ideas are welcome! At the moment, what Dunfell draws is quite simplistic. I hope it will be able to solve various common debugging problems eventually but suggestions for ways to do this intuitively, or for other problems to visualise, are welcome. Here are the use cases I was initially thinking about (from the README):

  • Detect GSources which are never added to a GMainContext.
  • Detect GSources which are dispatched too often (i.e. every main context iteration).
  • Detect GSources whose dispatch function takes too long (and hence blocks the main context); both this and the previous pathology are illustrated in the sketch after this list.
  • Detect GSources which are never removed from their GMainContext after being dispatched (but which are never dispatched again).
  • Detect GMainContexts which have GSources attached or (especially) events pending, but which aren’t being iterated.
  • Monitor the load on each GMainContext, such as how many GSources it has attached, and how many events are processed each iteration.
  • Monitor ongoing asynchronous calls and GTasks, giving insight into their nesting and dependencies.
  • Monitor unfinished or stalled asynchronous calls.
  • Allow users to record logs to send to the developers for debugging on a different machine. The users may have to install additional software to record these logs (some component of Dunfell, plus its dependencies), but should not have to recompile or otherwise modify the program being debugged.
  • Work with programs which purely use GLib, through to programs which use GLib, GIO and GTK+.
  • Allow visualisation of this data, both in a standalone program, and in an IDE such as GNOME Builder.
  • Allow visualising differences between two traces.
  • Minimise runtime overhead of logging a program, to reduce the risk of disturbing race conditions by enabling logging.
  • Connecting to an already-running program is not a requirement, since by the time you’ve decided there’s a problem with a program, it’s already in the wrong state.
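As an illustration of those two pathologies, here is a small hypothetical program (mine, not one of Dunfell’s test cases) that exhibits both at once, and which the tool should make immediately visible in a trace:

/* Two main loop pathologies in one program. */
#include <glib.h>

static gboolean
busy_idle (gpointer user_data)
{
  /* Returning G_SOURCE_CONTINUE from an idle callback means this source
   * is dispatched on every main context iteration, spinning the CPU. */
  return G_SOURCE_CONTINUE;
}

static gboolean
slow_dispatch (gpointer user_data)
{
  GMainLoop *loop = user_data;

  /* A long synchronous sleep in a dispatch function blocks every other
   * source attached to the same context for its whole duration. */
  g_usleep (5 * G_USEC_PER_SEC);
  g_main_loop_quit (loop);

  return G_SOURCE_REMOVE;
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  g_idle_add (busy_idle, NULL);
  g_timeout_add (100, slow_dispatch, loop);

  g_main_loop_run (loop);
  g_main_loop_unref (loop);

  return 0;
}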

Rio

I was really pleased to see Endless, the little company with big plans, initiate a GNOME Design hackfest in Rio.

The ground team in Rio arranged visits to two locations where we met the users that Endless is targeting. While not strictly user testing sessions, they helped us better understand the context of the product and get a glimpse of life in Rocinha, one of Rio’s famous favelas, and in the more remote, rural Magé. I probably would never have had a chance to see Brazil this way otherwise.

Points of diversion

During the workshop at the Endless offices we went through many areas we identified as problematic in both stock GNOME and Endless OS, and tried to see whether we could converge and cooperate on common solutions. Currently Endless isn’t using stock GNOME 3 on its devices. We aren’t focusing as much on the shell now, as there is a ton of work to be done in the app space, but there are a few areas in the shell we could revisit.

GNOME could do a little better in terms of discoverability. We investigated the role of the app picker versus the window switcher in the overview, and the possibility of entering the overview on boot. Once some of our design choices were explained, Endless came to consider our solution a good way forward. We also analyzed the unified system menu, window controls, notifications, and the lock screen/screen shield.

Endless demoed how the GNOME app-provided system search has been used to great effect on their mostly offline devices. Think “offline google”.

DSC02567 DSC02589 DSC02616

Another noteworthy detail was the use of CRT screens. The new mini devices sport a cinch (RCA) connection to old PAL/NTSC CRT TVs. Such small resolutions and poor image quality put extra constraints on the design to keep things legible. A nice side effect is that Endless has investigated some responsive layout solutions for GTK+, which they demoed.

I also presented the GNOME design team’s workflow and the free software toolchain we use, and did a little demo of Inkscape for icon design and wireframing, and of Blender for motion design.

Last but not least, I’d like to thank the GNOME Foundation for making it possible for me to fly to Rio.

Rio Hackfest Photos

January 28, 2016

Project Templates

Now that Builder has integrated Template-GLib, I started working on creating projects from templates. Today, this is only supported from the command line. Obviously the goal is to have it in the UI side-by-side with other project creation methods.

I’ve put together a demo for creating a new shared-library project with autotools. Once I’m happy with the design, I’ll document project templates so others can easily contribute new templates.
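For the curious, Template-GLib itself is a small templating engine. Below is a rough, hypothetical sketch of expanding a snippet with it; the identifiers reflect my reading of Template-GLib’s public API and may differ in detail, and this is not Builder’s actual template code:

/* Hypothetical Template-GLib usage; API names are assumptions. */
#include <tmpl-glib.h>

int
main (void)
{
  GError *error = NULL;
  TmplTemplate *tmpl = tmpl_template_new (NULL); /* NULL: no custom locator */
  TmplScope *scope = tmpl_scope_new ();
  gchar *expanded;

  /* A template is plain text containing {{expressions}}. */
  if (!tmpl_template_parse_string (tmpl, "Project: {{name}}\n", &error))
    g_error ("parse: %s", error->message);

  /* The scope supplies values for those expressions. */
  tmpl_scope_set_string (scope, "name", "my-project");

  expanded = tmpl_template_expand_string (tmpl, scope, &error);
  if (expanded == NULL)
    g_error ("expand: %s", error->message);

  g_print ("%s", expanded); /* prints "Project: my-project" */

  g_free (expanded);
  g_object_unref (tmpl);

  return 0;
}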

Anyway, you can give it a go as follows.

ide create-project my-project -t shared-library
cd my-project
ide build

Wikimedia in Google Code-in 2015

(Google Code-in and the Google Code-in logo are trademarks of Google Inc.)

Google Code-in 2015 is over. As a co-admin and mentor for Wikimedia (one of the 14 organizations that took part and provided mentors and tasks) I can say it’s been crazy as usual. :)

To list some of the students’ achievements:

  • More than a dozen MediaWiki extensions converted to using the extension registration mechanism
  • Confirmation dialogs in UploadWizard and TimedMediaHandler use OOjs-UI
  • Vagrant roles created for the EmbedVideo and YouTube extensions
  • Two more scraping functions in the html-metadata node.js library (used by Citoid)
  • Many MediaWiki documentation pages marked as translatable
  • lc, lcfirst, uc and ucfirst magic words implemented in jqueryMsg
  • Screenshots added to some extension homepages on mediawiki.org
  • ReCaptchaNoCaptcha of the ConfirmEdit extension uses the UI language for the captcha
  • MobileFrontend, MultimediaViewer, UploadWizard, Newsletter, Huggle, and Pywikibot received numerous improvements (too many to list)
  • Long deprecated wfMsg* calls were removed from many extensions
  • The CommonsMetadata extension parses vcards in the src field
  • The MediaWiki core API exposes “actual watchers” as in “action=info”
  • MediaWiki image thumbnails are interlaced whenever possible
  • Kiwix is installable/moveable to the SD card, automatically opens the virtual keyboard for “find in page”, (re)starts with the last open article
  • imageinfo queries in MultimediaViewer are cached
  • The Twinkle gadget’s set of article maintenance tags was audited and its XFD module has preview functionality
  • The RandomRootPage extension got merged into MediaWiki core
  • One can remove items from Gather collections
  • A new MediaWiki maintenance script imports content from text files
  • Pywikibot has action=mergehistory support implemented
  • Huggle plays a tone when someone writes something
  • Many i18n issues fixed and strings improved
  • Namespace aliases added to MediaWiki’s export dumps
  • The Translate extension is compatible with PHP 7
  • …and many, many, more.

Numerous GCI participants also blogged about their GCI experience with Wikimedia.

The Grand Prize winners and finalists will be announced on February 8th.

Congratulations to our many students and 35 mentors for fixing 461 tasks, and thank you for your hard work and your contributions to free software and free knowledge.
See you around on IRC, mailing lists, Phabricator tasks, and Gerrit changesets!

Graph with weekly numbers of Wikimedia GCI tasks

xdg-app at the Developer Experience Hackfest

I’m here at the gnome Developer Experience Hackfest in Brussels working on xdg-app. Just before I left I created a runtime for Gnome based on whatever is in git master. Now we’ve started to create app bundles for the gnome applications.

I’ve added builds of evince, gedit, gnome-builder, and maps. Cosimo has added Weather and Clocks, and Alberto added gnome-calculator. The build manifests for these are in my github repo, and I have set up an automatic build of these (and the SDK).

Unfortunately the build machine is way underpowered, so it’s not yet useful for public consumption. I’m working on getting this to build on the gnome build machines, which means people can start testing the latest builds of gnome apps on any distro.

I’ve also been working with Simon to fix various issues that he’s seeing while packaging xdg-app for Debian.

On to building more apps!

Thanks to the gnome foundation for sponsoring this trip, and arranging the hackfest.

January 26, 2016

The dangerous “UI team”

Background: customers hire products to do a job

I enjoyed Nikkel Blaase’s recent discussion of product design. In this article he puts it this way:

product_design

In another article he puts it this way:

product_design_2

This isn’t a new insight, but it’s still important, and well-stated here in these graphics. The product has to work and it can only work if you know what it’s supposed to do, and who it’s supposed to do it for.

“Interact with a UI” is not a job

Customers do not want to click on UI controls. Nor do they want to browse a web site, or “log in,” or “manage” anything, or for that matter interact with your product in any way. Those aren’t goals people have when they wake up in the morning. They’re more like tedious tasks they discover later.

So why does your company have a “UI team”? You’ve chartered a team with the mission “people need to click on stuff, let’s give them some clicky pixels.”

The “UI team” has “UI” right there in the name (sounds user-friendly, doesn’t it?). But this is a bottom-up, implementation-driven way to define a team. You’ve defined the team by solution rather than by problem.

Instead, define your teams by asking them to solve real customer problems in the best way they can come up with. Product design doesn’t mean “come up with some pixels,” it means “solve the problem.”

The best UX will often be no UI at all: if the customer gets your product and their problem goes away instantly with no further steps, that’s amazing. Failing that, less UI beats more UI; and for many customers and problems, a traditional GUI won’t be right. Alternatives: command line, voice recognition, gestures, remote control, documentation, custom hardware, training, …

When we were inexperienced and clueless at Red Hat back in the day (1999 or so), we started stamping GUIs on all the things, because “hard to use” was a frequent (and true) criticism of Linux and we’d heard that a GUI would solve that. Our naïveté resulted in stuff like this (a later, improved version, slightly more recent than the 1999 era):

http://s0.cyberciti.org/uploads/faq/2007/04/redhat-rhel5-system-config-network-0.jpg

We took whatever was in the config file or command line tools and translated it into GTK+ widgets. This may be mildly useful (because it’s more discoverable than the command line), but it’s still a learning curve… plus this UI was a ton of work to implement!

In the modern Linux community, people have a little more clue about UX. There’s still a (nicer) screen like this buried somewhere, but I never use it. The main UX is that if you plug in a network cable, the computer connects to the network. There’s no need to open a window or click any widgets at all.

Most of the work to implement “the computer connects automatically” was behind the scenes; it was backend work, not UI work.

At Red Hat, we should have gone straight for some important problem that real people had, such as “how do I get this computer on the network?”, and solved that. Eventually we did, but only after wasting time.

Don’t start with the mission “make a UI for <insert hard-to-use thing here>.” Nobody wants to click your widgets.

Define teams by the problem they’ll be solving. Put some people on the team that know how to wrangle the backend. Put some people on the team that know HTML or UI toolkit APIs. Put a designer on the team. Maybe some QA and a copywriter. Give the team all the skills they need, but ask them to solve an articulated customer problem.

Another way to put it: a team full of hammers will go around looking for nails. A team that’s a whole toolbox might figure out what really needs doing.

P.S. a related wrong belief: that you can “build the backend first” then “put a UI on it” later.
