GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

June 24, 2016

2016-06-24 Friday.

  • Out for a run before school with J. On Brexit - I voted in, so disappointed, but interested to see what the result will be; presumably not an Irish-style neverendum. Will the golden rule be applied to exiters? Why are the CAC 40 and EURO STOXX 50 down 8.5% now when the FTSE 100 is down 5%? When does the curse of "interesting times" go away again?

Flatpak builds available on a variety of architectures

Following the recent work we’ve been doing at Codethink in cooperation with Endless, we have for some time now been able to build flatpak SDKs and apps for ARM architectures, and consequently also for 32-bit Intel.

Alex has been tying this together and setting up the Intel build machines, and as of this week flatpak builds are available at sdk.gnome.org in a variety of arches and flavors.

Arches

The supported architectures are as follows:

  • x86_64: the 64-bit Intel architecture, and the only one we have been building until now.
  • i386: the name we are using for 32-bit Intel. It is i386 in name only; the builds are in fact tuned for the i586 instruction set.
  • aarch64: speaks for itself, this is the 64-bit ARM architecture.
  • arm: like i386, a generic name chosen to indicate 32-bit ARM. This build is tuned for ARMv7-A processors and makes use of modern features such as VFPv3 and the NEON SIMD unit. In other words, it will not run on older ARM architectures but should run well on modern ARM processors such as the Cortex-A7 featured in the Raspberry Pi 2.

Build Bots

The build bots are currently driven with this set of build scripts, which should be able to turn an Intel or ARM machine with a vanilla Ubuntu 16.04 or RHEL 7 installation into a flatpak build machine.

ARM and Intel builds run on a few distributed build machines and are then propagated to sdk.gnome.org for distribution.

The build machines also push build status notifications to IRC. Currently we have it set up so that only failed builds are announced in #flatpak on freenode, while the fully verbose build notifications go to #flatpak-builds, also on freenode (so you are invited to lurk in #flatpak-builds if you would like to monitor how your favorite app or library is faring on the various build architectures).

 

Many thanks to all who were involved in making this happen: thanks to Alex for being exceptionally responsive and helpful on IRC; to Endless for sponsoring the development of these build services and ARM support; to Codethink for providing the build machines for the flatpak ARM builds; and a special thanks to Dave Page for setting up the ARM build server infrastructure and filling in the IT knowledge gaps where I fall short (specifically with things networking-related).

June 23, 2016

2016-06-23 Thursday.

  • Up early, mail chew, customer call, pleased to see our first product integration on the market with ownCloud. Some ideas take lots of time to mature - I recall discussing what was needed with Markus Rex at CeBIT, March 2014. Some amazing work from the Collabora team to get everything together for today.
  • Lunch. Plugged away at slideware until late.

GSoC 2016 Introduction

Hi everyone! My name’s Gaurav Narula and I’m a third year undergrad student at BITS Pilani K.K. Birla Goa Campus. I’m pursuing a double major in Economics and Computer Science.

I’ve been using GNOME for a long time - the first instance I recall is when I got a Live CD of Ubuntu 6.10 via ShipIt. GNOME has evolved a lot since then and has remained my go-to DE over all these years. I started with GTK+ development rather recently, with some contributions to gnome-mpv, a GTK+ frontend for the MPV Media Player, around October 2015.

A few months later, I stumbled upon GNOME Music while searching for a new music player, and it soon became my default music player. I then decided to become more involved with the project: I dived into its source and tried to fix some small bugs around February, with assistance from Felipe Borges, Victor Toso and Carlos Garnacho along the way.

Coming to my project: GNOME Music currently only allows access to one’s local music collection. Over the past few years, GNOME apps have integrated well with ownCloud to sync and retrieve files stored remotely, and Music shouldn’t be an exception :) The goal of the project is to allow playing and searching one’s music collection over ownCloud. ownCloud’s Music app exposes an Ampache API, which will be used to develop a Grilo plugin to allow remote media discovery in Music. Victor’s work from GSoC 2013 allows writing the plugin in Lua and will be of great help in quickly getting things working. Recent changes in ownCloud’s Music app, with help from Morris Jobke, have set the stage and I can’t wait to begin work!

I’m fortunate to have Felipe Borges (GNOME) and Lukas Reschke (ownCloud) as my mentors, along with many other people from both organisations who’ve helped me all along. Looking forward to an exciting summer ahead! Stay tuned for more updates :)

Grilo Plugins: ownCloud source

This week concluded weeks 3-4 of my GSoC project on adding ownCloud support to GNOME Music. Carrying on from my previous post: after adding support for ownCloud Music in GOA, I’ve been working on implementing a grilo plugin for it.

A task that seemed fairly straightforward, given ownCloud Music’s implementation of the Ampache API, hit a bit of a roadblock: the app didn’t support the Album Artist ID3 tag (TPE2). This was crucial since the grilo plugin will later be used by gnome-music, which uses the tag quite often.

There seems to be a bit of ambiguity around the usage of the Album Artist tag; after digging around a bit, Michael Koby’s article on the topic makes it pretty clear. Adding support for the tag in ownCloud Music required changing the schema and rewriting a fair bit of the queries. On the frontend of ownCloud’s music app, albums will now be grouped by Album Artist instead of Track Artist, which keeps an album in one place even when it has tracks by different artists. The code for the above changes is being reviewed and can be accessed on GitHub.

Grouping Albums by Album Artist

Moving on, I was excited to write the grilo plugin in Lua. While I didn’t have much experience with the language, it didn’t feel strange at all, perhaps because of my familiarity with Python. Kudos to Tyler Neylon’s screencast on working with the C API and the Lua stack in particular, which helped me immensely in writing some wrapper methods in C.
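
For anyone curious what those wrapper methods look like in spirit, here is a minimal, hypothetical sketch (not code from the actual plugin; the Lua function name my_parse is made up) of a C helper calling into Lua through the stack with the standard Lua C API:

/* Hypothetical sketch: call a global Lua function my_parse(url) from C
 * and fetch its string result from the Lua stack. */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>

static const char *
call_lua_parser (lua_State *L, const char *url)
{
  lua_getglobal (L, "my_parse");   /* push the Lua function onto the stack */
  lua_pushstring (L, url);         /* push its single argument */

  /* 1 argument, 1 result; on error the error message is left on the stack */
  if (lua_pcall (L, 1, 1, 0) != 0)
    {
      fprintf (stderr, "Lua error: %s\n", lua_tostring (L, -1));
      lua_pop (L, 1);
      return NULL;
    }

  /* the result now sits on top of the stack; it remains valid until popped,
   * so the caller should copy it and then pop the stack */
  return lua_tostring (L, -1);
}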

grilo-test-api showing the ownCloud source in action

Shout out to Bastien Nocera and Victor Toso who helped me all along during the development of the plugin and the required wrapper methods :) Next up, working on queries and moving over to the gnome-music side of things!

Bad news: I’m not attending GUADEC

… not this year.

A few minutes before finishing my sponsorship email, I received some very annoying and sad news: my university apparently decided – without even asking about my schedule – that I’ll have to give a presentation between August 14 and 18, otherwise they’ll kick me out.

I was already looking for paçocas (if you’re not aware, they may be the best Brazilian popular sweet ever made) and gathering some money to pay off my beer debts with all the great people out there.

But no.

Feaneron is sad this night😦

June 22, 2016

GTK+ hackfest 2016

A dozen GNOME hackers invaded the Red Hat office in Toronto last week, to spend four days planning the next year of work on our favourite toolkit, GTK+; and to think about how Flatpak applications can best integrate with the rest of the desktop.

What did we do?

  • Worked out an approach for versioning GTK+ in future, to improve the balance between stability and speed of development. This has turned into a wiki page.
  • I demoed Dunfell and added support for visualising GTasks to it. I don’t know how much time I will have for it in the near future, so help and feedback are welcome.
  • There was a detailed discussion of portals for Flatpak, including lots of use cases, and the basics of a security design were decided which allows the most code reuse while also separating functionality. Simon has written more about this.
  • I missed some of the architectural discussion about the future of GTK+ (including moving some classes around, merging some things and stripping out some outdated things), but I believe Benjamin had useful discussions with people about it.
  • Allan, Philip, Mike and I looked at using hotdoc for developer.gnome.org, and possible layouts for a new version of the site. Christian spent some time thinking about integration of documentation into GNOME Builder.
  • Allison did a lot of blogging, and plotted with Alex to add some devious new GVariant functionality to make everyone’s lives easier when writing parsers — I’ll leave her to blog about it.

Thanks to Collabora for sending me along to take part!

After the hackfest, I spent a few days exploring Toronto, and as a result ended up very sunburned.

Busy at work in the hackfest room. Totem pole in the Royal Ontario Museum. The backstory for this one was trippy. The distillery district.

Behind the scenes with the developers

Photo by attente.


I had the privilege of sitting in on the GTK+ hackfest in Toronto last week, getting re-energized for my day job by hanging out with developers from Canonical, Collabora, Endless and Red Hat. Toronto is a fabulous city for a hackfest, and Red Hat provided a great workspace.

While there, I reviewed some user help and updated some of the Settings pages. The Transformer makes a good hackfest computer, lightweight enough for a great deal of walking, and comfortable to use when paired with the right keyboard. Remarkably, it has sufficient resources for running Continuous-in-a-Box.

There’s nothing quite as dramatic as a GNOME controversy at its epicentre. The decision-making process is as open and visible in person as it is on IRC. There is no behind the scenes.

Calendar Updates

Time of Coding!

Updates on my project as a Google Summer of Code intern. As you all know, I’m working on GNOME Calendar to build the much-requested week view. Here is the proposed mockup:

mockup

The week view already had a good amount of code written for it, so the basic files already existed. My first task was to activate the existing week view which, after a few changes in the main application file, resulted in:

old

Which is pretty far from the mock up.

All of the previously written code for the week view was in a single class, but in my proposal we divided it into multiple classes:

  • WeekHeader, for displaying the weekday names, dates, and the labels for the month, week number(s) and year.
  • WeekGrid, to display the time periods of the 7 days of the week.

Events will go to one class or the other based on their duration.

The initial task was a major clean-up of the previously written week-view code, which was pretty difficult given that the code was two years old and a LOT of things have changed since. The code was not a sight one would like to see.

With the clean-up done, it was time to set up new classes and templates.

So here’s how you do it for Calendar:

  1. Make the .ui, .c and header files.
  2. To make Calendar use the .ui file:
    1. Save the .ui file in the correct place, data/ui/
    2. Compile it into the binary (so that no one messes with it) by listing it in data/calendar.gresource.xml
    3. Include the file (or its alias, if you’ve set one) in data/Makefile.am
  3. To make Calendar use the .c and .h files:
    1. Just include the files in src/Makefile.am

From the week view onwards, we will be using a separate folder for the views, src/views/. So how do we include headers from files inside a subfolder? Simple enough once you know it: add -I$(srcdir)/views to the CPPFLAGS in the Makefile.

And there you have it: newly introduced classes, ready to use for coding and testing.
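
To illustrate the pattern, here is a minimal hypothetical sketch (not the actual Calendar code; the type and resource names are made up) of how a new class binds to its .ui template once the resource has been compiled in:

/* gcal-demo-header.c — hypothetical example, not the real WeekHeader */
#include <gtk/gtk.h>

#define GCAL_TYPE_DEMO_HEADER (gcal_demo_header_get_type ())
G_DECLARE_FINAL_TYPE (GcalDemoHeader, gcal_demo_header, GCAL, DEMO_HEADER, GtkBox)

struct _GcalDemoHeader
{
  GtkBox parent_instance;
};

G_DEFINE_TYPE (GcalDemoHeader, gcal_demo_header, GTK_TYPE_BOX)

static void
gcal_demo_header_class_init (GcalDemoHeaderClass *klass)
{
  GtkWidgetClass *widget_class = GTK_WIDGET_CLASS (klass);

  /* the resource path must match the prefix and file declared in
   * data/calendar.gresource.xml */
  gtk_widget_class_set_template_from_resource (widget_class,
                                               "/org/gnome/calendar/demo-header.ui");
}

static void
gcal_demo_header_init (GcalDemoHeader *self)
{
  gtk_widget_init_template (GTK_WIDGET (self));
}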

Next blog about the WeekHeader.

P.S. My hackergotchi is here 😀 Thanks a lot, danielgc, for making it :)


Spreadsheet Function Semantics

Anyone who has spent time with Excel spreadsheets knows that Excel has a number of really strange behaviours. I am having a look at criteria functions.

Criteria functions come in two flavours: DCOUNT/DSUM/etc and COUNTIF/SUMIF/etc. The former lets you treat a range as a kind of database from which you can filter rows based on whatever criteria you have in mind and then compute some aggregate function on the filtered rows. For example, compute the average of the “Age” column for those records where the “State” column is either Maine or Texas. The COUNTIF group is a much older set of functions that do more or less the same thing, but restricted to a single column. For example, count all positive entries in a range.

In either case, criteria are in play. 12, “>0”, “<=12.5”, “=Oink”, and “Foo*bar” are examples. The quotes here denote strings. This is already messed up. A syntax like “>0” is fine because the value is an integer. It is fine for a string too. However, the syntax is really crazy when the value is a floating-point number, a boolean or a date because now you just introduced a locale dependency for no good reason — mail the spreadsheet to Germany and get different results. Bravo. And for floating-point there is the question of whether precision was lost in turning the number into a string and back.

Excel being Excel there are, of course, special cases. “=” does not mean to look for empty strings. Instead it means to look for blank cells. And strings that can be parsed as numbers, dates, booleans, or whatever are equivalent to searching for such values. These are all just examples of run-of-the-mill Excel weirdness.

The thing that really makes me suspect that Excel designers were under the influence of potent psycho-active substances is that, for no good reason, pattern matching criteria like “foo*bar” mean something different for the two flavours of functions. For the “D” functions it means /^foo.*bar/ in grep terms, whereas for the “if” functions it means /^foo.*bar$/. Was that really necessary?
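
A concrete, hypothetical example of the difference: take a cell containing the text foodbar2000 and the criterion “foo*bar”. For the “D” functions the pattern is anchored only at the start (/^foo.*bar/), so the cell matches despite the trailing 2000; for SUMIF and friends it is anchored at both ends (/^foo.*bar$/), so the very same cell does not match.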

The thing is that there really is no good alternative to implementing the weird behaviour in any spreadsheet program that has similarly named functions. People have come to rely on the details, and changing the semantics would just mean 3 or 4 sets of arbitrary rules instead of 2. That is not progress.

I noticed this while writing tests for Gnumeric. We now pass those tests, although I suspect there are more problems waiting there as I extend the test file. I do not know if LibreOffice has the intent of matching Excel with respect to these functions but, for the record, it does not. In fact, it fails in a handful of different ways: anchoring for “D” functions, strictness for DCOUNT, wildcards in general, and the array formula used in my sheet to count failures. (As well as anything having to do with booleans which localc does not support.)

June 21, 2016

I've bought some more awful IoT stuff

I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I've bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I'd oblige.

Today we're going to be talking about the KanKun SP3, a plug that's been around for a while. The idea here is pretty simple - there's lots of devices that you'd like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else's home.

The KanKun has all of these features and a bunch more, although when I say "features" I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn't work. I connected to the plug's network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn't created. Apparently this isn't permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn't work, but that's because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it's running. I didn't really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password ("p9z34c") and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.

It turns out that there's a whole community of people playing with these plugs, and it's common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that's a great question and oh good lord do things start getting bad quickly at this point.

I'd grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that's surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn't find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That plus the fact that every payload length was a multiple of 16 bytes strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and encrypted with the same key. The same plaintext will always result in the same encrypted output. This implied that the packets were substantially similar and that the encryption key was static.
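
If you want to see the ECB property for yourself, here is a small stand-alone illustration (my own sketch using OpenSSL's EVP API with a made-up key — nothing to do with the plug's actual firmware):

/* ECB demo: identical plaintext blocks encrypt to identical ciphertext blocks.
 * Build with: cc ecb-demo.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char key[16] = "0123456789abcdef";                        /* made-up key */
    unsigned char plaintext[32] = "SAME BLOCK 16B!!SAME BLOCK 16B!!";  /* two identical 16-byte blocks */
    unsigned char ciphertext[48];
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
    EVP_CIPHER_CTX_set_padding(ctx, 0);            /* keep output block-aligned */
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, 32);
    total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + total, &len);
    EVP_CIPHER_CTX_free(ctx);

    /* With ECB, bytes 0-15 and 16-31 of the ciphertext are identical —
     * exactly the repeating pattern visible in the sniffed packets. */
    printf("blocks match: %s\n",
           memcmp(ciphertext, ciphertext + 16, 16) == 0 ? "yes" : "no");
    return 0;
}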

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ascii and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device's IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB - since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn't have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.
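
To see why that matters, here is a trivial sketch (mine, not the plug's firmware): without a call to srand(), the C library must behave as if srand(1) had been called, so every run — and therefore every reboot — produces exactly the same "random" sequence.

/* Prints the same numbers on every run, because srand() is never called. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    for (int i = 0; i < 4; i++)
        printf("challenge %d: %d\n", i, rand());
    return 0;
}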

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started "wan" rather than "lan". The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That's not really a great deal of authentication. The protocol permits a password, but the app doesn't insist on it - some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn't take that long and would tell you how many of these devices are out there. If they're using the default password, that's enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution - the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn't seem to be true of the daemon that's listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that's a thing. It also downloads firmware updates over http and doesn't appear to check signatures on them, so there's the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it's in China. Sorry, Western Australia.

It's running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn't give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I've wondered is whether it's not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren't restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There's no rate-limiting on the server, so a weak password will be broken pretty quickly. It's also infringing my copyright, so I'd recommend against it on that point alone.


AAA game, indie game, card-board-box

Early bird gets eaten by the Nyarlathotep
 
The more adventurous of you can use those (designed as embeddable) Lua scripts to transform your DRM-free GOG.com downloads into Flatpaks.

The long-term goal would obviously be for this not to be needed, and for online games stores to ship ".flatpak" files, with metadata so that GNOME Software knows what they are, which automatically pick up the right voice/subtitle language, and present their extra music and documents in the respective GNOME applications.
 
But in the meanwhile, and for the sake of the games already out there, there's flatpak-games. Note that lua-archive is still fiddly.
 
Support for a few more Humble Bundle formats (some are already supported), grab-all RPMs and Debs, and those old Loki games is also planned.
 
It's late here, I'll be off to do some testing I think :)

PS: Even though I have enough programs that would fail to create bundles in my personal collection to accept "game donations", I'm still looking for original copies of Loki games. Drop me a message if you can spare one!

Fedora Workstation 24 is out and Flatpak is now officially launched!

This is a very exciting day for me as two major projects I am deeply involved with are having a major launch. First of all, Fedora Workstation 24 is out, which crosses a few critical milestones for us. Maybe most visible is that this is the first time you can use the new graphical update mechanism in GNOME Software to take you from Fedora Workstation 23 to Fedora Workstation 24. This means that when you open GNOME Software it will show you an option to do a system upgrade to Fedora Workstation 24. We've been testing and doing a lot of QA work around this feature, so my expectation is that it will provide a smooth upgrade experience for you.
Fedora System Upgrade

The second major milestone is that we feel Wayland is now in a state where the vast majority of users should be able to use it on a day-to-day basis. We've been working through the kinks and resolving many corner cases over the previous six months, with a lot of effort put into making sure that the interaction between applications running natively on Wayland and those running under XWayland is smooth. For instance, one item we crossed off the list early in this development cycle was adding middle-mouse-button cut and paste, as we know that was a crucial feature for many long-time Linux users looking to make the switch. So once you have updated, I ask all of you to try switching to the Wayland session by clicking on the little cogwheel in the login screen, so that we get as much testing as possible of Wayland during the Fedora Workstation 24 lifespan. Feedback provided by our users during the Fedora Workstation 24 lifecycle will be crucial information for making the final decision about Wayland as the default for Fedora Workstation 25. Of course the team will be working ardently during Fedora Workstation 24 to make sure we find and address any niggling issues left.

In addition to that there is also of course a long list of usability improvements, new features and bugfixes across the desktop, both coming in from our desktop team at Red Hat and from the GNOME community in general.

There was also the formal announcement of Flatpak today (be sure to read that press release), which is the great new technology for shipping desktop applications. Those of you who have read my previous blog entries have probably seen me talking about this technology under its old name, xdg-app. Flatpak is an incredible piece of engineering designed by Alexander Larsson, which we developed alongside a lot of other components.
As Matthew Garrett pointed out not long ago, unless we move away from X11 we cannot really produce a secure desktop container technology, which is why we kept such a high focus on pushing Wayland forward over the last year. It is also why we invested so much time into Pinos which, as I mentioned in my original announcement of the project, is our video equivalent of PulseAudio (and yes, a proper Pinos website is getting close :). Wim Taymans, who created Pinos, has also been working on patches to PulseAudio to make it more suitable for use with sandboxed applications, and those patches have recently been taken over by community member Ahmed S. Darwish, who is trying to get them ready for merging into the main codebase.

We are feeling very confident about Flatpak as it has a lot of critical features designed in from the start. First of all, it was built to be a cross-distribution solution from day one, meaning that making Flatpak run on any major Linux distribution out there should be simple. We already have Simon McVittie working on Debian support, we have Arch support, and there is also an Ubuntu PPA that the team put together that allows you to run Flatpaks fully featured on Ubuntu. And Endless Mobile has chosen Flatpak as their application delivery format going forward for their operating system.

We use the same base technologies as Docker, like namespaces, bind mounts and cgroups, for Flatpak, which means that any system out there wanting to support Docker images would also have the necessary components to support Flatpaks. This also means we will be able to take advantage of the investment and development happening around server-side containers.

Flatpak also makes heavy use of another exciting technology, OSTree, which was originally developed by Colin Walters for GNOME. This technology is seeing a lot of investment and development these days as it became the foundation for Project Atomic, which is Red Hat's effort to create an enterprise-ready platform for running server-side containers. OSTree provides us with a lot of important features, like efficient storage of application images and a very efficient transport mechanism. For example, one core feature OSTree brings us is de-duplication of files, which means you don't need to keep multiple copies of the same file on your disk: if ten Flatpak images share the same file, you only keep one copy of it on your local disk.

Another critical feature of Flatpak is its runtime separation, which basically means that you can have different runtimes for different families of use cases. For instance, you can have a GNOME runtime that allows all your GNOME applications to share a lot of libraries while giving you a single point for security updates to those libraries. So while we don't want a forest of runtimes, it does allow us to create a few important ones to cover certain families of applications and thus reduce disk usage further and improve system security.

Going forward we are looking at a lot of exciting features for Flatpak. The most important of these is the thing I mentioned earlier: Portals.
In the current release of Flatpak you can choose between two options: either make your application completely sandboxed, or not sandbox it at all. Portals are the way you can sandbox your application yet still allow it to interact with your general desktop and storage. For instance, Pinos' and PulseAudio's role for containers is to provide such portals for handling audio and video. Of course more portals are needed, and during the GTK+ hackfest in Toronto last week a lot of time was spent on mapping out the roadmap for Portals. Expect more news about Portals as they are developed going forward.

I want to mention that we of course realize that a new technology like Flatpak should come with a high-quality developer story, which is why Christian Hergert has been spending time working on support for Flatpak in the Builder IDE. There is some support in already, but expect to see us flesh this out significantly over the next months. We are also working on adding more documentation to the Flatpak website, covering how to integrate more build systems and the like with Flatpak.

And last, but not least, Richard Hughes has been making sure we have great Flatpak support in GNOME Software in Fedora Workstation 24, ensuring that as an end user you shouldn't have to care about whether your application is a Flatpak or an RPM.

Preparing my Chikiticluster in Frankfurt for my presentation

I am excited that I will give a poster presentation about my experiences with HPC at #ISC16. I was selected to do it as part of Women in HPC :)


Set up the master SD card:

First I downloaded the Raspbian Jessie image (2016-05-27) from the Raspbian page again, and then copied the image to the 32 GB SD card.

Before inserting the SD card into your laptop, run df -h; then insert the card, run it again, and check what the device is called. In this case we have: /dev/mmcblk0p1

Screenshot from 2016-06-21 03-32-11

Now you can unmount the device. Note that the trailing 1 only refers to partition 1, so to cover all partitions we run umount /dev/mmcblk0*

Find the path where you downloaded the image, unzip it with unzip [file.zip] -d [path_to_unzip], and make sure you are allowed to run the following command:

dd bs=4M if=2016-05-27-raspbian-jessie.img of=/dev/mmcblk0
958+1 records in
958+1 records out
4019191808 bytes (4.0 GB) copied, 425.597 s, 9.4 MB/s
  • It takes a good few minutes, so you must wait and be patient :)

Edit the config.txt as follows:

Screenshot from 2016-06-21 08-15-22

… Then the configuration of the Raspberry Pi follows, as I wrote in my previous post.

I do not want to miss the opportunity to give special thanks to my GNOME friends Tobi and Moira for the excellent hospitality I received, and for the moral and material support they gave me to achieve my dreams :3




June 20, 2016

Wrapping up scenario tasks

Great work from Renata, Diana and Ciarrai on scenario tasks this week! Scenario tasks are an important part of any formal usability test. A well written scenario task sets a brief context, then asks the tester to do something specific. The goal of the scenario task should be clear enough that the tester will know when they have completed the task.

There's definitely an "art" to creating good scenario tasks. As Renata, Diana and Ciarrai wrote in their blogs this week, you need to be careful not to (accidentally) provide hints for how to complete the task. You need to balance enough information to make the scenario task clear, but not so much that the tester can only complete the task in one particular way. There are often multiple ways to do things in software. Let your testers find the way that works for them.
Diana wrote: "Phrase your scenario tasks without uncommon or unique words that appear on the screen. If you do, you turn the tasks into a simple game of word-finding."

Ciarrai wrote: "Language that doesn’t just mimic that which is used in the program will help avoid leading the user in performing the tasks. Testing whether the user can achieve the goal without direct word association can make a usability test more fruitful."

Renata wrote: "Avoid statements that may end up giving too much information. Instead of saying click "this", you should let the participant finish the task in order to see whether that feature is intuitive. Also, do not force the user to execute the task in a certain way; there are many ways to accomplish a task, so let users choose their own way to use the interface."

In their research, Renata, Diana and Ciarrai also talked about how scenario tasks need to be realistic. There's no point in asking a tester to do something that a real person wouldn't do. That's why it was important to understand scenarios before we researched scenario tasks. Understanding how and why a real person would use the software to do real tasks is invaluable in creating realistic scenario tasks.

And scenario tasks need to be written using the language that your testers would normally use. Avoid using very technical words if your users wouldn't be technical. You might use technical words and phrases if you were building a usability test for a programmer's IDE and Debugger, but you wouldn't use technical words and phrases for a general desktop environment like GNOME. It's all about finding the right balance and "voice" in your scenario tasks.

If you haven't read their blog posts this week, you should read Diana's post for the Dilbert cartoon she included. This one reminds me of a quote from The Hitchhiker’s Guide to the Galaxy (radio program, TV mini-series, then books). One of the characters says this after stealing a spaceship that has a rather monochromatic color scheme:
‘It’s the wild colour scheme that freaks me out,’ said Zaphod, whose love affair with the ship had lasted almost three minutes into the flight. ‘Every time you try and operate these weird black controls that are labeled in black on a black background, a little black light lights up in black to let you know you’ve done it.’
~Zaphod Beeblebrox, The Restaurant at the End of the Universe

While the black-on-black color scheme might look impressive, the interface will be invisible in normal light (from Dilbert).

This week wraps up our "research" phase for the usability testing project. We will now transition to applying our new knowledge to an actual usability test. I'll talk more about that in my next update, probably in a few days.
image: Outreachy

GTK Hackfest 2016

I'm back from the GTK hackfest in Toronto, Canada and mostly recovered from jetlag, so it's time to write up my notes on what we discussed there.

Despite the hackfest's title, I was mainly there to talk about non-GUI parts of the stack, and technologies that fit more closely in what could be seen as the freedesktop.org platform than they do in GNOME. In particular, I'm interested in Flatpak as a way to deploy self-contained "apps" in a freedesktop-based, sandboxed runtime environment layered over the Universal Operating System and its many derivatives, with both binary and source compatibility with other GNU/Linux distributions.

I'm mainly only writing about discussions I was directly involved in: lots of what sounded like good discussion about the actual graphics toolkit went over my head completely :-) More notes, mostly from Matthias Clasen, are available on the GNOME wiki.

In no particular order:

Thinking with portals

We spent some time discussing Flatpak's portals, mostly on Tuesday. These are the components that expose a subset of desktop functionality as D-Bus services that can be used by contained applications: they are part of the security boundary between a contained app and the rest of the desktop session. Android's intents are a similar concept seen elsewhere. While the portals are primarily designed for Flatpak, there's no real reason why they couldn't be used by other app-containment solutions such as Canonical's Snap.

One major topic of discussion was their overall design and layout. Most portals will consist of a UX-independent part in Flatpak itself, together with a UX-specific implementation of any user interaction the portal needs. For example, the portal for file selection has a D-Bus service in Flatpak, which interacts with some UX-specific service that will pop up a standard UX-specific "Open" dialog — for GNOME and probably other GTK environments, that dialog is in (a branch of) GTK.

A design principle that was reiterated in this discussion is that the UX-independent part should do as much as possible, with the UX-specific part only carrying out the user interactions that need to comply with a particular UX design (in the GTK case, GNOME's design). This minimizes the amount of work that needs to be redone for other desktop or embedded environments, while still ensuring that the other environments can have their chosen UX design. In particular, it's important that, as much as possible, the security- and performance-sensitive work (such as data transport and authentication) is shared between all environments.

The aim is for portals to get the user's permission to carry out actions, while keeping it as implicit as possible, avoiding an "are you sure?" step where feasible. For example, if an application asks to open a file, the user's permission is implicitly given by them selecting the file in the file-chooser dialog and pressing OK: if they do not want this application to open a file at all, they can deny permission by cancelling. Similarly, if an application asks to stream webcam data, the UX we expect is for GNOME's Cheese app (or a similar non-GNOME app) to appear, open the webcam to provide a preview window so they can see what they are about to send, but not actually start sending the stream to the requesting app until the user has pressed a "Start" button. When defining the API "contracts" to be provided by applications in that situation, we will need to be clear about whether the provider is expected to obtain confirmation like this: in most cases I would anticipate that it is.

One security trade-off here is that we have to have a small amount of trust in the providing app. For example, continuing the example of Cheese as a webcam provider, Cheese could (and perhaps should) be a contained app itself, whether via something like Flatpak, an LSM like AppArmor or both. If Cheese is compromised somehow, then whenever it is running, it would be technically possible for it to open the webcam, stream video and send it to a hostile third-party application. We concluded that this is an acceptable trade-off: each application needs to be trusted with the privileges that it needs to do its job, and we should not put up barriers that are easy to circumvent or otherwise serve no purpose.

The main (and so far only?) portal is the file chooser, in which the contained application asks the wider system to show an "Open..." dialog, and if the user selects a file, it is returned to the contained application through a FUSE filesystem, the document portal. The reference implementation of the UX for this is in GTK, and is basically a GtkFileChooserDialog. The intention is that other environments such as KDE will substitute their own equivalent.
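
To give a feel for the mechanics, here is a rough sketch of the D-Bus side of such a request from a contained app. It is a hypothetical illustration rather than code from Flatpak, and the interface, method and argument layout (org.freedesktop.portal.FileChooser.OpenFile returning a request handle whose Response signal later carries the chosen URIs) should be treated as assumptions based on the design discussed at the hackfest:

/* Hypothetical sketch: a contained app asks the file chooser portal to show
 * an "Open" dialog over the session bus. */
#include <gio/gio.h>

static void
request_open_file (void)
{
  GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, NULL);
  GError *error = NULL;
  GVariant *reply;

  /* The call returns a request handle; the selected URIs would arrive later
   * in a Response signal on that handle, once the user has picked a file. */
  reply = g_dbus_connection_call_sync (bus,
                                       "org.freedesktop.portal.Desktop",
                                       "/org/freedesktop/portal/desktop",
                                       "org.freedesktop.portal.FileChooser",
                                       "OpenFile",
                                       g_variant_new ("(ssa{sv})",
                                                      "",      /* parent window */
                                                      "Open",  /* dialog title */
                                                      NULL),   /* no extra options */
                                       G_VARIANT_TYPE ("(o)"),
                                       G_DBUS_CALL_FLAGS_NONE,
                                       -1, NULL, &error);
  if (reply != NULL)
    g_variant_unref (reply);
  else
    g_clear_error (&error);
  g_object_unref (bus);
}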

Other planned portals include:

  • image capture (scanner/camera)
  • opening a specified URI
    • this needs design feedback on how it should work for non-http(s)
  • sharing content, for example on social networks (like Android's Sharing menu)
  • proxying joystick/gamepad input (perhaps via Wayland or FUSE, or perhaps by modifying libraries like SDL with a new input source)
  • network proxies (GProxyResolver) and availability (GNetworkMonitor)
  • contacts/address book, probably vCard-based
  • notifications, probably based on freedesktop.org Notifications
  • video streaming (perhaps using Pinos, analogous to PulseAudio but for video)

Environment variables

GNOME on Wayland currently has a problem with environment variables: there are some traditional ways to set environment variables for X11 sessions or login shells using shell script fragments (/etc/X11/Xsession.d, /etc/X11/xinit/xinitrc.d, /etc/profile.d), but these do not apply to Wayland, or to noninteractive login environments like cron and systemd --user. We are also keen to avoid requiring a Turing-complete shell language during session startup, because it's difficult to reason about and potentially rather inefficient.

Some uses of environment variables can be dismissed as unnecessary or even unwanted, similar to the statement in Debian Policy §9.9: "A program must not depend on environment variables to get reasonable defaults." However, there are two common situations where environment variables can be necessary for proper OS integration: search-paths like $PATH, $XDG_DATA_DIRS and $PYTHONPATH (particularly necessary for things like Flatpak), and optionally-loaded modules like $GTK_MODULES and $QT_ACCESSIBILITY where a package influences the configuration of another package.

There is a stopgap solution in GNOME's gdm display manager, /usr/share/gdm/env.d, but this is gdm-specific and insufficiently expressive to provide the functionality needed by Flatpak: "set XDG_DATA_DIRS to its specified default value if unset, then add a couple of extra paths".

pam_env comes closer — PAM is run at every transition from "no user logged in" to "user can execute arbitrary code as themselves" — but it doesn't support .d fragments, which are required if we want distribution packages to be able to extend search paths. pam_env also turns off per-user configuration by default, citing security concerns.

I'll write more about this when I have a concrete proposal for how to solve it. I think the best solution is probably a PAM module similar to pam_env but supporting .d directories, either by modifying pam_env directly or out-of-tree, combined with clarifying what the security concerns for per-user configuration are and how they can be avoided.

Relocatable binary packages

On Windows and OS X, various GLib APIs automatically discover where the application binary is located and use search paths relative to that; for example, if C:\myprefix\bin\app.exe is running, GLib might put C:\myprefix\share into the result of g_get_system_data_dirs(), so that the application can ask to load app/data.xml from the data directories and get C:\myprefix\share\app\data.xml. We would like to be able to do the same on Linux, for example so that the apps in a Flatpak or Snap package can be constructed from RPM or dpkg packages without needing to be recompiled for a different --prefix, and so that other third-party software packages like the games on Steam and gog.com can easily locate their own resources.
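
To make the goal concrete, here is a small sketch (hypothetical application and file names) of the prefix-independent lookup an application already does with GLib; relocation support would only change what g_get_system_data_dirs() returns, not the application code:

/* Look up "app/data.xml" in whatever data directories GLib reports. */
#include <glib.h>

static gchar *
find_app_data (void)
{
  const gchar * const *dirs = g_get_system_data_dirs ();

  for (gsize i = 0; dirs[i] != NULL; i++)
    {
      gchar *path = g_build_filename (dirs[i], "app", "data.xml", NULL);

      /* e.g. C:\myprefix\share\app\data.xml on Windows today */
      if (g_file_test (path, G_FILE_TEST_EXISTS))
        return path;

      g_free (path);
    }

  return NULL;
}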

Relatedly, there are currently no well-defined semantics for what happens when a .desktop file or a D-Bus .service file has Exec=./bin/foo. The meaning of Exec=foo is well-defined (it searches $PATH) and the meaning of Exec=/opt/whatever/bin/foo is obvious. When this came up in D-Bus previously, my assertion was that the meaning should be the same as in .desktop files, whatever that is.

We agreed to propose that the meaning of a non-absolute path in a .desktop or .service file should be interpreted relative to the directory where the .desktop or .service file was found: for example, if /opt/whatever/share/applications/foo.desktop says Exec=../../bin/foo, then /opt/whatever/bin/foo would be the right thing to execute. While preparing a mail to the freedesktop and D-Bus mailing lists proposing this, I found that I had proposed the same thing almost 2 years ago... this time I hope I can actually make it happen!

Flatpak and OSTree bug fixing

On the way to the hackfest, and while the discussion moved to topics that I didn't have useful input on, I spent some time fixing up the Debian packaging for Flatpak and its dependencies. In particular, I did my first upload as a co-maintainer of bubblewrap, uploaded ostree to unstable (with the known limitation that the grub, dracut and systemd integration is missing for now since I haven't been able to test it yet), got most of the way through packaging Flatpak 0.6.5 (which I'll upload soon), cherry-picked the right patches to make ostree compile on Debian 8 in an effort to make backports trivial, and spent some time disentangling a flatpak test failure which was breaking the Debian package's installed-tests. I'm still looking into ostree test failures on little-endian MIPS, which I was able to reproduce on a Debian porterbox just before the end of the hackfest.

OSTree + Debian

I also had some useful conversations with developers from Endless, who recently opened up a version of their OSTree build scripts for public access. Hopefully that information brings me a bit closer to being able to publish a walkthrough for how to deploy a simple Debian derivative using OSTree (help with that is very welcome of course!).

GTK life-cycle and versioning

The life-cycle of GTK releases has already been mentioned here and elsewhere, and there are some interesting responses in the comments on my earlier blog post.

It's important to note that what we discussed at the hackfest is only a proposal: a hackfest discussion between a subset of the GTK maintainers and a small number of other GTK users (I am in the latter category) doesn't, and shouldn't, set policy for all of GTK or for all of GNOME. I believe the intention is that the GTK maintainers will discuss the proposals further at GUADEC, and make a decision after that.

As I said before, I hope that being more realistic about API and ABI guarantees can avoid GTK going too far towards either of the possible extremes: either becoming unable to advance because it's too constrained by compatibility, or breaking applications because it isn't constrained enough. The current situation, where it is meant to be compatible within the GTK 3 branch but in practice applications still sometimes break, doesn't seem ideal for anyone, and I hope we can do better in future.

Acknowledgements

Thanks to everyone involved, particularly:

  • Matthias Clasen, who organised the hackfest and took a lot of notes
  • Allison Lortie, who provided on-site cat-herding and led us to some excellent restaurants
  • Red Hat Inc., who provided the venue (a conference room in their Toronto office), snacks, a lot of coffee, and several participants
  • my employers Collabora Ltd., who sponsored my travel and accommodation

Scenario tasks

A scenario task is a set of steps that the participant needs to perform to accomplish a goal. Once you’ve figured out what tasks you want to test, you’ll need to formulate some realistic task scenarios for participants to accomplish. In short, a task scenario is the action that you ask the participant to take by providing the necessary details to accomplish that task.

A scenario task is given to the participant during the usability test, and it should represent what real people actually do with the software. Scenario tasks are the best way to bring usability issues to light, since you can really see how the participant tries to finish the task you have given them. These tasks should be created carefully, because their formulation determines the reliability and validity of the overall usability test.

 
Before I applied for the Outreachy internship I had to present a “first contribution”, in my case a usability test with a few testers and ten scenario tasks. This test served as a glimpse of how the final usability test results would look, and also gave me more information on what I should and should not do next time when creating scenario tasks and performing them. I managed to summarize three short tips on how to create better scenario tasks:

Do not give clues
Avoid statements that may end up giving too much information. Instead of saying click "this", you should let the participant finish the task in order to see whether that feature is intuitive. Also, do not force the user to execute the task in a certain way; there are many ways to accomplish a task, so let users choose their own way to use the interface.

Use the user’s language
Try to adapt to the participants’ way of speaking, based on their backgrounds and experience with the software. Using terms they do not understand well leads to confusion and makes them feel uncomfortable.

Keep the scenarios short
The biggest challenge during the test was keeping the testers focused on their tasks. You want to keep the scenarios short, with just enough information to keep testers interested without going overboard with details.

 
In the upcoming weeks you will be able to see these tips “in action”, since we will create more scenario tasks for GNOME applications.


Maple syrup

Last week I attended the GTK+ hackfest in Toronto. We had a really good group of people for the event, which lasted 4 days in total, and felt really productive.


There were a number of interesting discussion and planning sessions, from a design point of view, including a session on Flatpak “portals” and another on responsive design patterns.


I also got to spend a bunch of time talking about developer documentation with Christian and the two Philips and did a bunch of work on a series of designs for a web front end for Hotdoc (work on these is ongoing).


Finally, after the *ahem* online discussion around GTK+ versioning and lifecycles, I helped to document what’s being proposed in order to clear up any confusion. And I drew some diagrams, because diagrams are important.


Thanks to Matthias for organising the event, Red Hat Toronto for hosting, and Allison Lortie for being a great guide!

Tracking the reference count of a GstMiniObject using gdb

As part of my work at Collabora, I'm currently adding Valgrind support to the awesome gst-validate tool. The ultimate goal is to run our hundreds of GStreamer tests inside Valgrind as part of the existing QA infrastructure to automatically track memory related regressions (invalid reads, leaks, etc).

Most of the gst-validate changes have already landed and can very easily be used by passing the --valgrind argument to gst-validate-launcher. I'm now focusing on making sure most of our existing tests pass under Valgrind, which means doing quite a lot of memory-leak debugging (everyone loves doing that, right?).

A lot of GStreamer types are based on GstMiniObject instead of the usual GObject. It makes a lot of sense from a performance pov but can make tracking ref count issues harder as we can't rely on tools such as RefDbg or gobject-list.

I was tracking a GstMiniObject leak today and was looking for a way to get a trace each time its reference count is modified. We can use GST_DEBUG="GST_REFCOUNTING:9" to get logs each time the object is reffed/unreffed, but I was actually interested in the full stack trace. With the help of the French gang (kudos to Dodji, Bastien and Christophe!) I managed to do so using good old gdb.

The first thing is to break when the object you want to track is created; you can do this either by using b mysource:line in gdb, or by adding a G_BREAKPOINT() in your source code. Start your app with gdb as usual, then use:

set logging on
set pagination off

The output can be pretty long, so this ensures that logs are saved to a file (gdb.txt by default) and that gdb won't bother asking you for confirmation before printing output. Now start your app (run) and, once it has paused at the breakpoint, use the following command:

watch -location ((GstMiniObject*)caps)->refcount

caps is the name of the instance of the object I want to track, as defined in the scope where I installed my breakpoint; update it to match yours. This command adds a watchpoint on the refcount of the object, which means gdb will now stop each time its value is modified. The -location option ensures that gdb watches the memory associated with the expression; without it, the watchpoint would be limited to the local scope of the variable.

Now we want to display a backtrace each time gdb pauses when this watchpoint is hit. This is done using commands:

commands
bt
continue
end

All the gdb instructions between commands and end will be automatically executed each time gdb pauses because of the watchpoint we just defined. In our case we first want to display a stack trace (bt) and then continue the execution of the program.

We are now all set, we just have to ask gdb to resume the normal execution of the program we are debugging:

continue

This should generate a gdb.txt log file containing something like:

Old value = 1
New value = 2
gst_mini_object_ref (mini_object=0x7ffff001c8f0) at gstminiobject.c:362
362	  return mini_object;
#0  0x00007ffff6f384ed in gst_mini_object_ref (mini_object=0x7ffff001c8f0) at gstminiobject.c:362
#1  0x00007ffff6f38b00 in gst_mini_object_replace (olddata=0x7ffff67f0c58, newdata=0x7ffff001c8f0) at gstminiobject.c:501
#2  0x00007ffff72573ed in gst_caps_replace (old_caps=0x7ffff67f0c58, new_caps=0x7ffff001c8f0) at ../../../gst/gstcaps.h:312
#3  0x00007ffff72578fa in helper_find_suggest (data=0x7ffff67f0c30, probability=GST_TYPE_FIND_MAXIMUM, caps=0x7ffff001c8f0) at gsttypefindhelper.c:230
#4  0x00007ffff6f7d606 in gst_type_find_suggest_simple (find=0x7ffff67f0bf0, probability=100, media_type=0x7ffff5de8a66 "video/mpegts", fieldname=0x7ffff5de8a4e "systemstream") at gsttypefind.c:197
#5  0x00007ffff5ddbec3 in mpeg_ts_type_find (tf=0x7ffff67f0bf0, unused=<optimized out>) at gsttypefindfunctions.c:2381
#6  0x00007ffff6f7dbc7 in gst_type_find_factory_call_function (factory=0x6dbbc0 [GstTypeFindFactory], find=0x7ffff67f0bf0) at gsttypefindfactory.c:215
#7  0x00007ffff7257d83 in gst_type_find_helper_get_range (obj=0x7e8280 [GstProxyPad], parent=0x7d0500 [GstGhostPad], func=0x7ffff6f2aa37 <gst_proxy_pad_getrange_default>, size=10420224, extension=0x7ffff00010e0 "MTS", prob=0x7ffff67f0d04) at gsttypefindhelper.c:355
#8  0x00007ffff683fd43 in gst_type_find_element_loop (pad=0x7e47d0 [GstPad]) at gsttypefindelement.c:1064
#9  0x00007ffff6f79895 in gst_task_func (task=0x7ef050 [GstTask]) at gsttask.c:331
#10 0x00007ffff6f7a971 in default_func (tdata=0x61ac70, pool=0x619910 [GstTaskPool]) at gsttaskpool.c:68
#11 0x0000003ebb070d68 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:307
#12 0x0000003ebb0703d5 in g_thread_proxy (data=0x7ceca0) at gthread.c:764
#13 0x0000003eb880752a in start_thread (arg=0x7ffff67f1700) at pthread_create.c:310
#14 0x0000003eb850022d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Hardware watchpoint 1: -location ((GstMiniObject*)caps)->refcount

Old value = 2
New value = 1
0x00007ffff6f388b6 in gst_mini_object_unref (mini_object=0x7ffff001c8f0) at gstminiobject.c:442
442	  if (G_UNLIKELY (g_atomic_int_dec_and_test (&mini_object->refcount))) {
#0  0x00007ffff6f388b6 in gst_mini_object_unref (mini_object=0x7ffff001c8f0) at gstminiobject.c:442
#1  0x00007ffff6f7d027 in gst_caps_unref (caps=0x7ffff001c8f0) at ../gst/gstcaps.h:230
#2  0x00007ffff6f7d615 in gst_type_find_suggest_simple (find=0x7ffff67f0bf0, probability=100, media_type=0x7ffff5de8a66 "video/mpegts", fieldname=0x7ffff5de8a4e "systemstream") at gsttypefind.c:198
#3  0x00007ffff5ddbec3 in mpeg_ts_type_find (tf=0x7ffff67f0bf0, unused=<optimized out>) at gsttypefindfunctions.c:2381
#4  0x00007ffff6f7dbc7 in gst_type_find_factory_call_function (factory=0x6dbbc0 [GstTypeFindFactory], find=0x7ffff67f0bf0) at gsttypefindfactory.c:215
#5  0x00007ffff7257d83 in gst_type_find_helper_get_range (obj=0x7e8280 [GstProxyPad], parent=0x7d0500 [GstGhostPad], func=0x7ffff6f2aa37 <gst_proxy_pad_getrange_default>, size=10420224, extension=0x7ffff00010e0 "MTS", prob=0x7ffff67f0d04) at gsttypefindhelper.c:355
#6  0x00007ffff683fd43 in gst_type_find_element_loop (pad=0x7e47d0 [GstPad]) at gsttypefindelement.c:1064
#7  0x00007ffff6f79895 in gst_task_func (task=0x7ef050 [GstTask]) at gsttask.c:331
#8  0x00007ffff6f7a971 in default_func (tdata=0x61ac70, pool=0x619910 [GstTaskPool]) at gsttaskpool.c:68
#9  0x0000003ebb070d68 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:307
#10 0x0000003ebb0703d5 in g_thread_proxy (data=0x7ceca0) at gthread.c:764
#11 0x0000003eb880752a in start_thread (arg=0x7ffff67f1700) at pthread_create.c:310
#12 0x0000003eb850022d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Hardware watchpoint 1: -location ((GstMiniObject*)caps)->refcount

If you see this kind of error in your log

Error evaluating expression for watchpoint 1 value has been optimized out

this means you may have to rebuild your application and/or its libraries without any optimization. This is pretty easy with autotools:

make clean
CFLAGS="-g3 -ggdb3 -O0" make

Now all you need is to grab a good cup of tea and start digging through this log to find your leak. Good fun!

Edit

Daniel pointed out to me that watchpoints can be pretty unreliable in gdb. So here is another version where we ask gdb to break when our object is reffed/unreffed. You just have to figure out its address using gdb or simply by printing the value of its pointer.

b gst_mini_object_ref if (mini_object == 0xdeadbeef)
b gst_mini_object_unref if (mini_object == 0xdeadbeef)
commands 1 2
bt
cont
end

Kernel hacking workshop

As part of our "community" program at Collabora, I've had the chance to attend a workshop on kernel hacking at UrLab (the ULB hackerspace). I had never touched any part of the kernel and always saw it as a scary thing for hardcore hackers wearing huge beards, so this was a great opportunity to demystify the beast.

We learned about how to create and build modules, interact with userspace using the /sys pseudo-filesystem and some simple tasks with the kernel internal library (memory management, linked lists, etc). The second part was about the old school /dev system and how to implement a character device.

I also discovered lxr.free-electrons.com which is a great tool for browsing and finding your way through a huge code base. It's definitely the kind of tool I'd consider re-using for other projects.

So, a very cool experience. I still don't see myself submitting kernel patches any time soon, but I may consider trying to implement a simple driver or something if I ever need to. Thanks a lot to UrLab for hosting the event, to Collabora for letting me attend and of course to Hastake who did a great job explaining all this and providing fun exercises (I had to reboot only 3 times! But yeah, next time I'll use a VM :) )

Club Mate, kernel hacking and bulgur

June 19, 2016

Progress so far

My exams ended two days ago and I must say that it’s been quite a month. I started working on my GSoC project before my exams and I worked as much as I could before the exams started. Then I pretty much had some very full days, but I still managed to organize my time in such a way that I was able to code in between my exams :)

The first part of my work was called ‘user tracking’ and it involved, well (you guessed it), tracking users, more specifically their status (either online or offline). It’s not as if this functionality hadn’t been present before (it actually was), only there was a case in which it would provide faulty results. So my work actually involved enhancing the user tracking functionality already present in Polari.

What was it all about? Consider the case where the same person is online with multiple clients at the same time: say user ABCD is logged in on both his PC and laptop, and he is on the same IRC channel with both, with nicknames like ‘ABCD_PC’ on his PC and ‘ABCD_LAPTOP’ on his laptop, so the two nicknames are not identical. The tricky part was that if the user disconnected either one of the clients (only one of them, not both), both nicknames would be marked as offline, even though one client remained active. This behavior would persist until the still-active client sent a message, at which point Polari would notice that it was actually active.
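
To make the intended behavior concrete, here is a rough illustrative sketch in Python (purely for illustration; Polari itself is written in JavaScript and its real implementation differs) of tracking presence per nickname, so that one client disconnecting does not affect the other:

# Illustrative sketch only, not Polari's actual code. Each nickname's
# presence is tracked independently, so ABCD_LAPTOP quitting does not
# mark ABCD_PC as offline.
class UserTracker:
    def __init__(self):
        self._channels = {}  # nickname -> set of channels it is present in

    def handle_join(self, nick, channel):
        self._channels.setdefault(nick, set()).add(channel)

    def handle_part_or_quit(self, nick, channel=None):
        if channel is None:                  # quit: gone from everywhere
            self._channels.pop(nick, None)
        else:                                # part: gone from one channel only
            chans = self._channels.get(nick, set())
            chans.discard(channel)
            if not chans:
                self._channels.pop(nick, None)

    def is_online(self, nick):
        return nick in self._channels


tracker = UserTracker()
tracker.handle_join('ABCD_PC', '#gnome')
tracker.handle_join('ABCD_LAPTOP', '#gnome')
tracker.handle_part_or_quit('ABCD_LAPTOP')   # the laptop disconnects
print(tracker.is_online('ABCD_PC'))          # True: the PC is still online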

After userTracking was finished, I started working on the next part, called ‘contextual popups’. This part involves both the UI and the functionality behind it. This is a completely new feature and I’m really excited about how it is beginning to look. Also, it relies a lot on the previous feature, userTracking.

Here’s a picture about the current progress on the popovers:


There are still things to be done in order for it to be complete, and I will focus on those in the following days. Visually speaking, there is only a button left to add (plus the functionality behind it) and I will write a new post as soon as it is ready, so stay tuned!


What is Usability

The purpose of technology is to empower humans; our purpose is to be able to understand it, and usability stands between these two concepts.

Usability testing is not just a collection of different opinions and research; instead it involves watching people trying to use the product to accomplish their tasks, while at the same time measuring the product’s capacity to accomplish its intended purpose. Basically you have the chance to experience live feedback from users. The word “usability” also refers to methods for improving and measuring ease of use during the design process. Usability evaluation focuses on how well users can learn and use a product to achieve their goals. It also refers to how satisfied users are with that process. In short, usability is the relationship between the program and its audiences.

In my opinion usability is a broad concept that is hard to narrow down to one solid statement, so here are some definitions of usability merged together:

According to the ISO 9241 definition, usability means:

The effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments.

Effectiveness: the accuracy and completeness with which specified users can achieve specified goals in particular environments.

Efficiency: the resources expended in relation to the accuracy and completeness of goals achieved.

Satisfaction: the comfort and acceptability of the work system to its users and other people affected by its use.

Another definition:

Intuitive design: a nearly effortless understanding of the architecture and navigation of the site

Ease of learning: how fast a user who has never seen the user interface before can accomplish basic tasks

Efficiency of use: how fast an experienced user can accomplish tasks

Memorability: after visiting the site, whether a user can remember enough to use it effectively in future visits

Error frequency and severity: how often users make errors while using the system, how serious the errors are, and how users recover from the errors

Subjective satisfaction: whether the user likes using the system

In ‘A Practical Guide to Usability Testing’ (rev. ed., 1999), Dumas and Redish use four points in their definition:

1. Usability means focusing on users

2. People use products to be productive

3. Users are busy people trying to accomplish tasks

4. Users decide when a product is easy to use

What usability is not

Out of scope

It is false to think that usability requires knowledge of psychology or visual arts and is therefore outside the scope of what programmers do. It is also false to think that there should be only one person completely in charge of usability testing. Of course there will always be a role for usability specialists, but basic competence in usability engineering should be part of every programmer’s craft.

A finishing touch   

Usability testing should not be treated as a “finishing touch” after the design phase. After conducting a usability test, we can never assume that the software is now usable. Usability is a process that changes over time: what was considered usable some time ago may not be considered usable now. So to ensure good usability, we need to test in an iterative way throughout the design and development process, always involving users.

Spot the differences

Usability vs UX  

UX is the overall experience of a person using a product such as a website or computer application, especially in terms of how easy or pleasing it is to use; it is more about the user’s emotional connection and experience while using the product.

The User Experience Honeycomb

Usability is one of the components that influence the overall user experience. It is the ability to do something intuitively and easily. So, just because something is easy to use does not necessarily mean that it is a good user experience.

Usability vs Utility

Definition of Utility = whether it provides the features you need.

Definition of Usability = how easy & pleasant these features are to use.

Definition of Useful = usability + utility

Usability vs Functionality                                                                                                

Functionality is the ability of an interface or device to perform according to a specifically defined set of parameters. Functional testing has the mindset of “CAN I do what I need to do, does this product work?” whereas usability has the mindset of “HOW can I do what I need to do, does this make sense?”

So, this is a wrap for this week’s post, where the main goal was to build up an understanding of usability testing. Feel free to leave any comments; I’d love to know your opinion and discuss more about usability.

See you next week:)

References:        

http://www.catb.org/esr/writings/taouu/taouu.html

https://www.nngroup.com/articles/usability-101-introduction-to-usability/

http://semanticstudios.com/user_experience_design/


Audio tag editing for GNOME Music

So I got a mail a few days back that read “Congratulations! Your proposal with GNOME has been accepted” for Google Summer of Code.

I would have leapt for joy were it not for a nasty cold. Nonetheless this is a huge opportunity for me to learn and experience real software development with great support from the GNOME devs. The past few months that I’ve spent on the channels and contributing (mostly to gnome-music) have taught me way more about software than I had learnt in the previous one and a half years that I’ve been coding. I did not understand what APIs were and I was not even aware of version control systems. But I’ll cover more of my failures some other day…

Back to the project. It’s what you would guess: a feature to allow users to edit and add metadata to their music files. This is a much desired feature, in my opinion, as it not only lets users organize their music collection better but also lets them get rid of the ‘Unknown’ tags that may riddle one’s library. I wish I were able to add some images of how it would work, but we have yet to finalize a design mockup for the GUI. I’ll make up for this during the development phase with my regular updates on how it goes.

The cool thing about this feature is that it will allow fetching metadata using acoustic fingerprints, a technique which has long prevailed in closed-source software but is fast gaining popularity in open-source projects as well. It is a very neat algorithm for comparing audio data and producing very accurate identifications. You can read more about it here.
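
As a taste of how fingerprint-based lookup works, here is a minimal sketch using the third-party pyacoustid bindings for Chromaprint/AcoustID. This is not GNOME Music code, and the API key and file path below are placeholders:

# Minimal sketch, not GNOME Music code: fingerprint a file with Chromaprint
# and look the fingerprint up in the AcoustID database.
import acoustid

API_KEY = 'your-acoustid-api-key'   # placeholder: register one at acoustid.org

for score, recording_id, title, artist in acoustid.match(API_KEY, 'song.flac'):
    # score is the match confidence (0.0 - 1.0); recording_id is the
    # MusicBrainz recording that the fingerprint matched.
    print('%3.0f%%  %s - %s  (%s)' % (score * 100, artist, title, recording_id))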

The wiki page for my project contains a summary of how I expect it to go as well as the link to the enhancement bug for this feature. I’ll be adding more content on this over the next four months, so stay tuned!


The UX for GNOME Music’s tag editor

It’s the second week of GSoC 2016. The development of a functional UI editor dialog is in progress. The editor at first should allow the user to edit common tags (‘title’, ‘album’, ‘artist’, etc.) for a single song. If done properly it will pave the way to implement automatic tag suggestions and extensions for editing multiple songs (related or not) at once through the dialog.

Here’s a demonstration of how the user would go about from one of the Views in GNOME Music to edit a single song:

Selecting a song

A song first needs to be selected if it is to be edited. This can be done through the Songs view or the AlbumWidget for a particular album. When the user selects a (single) song from one of these places, the actionbar reveals itself, now offering an ‘Edit Details’ button along with one of the playlist management buttons.

Selecting songs from the ‘Songs’ view
Selecting songs from an album

If the user chooses to Edit Details of the selected song, the editor dialog is brought up.

The tag editor dialog

The editor responsible for managing the details of a single song offers the common and relevant data that the average user would care about. The provided fields, namely Title, Album, Artist, Composer, Genre, Track, Disc and Year, are all editable. Apart from this, the media art associated with the song is also displayed. Clicking this cover will open up another dialog that allows file selection (only image types); the selected image will then be used as the cover art for that song, replacing the old one.

The tag editor dialog allows common tags to be edited
Edited entries are written as soon as ‘Enter’ is pressed or when the entry loses focus

Each edit is physically stored in the music file as well as updated in the database. This part is taken care of by the tracker-writeback daemon. Once the user is satisfied with all the changes, he/she can close the dialog.
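
For the curious, the rough idea is that an edit becomes a SPARQL update against the Tracker store, and tracker-writeback then mirrors the change back into the file's own tags. Here is a hedged sketch of that idea in Python, not GNOME Music's actual code; the song URN is hypothetical and would in practice come from a Tracker query:

# Hedged sketch of the idea only; GNOME Music goes through its own helpers.
# The song URN below is made up for illustration.
import gi
gi.require_version('Tracker', '1.0')
from gi.repository import GLib, Tracker

conn = Tracker.SparqlConnection.get(None)
conn.update(
    'INSERT OR REPLACE { <urn:song:example> nie:title "New Title" }',
    GLib.PRIORITY_DEFAULT,
    None)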

Player accommodates to changes

Any changes made to a song, or any music entity, should be reflected in the database as well as the current graphical user interface. Following the close of the editor dialog, the player and the different views adjust their contents to accommodate the changes made. The edited song is inserted into the Songs list and the old item is removed. The song is also added to the container provided for the song’s album in the Albums view.

The new song entry is inserted into the Songs list
The Albums view as well as the album container reflect the changes written

It’s worth noting that the above UI behavior is subject to change, and it quite possibly will if I can find better ways to do things. The manual selection of cover art and an Undo option for immediately edited songs are two other planned features. They’re next in my cross-hairs.

Long road ahead…

There are still a few bugs and defects lurking about that keep the UI from behaving as it ideally should. I’ll be working on them pretty soon, once I implement a functional dialog for manual edits to its full extent (like storing cover art and the undo option, both features I didn’t cover in this post). In fact my ‘bugs-to-solve’ list keeps growing by the day. But I suppose that’s unavoidable when you’re developing software.

Thanks for reading and your thoughts are welcome! :)

 

 


GSoC 2016


Hey!

Firstly, I’d like to say that I’m really excited that I got accepted to Google Summer of Code 2016 and for having the opportunity to work on such an interesting project as Nautilus.

The first time I learned about open-source and Linux was in the first year at University and since then I’ve been a user of open-source software and now I’m really glad to say that I’ve become a contributor. I started contributing in February this year and one thing that I love about GNOME is the community, which has some really helpful and friendly people. The person who helped me the most is my mentor, Carlos Soriano, who offered me his help every time I needed it, for which I thank him a lot!

This is my first real project that I’m working on and I can say that doing something that you know will actually benefit people gives an amazing feeling. The goal of my project is to provide the ability of renaming multiple files in Nautilus easily, using a simple, nice and intuitive UI. Until now, you could do batch renaming only by using a third party tool, but that meant that you had to spend extra time for a simple task. The way I picture this new feature is by offering several modes of renaming like append/prepend, replace or format.
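
To give a rough idea of what those modes mean in practice, here is a small hedged sketch in Python. It is purely illustrative; the real feature operates on files inside Nautilus and has to handle conflicts, undo, and more:

# Purely illustrative sketch of the renaming modes; not Nautilus code.
def batch_rename(names, mode, **opts):
    if mode == 'prepend':
        return [opts['text'] + n for n in names]
    if mode == 'append':
        return [n + opts['text'] for n in names]
    if mode == 'replace':
        return [n.replace(opts['old'], opts['new']) for n in names]
    if mode == 'format':
        # e.g. template 'Holiday {index}.jpg', numbering from 1
        return [opts['template'].format(name=n, index=i + 1)
                for i, n in enumerate(names)]
    raise ValueError('unknown mode: %s' % mode)


print(batch_rename(['IMG_001.jpg', 'IMG_002.jpg'], 'format',
                   template='Holiday {index}.jpg'))
# ['Holiday 1.jpg', 'Holiday 2.jpg']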

Here I’ll add updates about my work, so you can follow this blog to see how my project progresses :)

Alex


June 18, 2016

Thoughts on the Linux Mint X-Apps forks

You may be aware that Linux Mint has forked several GNOME applications, either directly from GNOME (Totem -> Xplayer, Evince -> Xreader, Eye of GNOME -> Xviewer), or indirectly via MATE (gedit -> pluma -> XEd).

GNOME is like the Debian of the Linux desktops: a base that other projects fork. But is this a good thing? In the current state of the code, I don’t think so, and I’ll explain why, with a solution: creating more shared libraries.

At the end of the day, it’s just a matter of design and usability concerns. We can safely say that the main reason behind the forks is that the Linux Mint developers don’t like the new design of GNOME applications with a GtkHeaderBar.

And there are perfectly valid reasons to not like headerbars. For gedit for example, see the list of usability regressions at this wiki page.

Currently the trend is GtkHeaderBar, but what will it be in 5 years, 10 years? Let’s face it, GNOME is here just following the new trend that came with smartphones and tablets.

So, a GNOME application developer needs to know that:

  • A GUI is an ever-changing thing, exactly like the clothes that you bought last year are already obsolete, right?
  • When the GUI changes too much, other developers don’t like it and fork the project. For valid reasons or not, this doesn’t matter.

The four X-Apps forks account for roughly 200k lines of code. In the short-term it works, Linux Mint has apps with a traditional UI. But hey, porting the code to GTK+ 4 will be another beast, because the four X-Apps still use the deprecated GtkUIManager and GtkAction APIs, among other things.

But when we look at the codebase, there is a lot of code that could be shared between a GNOME app and its fork(s). So there is a solution: creating more shared libraries. The shared libraries would contain the backend code, of course, but also some basic blocks for the UI. The application would then just need to glue things together: assembling objects, binding GObject properties to GSettings, creating the main GtkWindow and a few other things.
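
As an example of the kind of glue code meant here, binding a GObject property to a GSettings key is a one-liner. This is a hedged sketch in Python/PyGObject with a made-up schema id and a plain widget, not gedit's actual code:

# Hedged sketch: the schema id and key are made up for illustration and would
# need to be installed for this to actually run.
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gio, Gtk

check = Gtk.CheckButton(label='Show line numbers')
settings = Gio.Settings.new('org.example.editor')   # made-up schema id
# Keep the widget's 'active' property and the GSettings key in sync, both ways.
settings.bind('show-line-numbers', check, 'active',
              Gio.SettingsBindFlags.DEFAULT)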

The difference would be that instead of forking 200k lines of code, it would be forking maybe 20k lines, which is more manageable to maintain in the long term.

In the case of gedit, making its code more re-usable is exactly what I have been doing for several years, but for another reason: to make it easy to create specialized text editors or small IDEs.

Besides avoiding code duplication, creating a shared library has the nice side effect that the code is (usually) much better documented, and with an API browser like Devhelp it’s a breeze to discover and understand a new codebase, since you get a nice overview of the classes. It’s of course possible to have such documentation for application code, but in practice few developers do that, although it would be a big step towards lowering the barrier to entry for newcomers.

When untangling some code from an application and putting it in a shared library, it is also easier to make the code unit testable (and unit tested!), or at least to write a mini interactive test in the case of frontend code. That makes the code more stable, gets it closer to bug-free, and thus leads to more successful software.

Developing a shared library doesn’t necessarily mean providing backward compatibility for 10 years. Nothing prevents you from bumping the major version of the library every 6 months if needed, making the new version parallel-installable with the previous major versions, so that applications are not forced to update their code when there is an API break.

Creating a library is more difficult, though; API design is hard. But in my opinion it is worth it. GNOME is not only a desktop environment with an application suite, it is also a development platform.

June 17, 2016

Travel.

Middle East

In late March 2016, I attended some Wikimedia gatherings in the Middle East: The WikiArabia conference in Amman (Jordan), a Technical Meetup in Ramallah (Palestinian territories), and the Wikimedia Hackathon in Jerusalem (Israel).



(Video above by Victor Grigas [CC BY-SA 3.0], via Wikimedia Commons)

I gave an introduction to the many technical areas in Wikimedia that anyone can contribute to. I also gave an introduction to Phabricator, the project management suite used (mostly for technical aspects) by the Wikimedia community, which allows managing and following progress in projects and collaborating with developers.



(Video above by Victor Grigas [CC BY-SA 3.0], via Wikimedia Commons)

As I love discussing society and politics I was not sure initially how open and blunt the conversations would be. But on the first evening I was already sitting together with folks from Tunisia, Egypt and Saudi Arabia who were comparing the situations in their home countries. People also let me learn a little bit about what daily life is like in Iraq or Saudi Arabia.

Petra

After a short trip to Petra, we spent an entire day to get to and cross the border between Jordan and the West Bank. If you look at the mere distance it feels ridiculous. It definitely makes you appreciate open borders.

At the border crossing between Jordan and the West Bank

Afterwards, we were very lucky that Maysara (one of our hosts) took the time, and his car, to drive us around the West Bank to visit a bunch of spots, pass settlements, look at walls, or wonder which streets to take (sometimes a checkpoint with a soldier pointing a machine gun at you helps with making decisions).

The old city center of Nablus

Graffiti on graffiti in Ramallah

At some point, Maysara simplified it in a single quoted sentence: For Israelis it’s fear. For Palestinians it’s humiliation.

Street sign in the West Bank

Imagery of dead fighters in Nablus

In Israel, we walked through Jerusalem’s old town, visited Masada and took a bath in the Dead Sea.

Dead Sea: Past war zones

View from Masada (the squares were siege camps)

On the last day I visited the Yad Vashem Holocaust memorial with some co-workers (thanks to Moriel for organizing it). It’s obviously an activity you cannot “look forward to”. I am still impressed by our guide who explained and summarized history extremely well.
The architecture of Yad Vashem makes you go through several rooms on the left and right of the passageway in a chronological way, and our guide mentioned several times that you “cannot yet see what is coming a few rooms (meaning: a few years) later”. The question “Why did Jewish citizens not flee” got answered with “Where would you try to escape to if even outside of the ghettos and concentration camps everybody is hostile”, which explained very well the self-understanding behind founding a state for Jews.

I am incredibly thankful to the many great people I could meet and who shared their points of view on the social and political situation, always in a pretty reflective and respectful way despite all the frustration around.
And whatever my question to locals was, the answer pretty much always was “It’s more complicated than you thought.”

India

Afterwards I spent some time in India to attend Rootconf, visit GeekSkool to learn a lot about why the concept works, and attend GNOME.Asia (thanks to Shobha and everybody organizing it!).

Hardware recycling via badge lanyards

Breakdance competitions

In a society where the path of welfare could be expressed by “walk → motorbike → car”, I received some grins when admitting I had never had a motorbike ride before. In Indian traffic I’d call that an experience, for a tourist like me.

GNOME.Asia 2016 venue

GNOME.Asia 2016 music

As usual, it’s wonderful to finally meet folks in person who you’ve only spoken to online beforehand, and to hang out with old friends. (I sound like a broken record here. I am sorry I could not see everybody. I’ll be back.)

June 16, 2016

Examining transit tracks on the map

Since last time I've implemented some more of Andreas' nice transit routing mockups.

After performing a search, the map view is zoomed and positioned to accommodate the starting and ending points, as can be seen here, and since at this point no itinerary is selected for further viewing, there are no trails drawn on the map yet.

After selecting an individual itinerary it is drawn out in detail as shown in the following shots:



Zooming in on the start, you'll see the walking path, in this case until reaching the first transit leg.

The little icons shown in the map marker for boarding locations will match the transit mode icon as shown in the overview list (buses in this case).

And in case the transit data has information about line colors, this will be reflected in the trail segments on the map as well:


The next step on this journey (pun intended :) ) will be to allow expanding each leg of an itinerary to view the intermediate stops and, in the case of walking, to show the turn point instructions, and also to be able to highlight these on the map.

Oh, and as a little word of warning, in case someone is planning on trying this out at home: there is currently a bug in the latest git master of OpenTripPlanner when running without OSM data loaded in the server (which is what I intend for GNOME usage, since we already have GraphHopper, and OTP would probably not scale well loading many large regions' worth of raw OSM data), so querying for routes using pure coordinates doesn't work in that case, and I'm on a couple-of-weeks-old commit right now.
I might wait until this is resolved. Or I might actually look into trying to query for transit stops near the start and finish points and use those when performing the actual query, which might actually yield better results when selecting a subset of allowed transit modes.

It is also probably time to start trying to find funding for a machine hosting an OTP instance for GNOME :-)

And that's that for today!

Stepping towards Proxies.

Hello GNOME,

I’m doing a GSoC project this summer which, in a single line, is to “handle proxies in our system”. Some of us may never have encountered this headache. The problem starts arising the moment we start thinking of multiple connections with proxies enabled. Firefox or any browser can’t be of help in this case (it doesn’t know which proxy to choose for an entered URL). Environment variables like http_proxy, https_proxy? No! We can’t use a LAN setting with a VPN, so there’s no scope for a generic proxy (proxies are meant to be separate for each connection, like all other network resources). So what do we need?

  • Obtain proxies for multiple connections.
  • It should “just work” (silently, behind the scenes).
  • Proxies shouldn’t be limited to browsers, but available to all clients.

The “just work” philosophy comes from NetworkManager, which as I understand it means “minimize user input as much as you can”. WPAD via DHCP is the most efficient and safest way to obtain and maintain proxies for our networks. So this project is basically about performing WPAD from the core of NetworkManager and storing the details in a store, PacRunner in this case. Clients then ask PacRunner “what is the proxy for this URL?”, and PacRunner answers using the details stored by NM. These exchanges take place via D-Bus.
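
To give an idea of the client side, here is a hedged sketch of asking PacRunner for a proxy over D-Bus from Python, assuming PacRunner's documented org.pacrunner.Client interface and that a proxy configuration has already been pushed to it; the URL is just an example:

# Hedged sketch: assumes pacrunner is running on the system bus and exposes
# its org.pacrunner.Client interface with a FindProxyForURL method.
import dbus

bus = dbus.SystemBus()
client = bus.get_object('org.pacrunner', '/org/pacrunner/client')
iface = dbus.Interface(client, 'org.pacrunner.Client')

# Returns something like "PROXY proxy.example.com:8080" or "DIRECT".
print(iface.FindProxyForURL('http://www.gnome.org/', 'www.gnome.org'))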


To stay in line with the “just work” philosophy, auto mode (DHCP -> WPAD) will be the default until someone opts for “manual”. So there’s a plan to provide a simple window for manually setting up proxies, if users want to override the WPAD-obtained values. I haven’t done the UI design yet; we haven’t worried about it so far. It will be simple, with entry fields (think of what Firefox has), but available for each connection. We’ll simply run nm-connection-editor, click “edit” on whichever connection we want to edit, and a tab named “Proxy” will be there.

I’m lucky to work with David Woodhouse as my mentor, a very, very supportive person. The PacRunner part was independent and we have finished it; the code is in master. The NM part has been divided into two steps, and we are almost done with writing the first one. I hope to see our code in NM as soon as possible.

Thanks!


Making pie charts

This isn't related to "open source software and usability" but I thought this was neat so I wanted to share it here.

On my Coaching Buttons blog, I'm planning a new article about how we divide our time between Lead, Manage and Do tasks. For example, "Lead" is about setting a direction, getting people on board with a vision, and generally providing leadership. You can lead at all levels in an organization. "Manage" is the day-to-day of keeping things running, coordinating people and processes, and allocating resources and budgets. And "Do" are the hands-on tasks like coding, analyzing data, or responding to issues. Email and writing reports can also be "Do" tasks.

The goal of the "Lead-Manage-Do" project is to focus your time appropriately. Think of your available time as a "pie," and how you divide your time as "slices" of the pie. That's your time for the week. You can't make the pie any bigger, unless you want to work through the weekend. How do you spend this available time? I'm looking to build on my Lead-Manage-Do since I've moved to my new role.

My new post will require pie charts to illustrate how I spend my time. I could generate a few quick pie charts in a spreadsheet and post them to my blog. That's what I did in my previous blog posts on this topic. But lately, I have tried to use more SVG graphics on my personal websites, due to the reduced size and greater flexibility. So I wondered if I could create a pie chart using SVG.

When I searched for examples in creating pie charts in SVG, many sources pointed me to Designing Flexible, Maintainable Pie Charts With CSS And SVG by Lea Verou. It's an interesting article. But I found the article's method limiting, using one slice in a larger pie. My "Lead-Manage-Do" charts need a balance of three pie slices. There's a section at the end using a conical gradient method that might do the job, but I knew there had to be a cleaner way to do it.

And there is: SVG arcs.

Mozilla Developer Network has a great article on SVG Paths. Skip to the end where they talk about arcs. The general syntax for an arc is:
A rx ry x_axis_rotation large_arc_flag sweep_flag x y
Arcs are tricky because the arc can take different paths (clockwise or counterclockwise) and they can go "in" or "out." And if your arc is based on an oval, you can rotate the oval. Hence the additional arguments. The MDN article explains the large arc flag and the sweep flag:
The first argument is the large-arc-flag. It simply determines if the arc should be greater than or less than 180 degrees; in the end, this flag determines which direction the arc will travel around a given circle. The second argument is the sweep-flag. It determines if the arc should begin moving at negative angles or positive ones, which essentially picks which of the two circles you will travel around.
You can use the SVG arc to create a neat pie wedge.

If you haven't used SVG before, think of SVG as a form of Turtle graphics (I suppose this analogy only helps if you have used Turtle graphics). You define a shape by moving to (M) a starting x,y position, then you can draw lines (L) or arcs (A) or other elements as needed. The "Close Path" command (Z) automatically closes the shape by connecting a "line" from the ending position back to the starting position.

And with that, we can define a pie wedge! Let's define a box 400×400. A circle in this box would be centered at 200,200. Since it's a circle, I don't need an x axis rotation, so I'll set that to zero. By default, the SVG will have 0,0 in the upper-left corner.

To draw a pie wedge, move (M) to a starting point on the circle's edge, then draw an arc defined by that circle. To close the pie wedge, create a line back to the circle's center at 200,200. Since I'm using absolute coordinates, use uppercase M, A, and L. Here's the code:
<path d="M 400 200 A 200 200 0 0 0 200 0 L 200 200 Z" style="fill: green;">
</path>
A simple example:

That defines a green wedge from 400,200 to 200,0 (0 degrees to 90 degrees) and a line back to the center of the circle at 200,200. The Z command automatically closes the shape by projecting a line from the end (200,200) to the start (400,200). The above drawing also provides a pink circle with red center dot and a red outline so you can see the circle that defines the arc. The arc has x radius 200 and y radius 200.

To help me create multiple pie wedges that can start and stop at any angle, I needed a little extra help. To run the math, I created a simple spreadsheet where I define the starting θ and ending θ, both in degrees, and the spreadsheet converts that to radians and calculates appropriate x,y values based on the size of the pie chart I want to make. For a circle this size, two decimal places is more than enough precision. (For a larger SVG image, you might use only integer coordinates. For a smaller SVG image, you might need another decimal place.)

You can easily work the math out on your own. An x,y coordinate on the edge of a circle is defined by rcosθ and rsinθ. With that, it's trivial to work out the start and stop coordinates for the arc. Just remember 0,0 is at the top left and positive values are right and down. Here's some pseudo-code for the x,y of any point on a circle of radius r, given an angle θ:
x = xcenter + r cosθ
y = ycenter - r sinθ

Also remember that the large arc flag "determines if the arc should be greater than or less than 180 degrees" (from the MDN article). So the limitation here is that my wedges need to be less than 180 degrees if I use a zero large arc flag.

And you can take a shortcut. If you first define a circle behind the wedges, you really only need to create n–1 wedges. This is especially helpful if you know one of the wedges will be larger than 180 degrees.
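
If you'd rather script the math than keep a spreadsheet, here's a small Python helper, a sketch of the same arithmetic; the coordinates come out close to the values used below, give or take rounding. It builds the path string for any wedge smaller than 180 degrees:

import math

def wedge_path(cx, cy, r, start_deg, end_deg):
    """SVG path for a pie wedge of less than 180 degrees.

    Angles are measured counter-clockwise from "3 o'clock"; SVG's y axis
    points down, which is why the sine term is subtracted.
    """
    start, end = math.radians(start_deg), math.radians(end_deg)
    x1, y1 = cx + r * math.cos(start), cy - r * math.sin(start)
    x2, y2 = cx + r * math.cos(end), cy - r * math.sin(end)
    # large_arc_flag=0 limits us to wedges under 180 degrees; sweep_flag=0
    # makes the arc travel counter-clockwise in SVG's coordinate system.
    return ('M %.2f %.2f A %g %g 0 0 0 %.2f %.2f L %g %g Z'
            % (x1, y1, r, r, x2, y2, cx, cy))

print(wedge_path(200, 200, 200, 30, 110))
# M 373.21 100.00 A 200 200 0 0 0 131.60 12.06 L 200 200 Z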

Putting it all together, you have a 2D pie chart in SVG:

The SVG code first draws a background circle (orange), then defines a red wedge from 30 to 110 degrees, and a blue wedge from 110 to 280 degrees. Visually, this leaves an orange wedge from 280 to 30 degrees.
<svg height="400" width="400">
<circle cx="200" cy="200" fill="orange" r="200"></circle>
<path d="M 373.21 100.02 A 200 200 0 0 0 131.66 12.04 L 200 200 Z" style="fill: firebrick;"></path>
<path d="M 131.66 12.04 A 200 200 0 0 0 234.55 396.99 L 200 200 Z" style="fill: royalblue;"></path>
</svg>
You can also apply other styles to create outlines, etc. on the pie chart. But I'll leave that to you.

Batch Renaming – Call for design ideas

Hello,

Alex Pandelea has been working on batch renaming for Nautilus. So far we thought about taking most of the design ideas from what Finder, the file manager of Mac OS X, does.

Here is a video of Finder’s batch renaming

However, after trying a prototype implementation following Finder, I have been changing my mind about the design being good enough.

Let’s take a look at Finder’s design and find the ugly parts.

First mode, “replace text”


I can see a few issues:

  • The “Replace Text” combobox doesn’t really give a hint that it’s the combobox for changing modes. It looks like part of the mode itself and doesn’t appear any more important than the entries under it or the cancel and rename buttons.
  • The cancel and rename buttons are at the same level as the example and information. I would prefer to have the information and example on one line, and the final decision about cancelling or renaming apart from those. This is now usual in dialogs following the GNOME HIG, where the dialog buttons occupy the whole width and sit on a separate line.

Let’s change the mode. “Add Text”


One big issue here: suddenly the entry and combobox for the mode are on the same line as the combobox that changes the mode. I guess they wanted to create a connection between the UI items there, so that it reads like a sentence, “Add text … blah blah … after name”. But that is not the case in other modes, so it’s more confusing than anything.

Next mode is “Format”


And this is the one with the most UX issues, in my opinion. It has the issues of the first mode, plus:

  • There is no visual connection between “Name format” and “Start numbers at”.
  • “Start numbers at” is only useful for “Name and Index”, not for the other name format modes, yet it is shown for all of them.
  • It uses comboboxes for just three and two options, which is kind of useless. It’s better to just present all the options: much more straightforward, much clearer, and one click less.
  • The labels “Name format” and “Custom format” are misleading. Either I want a custom format or I want something else.

Apart from these issues, I also noticed it doesn’t give you feedback beforehand if some file in the directory conflicts with the names that are going to be generated. We want this in Nautilus.

So I came up with this draft as my idea to cover these issues (my drawing skills are not the best). Note that the mode selected is “Format”, which is the most complex UI-wise:

batch renaming mockup.png

I would like to know your thoughts about this, and even better, to engage you in providing designs and ideas. Now it’s your turn :)

Here’s the SVG of the picture above; you can modify it and add your ideas: https://my.owndrive.com/index.php/s/TK4JfCPOzUnDlzN


Translation parameters in angular-gettext

As a general rule, I try not to include new features in angular-gettext: small is beautiful and for the most part I consider the project as finished. However, Ernest Nowacki just contributed one feature that was too good to leave out: translation parameters.

To understand what translation parameters are, consider the following piece of HTML:

<span translate>Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}.</span>

The resulting string that needs to be handled by your translators is both ugly and hard to use:

msgid "Last modified: {{post.modificationDate | date : 'yyyy-MM-dd HH:mm'}} by {{post.author}}."

With translation parameters you can add local aliases:

<span translate
      translate-params-date="post.modificationDate | date : 'yyyy-MM-dd HH:mm'"
      translate-params-author="post.author">
    Last modified: {{date}} by {{author}}.
</span>

With this, translators only see the following:

msgid "Last modified: {{date}} by {{author}}."

Simply beautiful.

You’ll need angular-gettext v2.3.0 or newer to use this feature.

More information in the documentation: https://angular-gettext.rocketeer.be/dev-guide/translate-params/.



June 15, 2016

Long term support for GTK+

Dear Morten,

A belief that achieving stability can be done after most of the paid contributors have run off to play with new toys is delusional. The record does not support it.

The record (in terms of commit history) seems to not support your position — as much as you think everyone else is “delusional” about it, the commit log does not really lie.

The 2.24.0 release was cut in January 2011, five and a half years ago. No new features, no new API. Precisely what would happen with the new release plan, except that the new plan would also give a much better cadence to this behaviour.

Since then, the 2.24 branch — i.e. the “feature frozen” branch has seen 873 commits (as of this afternoon, London time), and 30 additional releases.

Turns out that people are being paid to maintain feature-frozen branches because that’s where the “boring” bits are — security issues, stability bugs, etc. Volunteers are much more interested in getting the latest and greatest feature that probably does not interest you now, but may be requested by your users in two years.

Isn’t it what you asked multiple times? A “long term support” release that gives you time to port your application to a stable API that has seen most of the bugs and uncertainty already squashed?

Gtk+ Versioning

New thoughts are being expressed about Gtk+ versioning.

There is something about numbering. Whatever. The numbering of Gtk+ versions is a problem I do not have. A problem I do not expect to have. Hence “whatever”.

But there is also a message about stability and it is a scary one.

A cynical reading or, one might claim, any reading consistent with persistent prior behaviour, would come to the conclusion that the Gtk+ team wants to be released from all responsibility of medium and long term stability. If, for no good reason, they feel like breaking the scroll wheel behaviour of all applications again then evidently so be it.

But maybe that is too dark a view. There is some hint that there will be something that is stable. I just do not see how the versioning plan can possibly provide that.

What is missing from the plan is a way to make things stable. A snapshot at a semi-random time does not do it. Specifically, in order to provide stability of the “stable” release, I believe that a 3-6 months long period before or after the stable release is declared should be devoted exclusively to making the release stable. Bug fixes, automated tests, running with Valgrind, Coverity, gcc -fsanitize=undef, bug triaging, etc. No new feature work.

A belief that achieving stability can be done after most of the paid contributors have run off to play with new toys is delusional. The record does not support it.

GUADEC 2016

I’m going to GUADEC 2016 Karlsruhe, Germany

I’ve arranged pretty much everything for Karlsruhe, meaning transportation (train) and the hotel. According to the feedback I received, public transport will be pretty terrible during the conference, so make sure to get a hotel nearby. The one I chose (Leonardo Hotel) is about a 20 minute walk away. Make sure to read the reviews of this hotel. The closest hotel will be the Achat, and the GUADEC website should soon have a discount code for it. The Achat with discount is still more expensive than the Leonardo, though the Achat should be worth it.

Aside from the likely non-working (UPDATE/correction: mostly working except maybe 1 tram line) public transport, they’ll also have bikes for around 1 EUR/30 minutes. So that’s probably what I’ll use to get around. The bikes likely require a data connection to rent.

To get there I’ll go by train; it seemed like the best option (100 EUR round trip, about 6 hours one way). A coach would take 8+ hours and cost at least 100 EUR. For anyone looking into doing the same, I recommend booking asap, as trains are getting more expensive. I gambled on not paying the additional 8 EUR to reserve a seat on the high speed train part. Let’s see if that was a wise choice :P I still remember the time I reserved and the train was pretty much empty, as well as the time I did not and had to change seats multiple times!

User conference

In case you use GNOME, the GUADEC conference is also for users. In case you’re wondering if you’ll fit in: everyone is usually super friendly. The first year you go to see talks and maybe have a few drinks (alcohol is optional). The second year you talk more with the people you met the year before. From the third year onward the talks are an excuse to go, and the only talks you see are the ones where the speakers asked you to please attend :P

June 14, 2016

Dispatches from the GTK+ hackfest

A quick update from the GTK+ hackfest. I don’t really want to talk about the versioning discussion, except for two points:

First, I want to apologize to Allison for encouraging her to post about this – I really didn’t anticipate the amount of uninformed, unreasonable and hateful reactions that we received.

Second, I want to stress that we are just at the beginning of this discussion; we will not make any decisions until after GUADEC. Everybody who has opinions on this topic should feel free to give us feedback. We are of course particularly interested in feedback from parties who will be directly affected by changes in GTK+ versioning, e.g. GStreamer and other dependent libraries.

Today’s morning discussion was all about portals. Portals are high-level D-Bus APIs that will allow sandboxed applications to request access to outside resources, such as files or pictures, or just to show URIs or launch other applications. We have early implementations of some of these now, but after the valuable feedback in today’s discussion, we will likely make some changes to them.

An expectation for portals is that there will be a user interaction before an operation is carried out, to keep users in control and let them cancel requests. The portal UIs will be provided by the desktop session.

We want to have a number of portals implemented during the summer, starting with the most important ones, like the file chooser, ‘open a URI’, the application launcher, proxy support, and a few others. The notes from this discussion can be found here.

In the afternoon, the discussion moved to developer documentation: how to improve it, let readers provide feedback and suggestions, and integrate them into gnome-builder.

We also discussed ways to make GTK+ better for responsive designs.

“Gtk 5.0 is not Gtk 5”

…and it’s still OK.

Today is the second day at the GTK hackfest, and we are discussing portals, interactions between applications, and security. That’s not really what this post is about.

Yesterday’s post got quite a lot of feedback. I had a number of discussions with various stakeholders on IRC, and got many nice comments on the post itself. People from a few different distributions, a couple of non-GNOME Gtk-based desktop environments, and a number of app authors have all weighed in positively.

The last post explained the new version policy that we hope to use for the future of Gtk. This post aims to explain some of the benefits of that system, and also why we considered, but ultimately rejected some other options.

The first major takeaway from the new system (and the part that people have been most enthusiastic about) is the fact that this gives many application authors and desktop environments something that they have been asking for for a long time: a version of Gtk that has the features of Gtk 3, but the stability of Gtk 2. Under the proposed scheme, that version of Gtk 3 will be here in about a year. Gtk version (let’s say) 3.26 will be permanently stable, with all of the nice new features that have been added to Gtk 3 over the last 5+ years.

The new system also means that we aim for doing this for our users once per two years from now on. Part of the problem with deciding between Gtk 3 and Gtk 2 was that Gtk 2 was just so old. If we keep the intended two-year cadence, then applications will never have to target a release of Gtk that is more than 18 months older than the most current release. This will reduce a lot of the pain with sticking with the stable version, which is why we anticipate that many more people will choose to do this.

The new arrangement also formalises what a lot of people have been complaining about for a while: Gtk 3 releases, although under an official policy of API compatibility, have often failed to completely live up to this promise. We have also been less than straightforward about exactly what the stability guarantees in Gtk 3 are, which has led to an awful lot of reasonable hesitation about uptake. The new system makes it clear that, during the first 18 months of releases, the Gtk 4 series is absolutely going to be unstable, and you should not target it. On the other hand, when it does become stable, this fact will be very clearly communicated, and you can trust that there will not be any changes of the sort that have hurt some of the users of Gtk 3.

In short: we get more freedom to make major changes during minor releases during the “early days” of a new major version. We advertise this fact clearly. The speed of development will increase. Once we are done breaking things, we very clearly advertise that “Gtk 4 is stable” (around Gtk 4.6). At this point, people know that they can port from Gtk 3 to Gtk 4 and get two years worth of new features with none of the fear about instability.

Most of the negative feedback on yesterday’s post came in the form of confusion about the seemingly complicated scheme that we have selected for the version numbers. Gtk 4.0 is unstable, and so is 4.2 and 4.4, but suddenly 4.6 is the stable version that everyone should use. Meanwhile, Gtk 5.0 is released, which means that everyone should (of course!) use Gtk 4.6… What’s going on here? Have you all lost your minds? Haven’t you people heard about semantic versioning?

This apparent oddness is something that we discussed in quite a lot of detail, and is also the part of the plan that we are probably most open to making changes to as we discuss it in the following months. I don’t think it will change very much though, and here is why:

One possibility that we discussed was to release Gtk 4.0 (gtk-4.0.pc) and tell people that this is the new stable API that they should use. This would not work well with our current situation. There are currently many hundreds of applications using Gtk 3 which would have to go through a mostly-pointless exercise of changing to “gtk-4.pc”, with almost no changes, except a different version. We wanted these people to just be able to continue using Gtk 3, which would suddenly become much more stable than it has been. This is pretty much exactly what everyone has been asking for, in fact.

Let’s say we ignored that and did Gtk 4.0 as the new stable release, anyway. We would want to start working on “Gtk 5” immediately. Where would that work go? What would the version numbers look like? Gtk 4.90-rc, progressing toward Gtk 5.0? We would never have any minor numbers except “.0” followed by alpha/RC numbers in the 90s. This also means that most releases in the “4” series (4.90, etc) would have “gtk-5.pc”. This approach was just too weird. At the end of the 4 cycle, it is reasonable to imagine that instead of 4.7, we might call it 4.99, and have a pkg-config file named “gtk-5.pc”, but we will never have a non-development version (non-odd minor number) with this inconsistency.

We also have to consider that GNOME releases will want to use the new features of Gtk, and that we need to have a versioning scheme that makes sense in context of GNOME. We need to have the Gtk minor releases that go along with each GNOME version. These releases will also need to receive micro releases to fix bugs in the same way that they always have.

Finally: we don’t believe that there is a huge amount of value in following every aspect of semantic versioning just for the sake of it. This sort of thing might seem interesting to users who are not developers, but if you are an actual application author who develops applications in Gtk, then it stands to reason that you probably know some basics about how Gtk works. Part of this knowledge will be understanding the versioning. In any case, if you look at our proposed system, you see that we still mostly follow semver, with only one exception: we allow for API breaks between early even-numbered minor versions. According to many of our critics, we were already kinda doing that anyway.

So that’s all for version numbers.

There is one other complaint that I encountered yesterday that I’d like to address. There is a perception that we have been bad about reviewing patches against Gtk 2.24, and that this lack of attention will mean that we will treat Gtk 3.26 (or whatever) in the same way. This is something that we discussed seriously at the hackfest, after it was raised. One simple thing that I can say is that Gtk 2 has actually seen an awful lot of attention. It has had 30 point releases, including one three months ago. It has had over 100 patches applied so far this year. We could still be doing better. A lot of this comes down to insufficient resources. At the same time, it’s pretty awkward to say this, when people are showing up with patches-in-hand, and we are simply not reviewing them. These people, with a demonstrated interest in bug-fixing stable releases could become new contributors to our project, working in an area where they are sorely needed. We are going to continue discussions about how we can improve our approach here.

tl;dr: The approach we have taken lets everyone make informed decisions about which Gtk version to use, based on public and official information. Today many people say “I’d like to use Gtk 3, but I don’t think it has stabilised enough yet.” Fair enough — we heard you. Soon, Gtk 3 will be much more stable, at which point it will be safer to upgrade. At the same time, we are not going to pretend that Gtk 4.0 is stable at all, or that Gtk 4.2 will look anything like it. When Gtk 4 is stable, we will tell you.

GTK versioning and distributions

Allison Lortie has provoked a lot of comment with her blog post on a new proposal for how GTK is versioned. Here's some more context from the discussion at the GTK hackfest that prompted that proposal: there's actually quite a close analogy in how new Debian versions are developed.

The problem we're trying to address here is the two sides of a trade-off:

  • Without new development, a library (or an OS) can't go anywhere new
  • New development sometimes breaks existing applications

Historically, GTK has aimed to keep compatible within a major version, where major versions are rather far apart (GTK 1 in 1998, GTK 2 in 2002, GTK 3 in 2011, GTK 4 somewhere in the future). Meanwhile, fixing bugs, improving performance and introducing new features sometimes results in major changes behind the scenes. In an ideal world, these behind-the-scenes changes would never break applications; however, the world isn't ideal. (The Debian analogy here is that as much as we aspire to having the upgrade from one stable release to the next not break anything at all, I don't think we've ever achieved that in practice - we still ask users to read the release notes, even though ideally that wouldn't be necessary.)

In particular, the perceived cost of doing a proper ABI break (a fully parallel-installable GTK 4) means there's a strong temptation to make changes that don't actually remove or change C symbols, but are clearly an ABI break, in the sense that an application that previously worked and was considered correct no longer works. A prominent recent example is the theming changes in GTK 3.20: the ABI in terms of functions available didn't change, but what happens when you call those functions changed in an incompatible way. This makes GTK hard to rely on for applications outside the GNOME release cycle, which is a problem that needs to be fixed (without stopping development from continuing).

The goal of the plan we discussed today is to decouple the latest branch of development, which moves fast and sometimes breaks API, from the API-stable branches, which only get bug fixes. This model should look quite familiar to Debian contributors, because it's a lot like the way we release Debian and Ubuntu.

In Debian, at any given time we have a development branch (testing/unstable) - currently "stretch", the future Debian 9. We also have some stable branches, of which the most recent are Debian 8 "jessie" and Debian 7 "wheezy". Different users of Debian have different trade-offs that lead them to choose one or the other of these. Users who value stability and want to avoid unexpected changes, even at a cost in terms of features and fixes for non-critical bugs, choose to use a stable release, preferably the most recent; they only need to change what they run on top of Debian for OS API changes (for instance webapps, local scripts, or the way they interact with the GUI) approximately every 2 years, or perhaps less often than that with the Debian-LTS project supporting non-current stable releases. Meanwhile, users who value the latest versions and are willing to work with a "moving target" as a result choose to use testing/unstable.

The GTK analogy here is really quite close. In the new versioning model, library users who value stability over new things would prefer to use a stable-branch, ideally the latest; library users who want the latest features, the latest bug-fixes and the latest new bugs would use the branch that's the current focus of development. In practice we expect that the latter would be mostly GNOME projects. There's been some discussion at the hackfest about how often we'd have a new stable-branch: the fastest rate that's been considered is a stable-branch every 2 years, similar to Ubuntu LTS and Debian, but there's no consensus yet on whether they will be that frequent in practice.

How many stable versions of GTK would end up shipped in Debian depends on how rapidly projects move from "old-stable" to "new-stable" upstream, how much those projects' Debian maintainers are willing to patch them to move between branches, and how many versions the release team will tolerate. Once we reach a steady state, I'd hope that we might have 1 or 2 stable-branched versions active at a time, packaged as separate parallel-installable source packages (a lot like how we handle Qt). GTK 2 might well stay around as an additional active version just from historical inertia. The stable versions are intended to be fully parallel-installable, just like the situation with GTK 1.2, GTK 2 and GTK 3 or with the major versions of Qt.

For the "current development" version, I'd anticipate that we'd probably only ship one source package, and do ABI transitions for one version active at a time, a lot like how we deal with libgnome-desktop and the evolution-data-server family of libraries. Those versions would have parallel-installable runtime libraries but non-parallel-installable development files, again similar to libgnome-desktop.

At the risk of stretching the Debian/Ubuntu analogy too far, the intermediate "current development" GTK releases that would accompany a GNOME release are like Ubuntu's non-LTS suites: they're more up to date than the fully stable releases (Ubuntu LTS, which has a release schedule similar to Debian stable), but less stable and not supported for as long.

Hopefully this plan can meet both of its goals: minimize breakage for applications, while not holding back the development of new APIs.

June 13, 2016

“Gtk 4.0 is not Gtk 4”

… and that’s OK.

This morning in Toronto, many GNOME developers met to kick off the GTK hackfest.

The first topic that we discussed was how to deal with a problem that we have had for some time: the desire to create a modern toolkit with new features vs. the need to keep a stable API. Gtk 3 has seen a very rapid pace of development over its lifetime, and we have a much better toolkit than we started out with. Unfortunately, this has often come at the cost of less-than-perfect API stability. In addition, the need to keep the API mostly stable for years at a time has somewhat slowed the pace of development and created a hesitation to expose “too much” API for fear of having to live with it “forever”.

We want to improve this, and we have a plan.

Everyone present for the discussion was excited about this plan as the best possible way forward, but it is not yet official. We will need to have discussions about this with distributors and, particularly, with the GNOME release team. Those discussions are likely to occur over the next couple of months, leading up to GUADEC.

We are going to increase the speed at which we release new major versions of Gtk (i.e. Gtk 4, Gtk 5, Gtk 6…). We want to target a new major release every two years. This period was chosen to line up well with the release cadence of many popular Linux distributions.

The new release of Gtk is going to be fully parallel-installable with the old one. Gtk 4 and Gtk 3 will install alongside each other in exactly the same way as Gtk 2 and Gtk 3 — separate library name, separate pkg-config name, separate header directory. You will be able to have a system that has development headers and libraries installed for each of Gtk 2, 3, 4 and 5, if you want to do that.

Meanwhile, Gtk 4.0 will not be the final stable API of what we would call “Gtk 4”. Every six months, the new release (Gtk 4.2, Gtk 4.4, Gtk 4.6) will break API and ABI vs. the release that came before it. These incompatible minor versions will not be fully parallel-installable; they will use the same pkg-config name and the same header file directory. We will, of course, bump the soname with each new incompatible release — you will be able to run Gtk 4.0 apps alongside Gtk 4.2 and 4.4 apps, but you won’t be able to build them on the same system. This policy fits the model of how most distributions think about libraries and their “development packages”.
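
To make this concrete, here is a rough sketch, in Meson build syntax, of how an application might express the two cases; the ‘gtk+-4.0’ pkg-config name and the version numbers are illustrative assumptions, not anything that has been decided:

    # Track a long-term stable series: the pkg-config name identifies the
    # major version, and any release in that series will satisfy it.
    gtk_dep = dependency('gtk+-3.0', version: '>= 3.20')

    # Or follow the unstable series: 4.0, 4.2 and 4.4 would share a single
    # (assumed) pkg-config name, so only a version constraint can pin a
    # particular snapshot of the API.
    #gtk_dep = dependency('gtk+-4.0', version: '>= 4.2')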

Each successive minor version grows toward a new stable API. Before each new “dot 0” release, the last minor release of the previous major version will be designated as that series’ “API stable” release; for Gtk 4, for example, we aim for this to be 4.6 (and so on for future major releases). Past this point there will be no more disruptions: this “stable API” will be very stable. There will certainly not be the kind of breaks that we have seen between recent Gtk minor releases.

In this way, “Gtk 4.0” is not “Gtk 4”. “Gtk 4.0” is the first raw version of what will eventually grow into “Gtk 4”, sometime around Gtk 4.6 (18 months later).

The first “API stable” version under this new scheme is likely to be something like Gtk 3.26.

Application authors will have two main options.

The first option is to target a specific version of the Gtk API, appearing once every two years, that stays stable forever. Applications can continue to target this API until the end of time, and it will be available in distributions for as long as there are applications that depend on it. We expect that most third party applications will choose this path.

If two years is too long to wait for new features, application authors can also choose to target the unstable releases. This is a serious undertaking: it requires a commitment to maintaining the application over a two-year period and keeping up with any changes in the toolkit. We expect that GNOME applications will do this.

Third party application authors can also follow the unstable releases of Gtk, but we encourage caution here. This will work for GNOME because if a single maintainer disappears, there will be others to step in and help keep the app up to date, at least until the next 2 year stable release. This approach may also work for other large projects that are similar to GNOME. With individual app authors, life can change a lot during two years, and they may not be able to make such a commitment. Distributions may decide that it is too risky to ship such applications (or update to new versions), if they use the unstable API.

In general, we feel that this approach is the best possible combination of two distinct and valid desires. Gtk users that have been asking for stability will get what they need, and the authors of Gtk will get additional freedom to improve the toolkit at a faster pace.

… now let’s see what we discuss this afternoon.

June 08, 2016

Experiments in Meson

Last GUADEC I attended Jussi Pakkanen’s talk about his build system, Meson; if you weren’t there, I strongly recommend you watch the recording. I left the talk impressed, and I wanted to give Meson a try. Cue 9 months later: a really nice blog post from Nirbheek on how Centricular is porting GStreamer from autotools to Meson finally pushed me to spend some evening and weekend time learning Meson.

I decided to use the simplest project I maintain, the one with the minimal amount of dependencies and with a fairly clean autotools set up — i.e. Graphene.

Graphene has very little overhead in terms of build system by itself; all it needs are the following (a rough Meson sketch of these checks is shown right after the list):

  • a way to check for compiler flags
  • a way to check for the existence of headers and types
  • a way to check for platform-specific extensions, like SSE or NEON
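
For a feel of what these checks look like in Meson, here is a minimal sketch; the flag, header and type names are placeholders rather than Graphene’s real ones:

    # Sketch only: flag, header and type names are placeholders.
    cc = meson.get_compiler('c')

    # a way to check for compiler flags
    if cc.has_argument('-Wpointer-arith')
      add_project_arguments('-Wpointer-arith', language: 'c')
    endif

    # a way to check for the existence of headers and types
    have_stdint = cc.has_header('stdint.h')
    have_ssize = cc.has_type('ssize_t', prefix: '#include <sys/types.h>')

    # a way to check for platform-specific extensions, like SSE or NEON
    have_sse2 = cc.has_argument('-msse2')
    have_neon = cc.has_argument('-mfpu=neon')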

Additionally, it needs a way to generate documentation and introspection data, but those are mostly hidden in weird incantations provided by other projects, like gtk-doc and gobject-introspection, so most of the complexity is hidden from the maintainer (and user) point of view.

Armed with little more than the Meson documentation wiki and the GStreamer port as an example, I set off towards the shining new future of a small, sane, fast build system.

The Good

Meson uses additional files, so I didn’t have to drop the autotools set up while working on the Meson one. Once I’m sure that the results are the same, I’ll be able to remove the various configure.ac, Makefile.am, and friends, and leave just the Meson file.


Graphene generates two header files during its configuration process:

  • a config.h header file, for internal use; we use this file to check if a specific feature or header is available while building Graphene itself
  • a graphene-config.h header file, for public use; we expose this file to Graphene users for build time detection of platform features

While the autotools code that generates config.h is pretty much hidden from the developer perspective, with autoconf creating a template file for you by pre-parsing the build files, the part of the build system that generates the graphene-config.h one is pretty much a mess of shell script, cacheable variables for cross-compilation, and random m4 escaping rules. Meson, on the other hand, treats both files exactly the same way: generate a configuration object, set variables on it, then take the appropriate configuration object and generate the header file — with or without a template file as an input:

# Internal configuration header
configure_file(input: 'config.h.meson',
               output: 'config.h',
               configuration: conf)

# External configuration header
configure_file(input: 'graphene-config.h.meson',
               output: 'graphene-config.h',
               configuration: graphene_conf,
               install: true,
               install_dir: 'lib/graphene-1.0/include')
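
The conf and graphene_conf objects used above are plain configuration_data() objects populated earlier in the build file; here is a hedged sketch, reusing the hypothetical check results from the snippet earlier in this post (the entry names are made up for illustration, not Graphene’s real ones):

# Sketch only: entry names are illustrative.
conf = configuration_data()
conf.set('HAVE_STDINT_H', have_stdint)                # internal config.h entries
conf.set_quoted('GRAPHENE_VERSION', meson.project_version())

graphene_conf = configuration_data()
graphene_conf.set10('GRAPHENE_HAS_SSE2', have_sse2)   # public graphene-config.h entries
graphene_conf.set10('GRAPHENE_HAS_NEON', have_neon)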

While explicit is better than implicit, at least most of the time, having things taken care of for you avoids the boring bits and, more importantly, avoids getting the boring bits wrong. If I had a quid for every broken invocation of the introspection scanner I’ve ever seen or had to fix, I’d probably retire on a very small island. In Meson, this is taken care of by a function in the gnome module:

    import('gnome')

    # Build introspection only if we enabled building GObject types
    build_gir = build_gobject
    if build_gobject and get_option('enable-introspection')
      gir = find_program('g-ir-scanner', required: false)
      build_gir = gir.found() and not meson.is_cross_build()
    endif

    if build_gir
      gir_extra_args = [
        '--identifier-filter-cmd=' + meson.source_root() + '/src/identfilter.py',
        '--c-include=graphene-gobject.h',
        '--accept-unprefixed',
        '-DGRAPHENE_COMPILATION',
        '--cflags-begin',
        '-I' + meson.source_root() + '/src',
        '-I' + meson.build_root() + '/src',
        '--cflags-end'
      ]
      gnome.generate_gir(libgraphene,
                         sources: headers + sources,
                         namespace: 'Graphene',
                         nsversion: graphene_api_version,
                         identifier_prefix: 'Graphene',
                         symbol_prefix: 'graphene',
                         export_packages: 'graphene-gobject-1.0',
                         includes: [ 'GObject-2.0' ],
                         install: true,
                         extra_args: gir_extra_args)
    endif
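
As an aside, the enable-introspection option queried with get_option() above lives in a separate meson_options.txt file; a minimal sketch (the description string is mine, and the real option definition may differ):

    # meson_options.txt (sketch)
    option('enable-introspection',
           type: 'boolean',
           value: true,
           description: 'Build gobject-introspection data')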

Meson generates Ninja rules by default, and it’s really fast at that: I can get a fully configured Graphene build set up in less than a couple of seconds. On top of that, Ninja is incredibly fast. The whole build of Graphene takes less than 5 seconds — and that includes building the tests and benchmarks, something I had to make build on demand in the autotools setup because it added a noticeable delay to the build. Now I always know if I’ve just screwed up the build, and not just when I run make check.


Jussi is a very good maintainer, helpful and attentive to issues reported against his project, and quick at reviewing patches. The terms for contributing to Meson are fairly standard, and the barrier to entry is very low. For a project like a build system, which interacts with and enables other projects, this is a very important thing.

The Ugly

As I said, Meson has some interesting automagic handling of the boring bits of building software, like the introspection data. But there are other boring bits that do not have convenience wrappers, and thus you end up with overly verbose sections in your meson.build — and while it’s definitely harder to get those wrong compared to autoconf or automake, it can still happen.

Even in the case of automagic handling, though, there are cases when you have to deal with some of the magic escaping from under the rug. Generally it’s not hard to understand what’s missing or what’s necessary, but it can be a bit daunting when you’re just staring at a Python exception barfed on your terminal.


The documentation is kept in a wiki, which is generally fine for keeping it up to date; but it’s hard to search — as all wikis are — and hard to visually scan. I’ve lost count of the times I had to search for all the methods on the meson built-in object, and I never remember which page I have to search for, or in.

The inheritance chain for some objects is mentioned in passing, but it’s hard to track; which methods does the test object have? What kind of arguments does the compiler.compiles() method have? Are they positional or named?

The syntax and API reference documentation should probably be generated from the code base, and look more like an API reference than a wiki.


Examples are hard to come by. I looked at the GStreamer port, but I also had to start looking at Meson’s own test suite.


Modules are all in tree, at least for the time being. This means that if I want to add an ad hoc module for a whole complex project like, say, GNOME, I’d have to submit it to upstream. Yeah, I know: bad example, Meson already has a GNOME module; but the concept still applies.


Meson does not do dist tarballs. I’ve already heard people being skeptical about this point, but I personally don’t care that much. I can generate a tarball from a Git tag, and while it won’t be self-hosting, it’s already enough to get a distro going. Seriously, though: building from a Git tag is a better option than building from a tarball, in 2016.

The Bad

The shocking twist is that nothing stands out as “bad”. Mostly, it’s just ugly stuff — caused either by missing convenience functionality that will, by necessity, appear once people start using Meson more, or by the mere fact that all build systems are inherently ugly.

On the other hand, there’s some badness in the tooling around project building. For instance, Travis-CI does not support it, mostly because they use an ancient version of Ubuntu LTS as the base environment. Jhbuild does not have a Meson/Ninja build module, so we’ll have to write that one; same thing for GNOME Builder. While we wait, having a dummy configure script or a dummy Makefile would probably help.

These are not bad things per se, but they definitely block further adoption.

tl;dr

I think Meson has great potential, and I’d love to start using it more for my projects. If you’re looking for a better, faster, and more understandable build system then you should grab Meson and explore it.
