24 hours a day, 7 days a week, 365 days per year...

July 26, 2016

Testing for Usability

I recently came across a copy of Web Redesign 2.0: Workflow That Works (book, 2005) by Goto and Cotler. The book includes a chapter on "Testing for Usability" which is brief but informative. The authors comment that many websites are redesigned because customers want to add new features or drive more traffic to the website. But they rarely ask the important questions: "How easy is it to use our website?" "How easily can visitors get to the information they want and need?" and "How easily does the website 'lead' visitors to do what you want them to do?" (That last question is especially interesting for certain markets, such as e-commerce.)

The authors highlight this important attribute of usability: (p. 212)
Ease of use continues to be a top reason why customers repeatedly return to a site. It usually only takes one bad experience for a site to lose a customer. Guesswork has no place here; besides, you are probably too close to your site to be an impartial judge.

This is highly insightful, and underscores why I continue to claim that open source developers need to engage with usability. As an open source developer, you are very close to the project you are working on. You know where to find all the menu items, you know what actions each menu item represents, and you know how to get the program to do what you need it to do. This is obvious to you because you wrote the program. It probably made a lot of sense to you at the time to label a button or menu item the way you did. Will it make the same sense to someone who hasn't used the program before?

Goto and Cotler advise testing your current, soon-to-be-redesigned product. Testing the current system for usability will help you understand how users actually use it, and will indicate rough spots to focus on first.

The authors also provide this useful advice, which I quite like: (p. 215)
Testing new looks and navigation on your current site's regular users will almost always yield skewed results. As a rule, people dislike change. If a site is changed, many regular users will have something negative to say, even if the redesign is easier to use. Don't test solely on your existing audience.

Emphasis is mine. People dislike change. They will respond negatively to any change. So be cautious and include new users in your testing.

But how do you test usability? I've discussed several methods here before, including Heuristic Review, Prototype Test, Formal Usability Test, and Questionnaires. Similarly, Goto and Cotler recommend traditional usability testing, and suggest three general categories of testing: (p. 218)

  • Informal testing: May take place in the tester's work environment or another setting. Testers are co-workers or friends. Simple task list, observed and noted by a moderator.
  • Semiformal testing: May or may not take place in a formal test environment. Testers are pre-screened and selected. Moderator is usually a member of the team.
  • Formal testing: Takes place in a formal facility. Testers are pre-screened and selected. Scenario tasks, moderated by a human factors specialist; may also include a one-way mirror and video monitoring.

The authors also recommend building a task list that represents what real people actually would do, and writing a usability test plan (essentially, a brief document that describes your overall goals, typical users, and methodology). Goto and Cotler follow this with a discussion about screening test candidates, then conducting the session. I didn't see a discussion about how to write scenario tasks.

The role of the moderator is to remain neutral. When you introduce yourself as moderator, remind the tester that you will be a silent observer, and that you are testing the system, not them. Encourage the testers to "think aloud"—if they are looking for a "Print" button, they should say "I am looking for a 'Print' button." Don't describe the tasks in advance, and don't set expectations (such as "This is an easy task").

It can be hard to moderate a usability test, especially if you haven't done it before. You need to remain an observer; you cannot "rescue" a tester who seems stuck. Let them work it out for themselves. At the same time, if the tester has given up, you should move on to the next task.

Goto and Cotler recommend you compile and summarize data as you go; don't leave it all for the end. Think about your results while they are still fresh in your mind. The authors prefer a written report to summarize the usability test, showing the results of each test, problem areas, comments, and feedback. As a general outline, they suggest: (p. 231)
  1. Executive summary
  2. Methodology
  3. Results
  4. Findings and recommendations
  5. Appendices

This is fine, but I prefer to summarize usability test results via a heat map. This is a simple visual device that concisely displays test results in a colored grid. Scenario tasks are on rows, and testers are on columns. For each cell, use green to represent a task that was easy for the tester to accomplish, yellow to represent a more difficult task, orange for somewhat hard, red for very difficult, and black for tasks the tester could not figure out.
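
For illustration, here is a minimal sketch of how such a grid could be assembled (the task names, tester count, and the five-level rating scale below are my own choices, not from the book), with color names standing in for the colored cells:

```python
# Map a difficulty rating (1 = easy ... 5 = could not complete) to a color.
RATINGS = {1: "green", 2: "yellow", 3: "orange", 4: "red", 5: "black"}

def heat_map(results):
    """results: dict of scenario task -> list of per-tester ratings (1-5).
    Tasks are rows, testers are columns."""
    lines = []
    for task, scores in results.items():
        cells = "  ".join(f"{RATINGS[s]:<6}" for s in scores)
        lines.append(f"{task:<24} {cells}")
    return "\n".join(lines)

print(heat_map({
    "Print a document": [1, 1, 2],
    "Change font size": [2, 4, 5],  # the third tester could not figure this out
}))
```

In a report you would of course use actual colored cells, but even a text rendering like this makes the hot spots (rows full of red and black) jump out.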

Whatever your method, at the end of your test, you should identify those items that need immediate attention. Work on those, make improvements as suggested by the usability test, then test it again. The value in usability testing is to make it an iterative part of the design process: create a design, test it, update the design, test it again, repeat.

Save. Load. Reset. Shortcuts get new features.

In recent days, I have been working on implementing the functionality behind saving and loading custom shortcuts. It was important to define the logic of storing the default shortcuts, so that we have them accessible at any time, even if the user decides to change all of them.
Moreover, Pitivi's save logic can store the shortcuts in an external file, which holds the user's preferred settings. This file's contents can then be loaded and used to fill the Shortcuts Window, if the file exists. If it does not exist, the Shortcuts Manager class will simply use the default accelerator settings.

Secondly, I took a step forward in implementing the resetting functionality for the shortcuts. Once a user has customised shortcuts, she'll be able to reset them to Pitivi's factory settings with a single click of a button. The important point here is that the back-end functionality for this button has just been implemented, so we are ready to put it all together into some nice UI. Users will be able to reset either a particular action's accelerators, or reset them all with a single click.
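
As a rough sketch of the idea (this is not Pitivi's actual code; the JSON file format and the names here are my own assumptions), keeping the factory defaults separate from the user's file makes loading, saving and resetting straightforward:

```python
import json
import os
import tempfile

class ShortcutsManager:
    """Toy model of save/load/reset for keyboard shortcuts."""

    def __init__(self, config_path, defaults):
        self.config_path = config_path
        self.defaults = dict(defaults)   # factory settings, never mutated
        self.accels = dict(defaults)
        if os.path.exists(config_path):  # load the user's preferred settings
            with open(config_path) as f:
                self.accels.update(json.load(f))

    def save(self):
        with open(self.config_path, "w") as f:
            json.dump(self.accels, f)

    def reset(self, action=None):
        """Reset one action's accelerator, or all of them when action is None."""
        if action is None:
            self.accels = dict(self.defaults)
        else:
            self.accels[action] = self.defaults[action]

# demo with a made-up action name
path = os.path.join(tempfile.mkdtemp(), "shortcuts.json")
manager = ShortcutsManager(path, {"app.quit": "<Primary>q"})
manager.accels["app.quit"] = "<Primary>w"  # the user customises a shortcut
manager.save()                             # ...and it survives restarts
manager.reset("app.quit")                  # back to the factory setting
```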

Furthermore, I was able to practice unit testing a lot. For all the pieces of work - the save, load and reset functionality - I provided unit tests with fairly extensive use of the mock library. I learnt a lot through this; I have to admit that at the very beginning it was hard for me to understand what kind of tricks mock is actually doing behind the scenes. By now, however, all the work I have done in Pitivi is good-quality code (thanks, Alex, for the excellent reviews) supported by relevant tests.
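
To give a flavour of what mock buys you (a hypothetical example, not Pitivi's actual test suite): a mock object records how it was called, so logic like saving can be tested without touching the real filesystem:

```python
from unittest import mock

def save_shortcuts(accels, write_line):
    """Write each action's accelerator via the supplied writer callable."""
    for action, accel in sorted(accels.items()):
        write_line(f"{action}={accel}\n")

writer = mock.Mock()                     # stands in for a real file writer
save_shortcuts({"app.quit": "<Primary>q"}, writer)
writer.assert_called_once_with("app.quit=<Primary>q\n")  # the mock recorded the call
```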

So what is next?
Over the course of this week and the next, I will be concentrating primarily on the UI and bringing together all the little pieces of work I have done. Hopefully by the end of this week, I will be able to present an implemented and working UI for the Preferences shortcuts section, and also add the reset buttons to reset one or all shortcuts.

Bringing your kids to GUADEC 2016

If you’re coming to GUADEC 2016 and bringing your kids along, there’s a handy wiki page you can look at for tips on what to do while in Karlsruhe:

If you’re coming with small kids, I’ll bring along some toys, colors, and other essentials to the conference. To make that happen, put the age of your child in the wiki at the link above.

Going to GUADEC

See you there.

July 25, 2016

GNOME Keysign - Report #2 GSoC 2016

More than a week ago I blogged about the new GUI made with GtkBuilder and Glade [1].  Now, I will talk about what has changed since then with the GUI and also the new functionality that has been added to it.

I will start with the new "transition" page which I've added for the key download phase. Before going more in depth, I have to say that the app now knows at each moment what state it is in, which really helps in adding more functionality.

The transition page gives the user more feedback about the download status of a key; in the old gnome-keysign GUI, when the download was interrupted, the GUI didn't show anything. Now, the GUI is more explicit about the process:

If the download fails, the spinner widget stops and an error message is shown. If the download is successful, the app will auto-advance to the confirmation page and the user is presented with details for the key he's about to sign:

A few people noticed that I am displaying short key IDs in the GUI. I want to say that the entire key fingerprint is used when authenticating a downloaded key. The other info shown in the GUI is just key details that I'm getting from GnuPG and displaying in the interface.
Still, I will stop displaying the 8-character ID, because users may be influenced by it somehow.

Other changes that have been done since the last post were:

  • added a "Confirm" button to sign a key
  • added a transition phase for the key signing also
  • implement the "Refresh" keylist button
  • minor GUI adjustments
  • use logging instead of print
  • improve code quality

Apart from these, one major change is the GPG functionality added to the new GUI. The file made by Tobias acts as a common interface for whatever gpg libraries we'll use in the future. For now, you can test the new GUI with your own keys on the gpgmh branch [2]. This requires having the 'monkeysign' package installed [3].

In the following week I'll be adding the widgets for the QR code and the QR scanner, as well as making a simple script that will create a flatpak app.


on discoverability

i recently stumbled on a bunch of videos by clubinternet, exposing people who have never used a smartphone to google. their task was to search for photos of their favorite actress. you'd guess there are not many products out there which are easier to use than a google search box. well, watch this:

while i can't deny a slightly humorous touch, this video has troubled me. touch interfaces have improved drastically in recent years, and even allow non-tech savvy people to successfully interact with digital devices. nevertheless i always felt that they are not the goose that lays golden eggs. you see, we are actually just moving objects below a screen made of glass. what other object in the world behaves like this? i am of the opinion that there has to be a better way to interact with devices. in the words of bret victor:

I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.

Is that so bad, to dump the tactile for the visual? Try this: close your eyes and tie your shoelaces. No problem at all, right? Now, how well do you think you could tie your shoes if your arm was asleep? Or even if your fingers were numb? When working with our hands, touch does the driving, and vision helps out from the back seat.

Pictures Under Glass is an interaction paradigm of permanent numbness. It denies our hands what they do best. And yet, it's the star player in every Vision Of The Future.

To me, claiming that Pictures Under Glass is the future of interaction is like claiming that black-and-white is the future of photography. It's obviously a transitional technology. And the sooner we transition, the better.

but this is not the only problem touch interfaces have. maybe it is because of the way we move objects below a screen of glass, maybe it is because a screen does not give us tactile feedback, and maybe we need something completely different. but touch interfaces lack discoverability. like almost all digital products of today's time and age. interaction elements are concealed in the user interface, buttons are disguised in text, input fields are not obviously marked as such and interaction elements don't give feedback. we probably can tell what elements we can interact with based on our experience, but there is no way to tell just by looking at the screen. this issue is amazingly well summarized by don norman and bruce tognazzini:

Today’s devices lack discoverability: There is no way to discover what operations are possible just by looking at the screen. Do you swipe left or right, up or down, with one finger, two, or even as many as five? Do you swipe or tap, and if you tap is it a single tap or double? Is that text on the screen really text or is it a critically important button disguised as text? So often, the user has to try touching everything on the screen just to find out what are actually touchable objects.

the truth is this: if there is no way to discover what operations are possible just by looking at the screen and the interaction is numbed with no feedback by the devices, what's left? the interaction gets reduced to experience and familiarity where we only rely on readily transferred, existing skills.

with our digital products we are building environments, not just tools. yet we often think only about the different tasks inside our product. we have to view our products in the context of how and where they are being used. our products are more than just static visual traits, so let's start to see them as such.

July 23, 2016

Need for PC - The plan

I realized quite some time ago that my PC is struggling to keep up with the pace, so I have decided that it is time for an upgrade (after almost 6 years with my Dell Inspiron 560 minitower with a C2D Q8300 quad-core).

I have "upgraded" the video card a couple of months ago due to the old one not supporting OpenGL3.2 needed by GtkGLArea. First I went with an ATI Radeon HD6770 I received from my gamer brother, but it was loud and I did not use it as much as it's worth using, with a high cost (108W TDP, bumped the consumption of the idle PC by 30-40W from 70-80W to 110-120W), so I have traded it for another one: a low-consumption (Passively cooled - 25W TDP) Ati Radeon HD4550 working well with Linux and all my Steam games whenever I am gaming (casual gamer). Consumption went back to 90-100W.

After that came the power supply: I replaced the Dell-provided 300W supply with a more efficient one, a 330W Seasonic SS-330HB. This resulted in another 20W drop in power consumption, idling below 70W.

The processor is fairly old, with a 95W TDP but performance way below today's i7 processors with the same TDP, so it might be worth upgrading. That means a motherboard + CPU + cooler + memory upgrade. As I have the rest of the components, I will reuse them, and add a new (old) case to the equation: a PowerMac G5 from around 2004.

So here's the basic plan:
Case - PowerMac G5 modded for mATX compatibility, and repainted - metallic silver the outer case, matt black the inner case - inspired by Mike 7060's G5 Mod
CPU - Intel core i7 6700T - 35W TDP
Cooler - Arctic Alpine 11 Plus - silent, bigger brother of the fanless Arctic Alpine 11 Passive (rated for up to 35 W TDP; with the i7 6700T being right at the edge, I did not want to risk it)
MotherBoard - 1151 socket, DDR4, USB3, 4-pin CPU and case fan controller socket, HDMI and DVI video outs being the requirements - I chose the MSI B150M Mortar because of guaranteed Linux compatibility (thanks Phoronix), 2 onboard PWM case fan controllers + PWM controlled CPU fan
Memory - 2x8GB DDR4 Kit - Kingston Hyperx Fury
PSU - Seasonic SS-330HB mounted inside the G5 PSU case, original G5 PSU fans replaced with 2x 60mm Scythe Mini Kaze for silent operation
Case Cooling - Front 2x 92mm - Arctic F9 PWM PST in the original mounts

Video card - Onboard Intel or optional ATI Radeon HD4550 if (probably will not happen) the onboard will not be enough
Optical drive (not sure if it is required) - start with existing DVD-RW drive
Storage - 120 GB Kingston V300 + 1TB HDD - existing

Plans for later
(later/optional) update optical drive to a Blu-Ray drive
(later/optional)  Arctic F9 PWM PST, in the original G5 intake mounts or 120 mm Arctic F12 PWM PST in new intake mounts.

I'll soon be back with details on preparing the case, probably the hardest part of the whole build. The new parts are already ordered (the CPU was pretty hard to find in stock, and will be delivered in a week or so instead of the usual 1-2 days).

July 22, 2016

2016-07-22 Friday.

  • Up early, chat with Eloy, Tor, ownCloud Webinar practice with Lenny, Snezana & Cornelius.

July 21, 2016

The new Keyboard panel

After implementing the new redesigned Shell of GNOME Control Center, it’s now time to move the panels to a bright new future. And the Keyboard panel has just taken this step.

After Allan gave his usual show with new mockups (didn’t you see? Check it here), I got to work on the panel. Check this out:

The new Keyboard panel

Working on this panel led me to a few conclusions:

  • The new programming tools and facilities that Gtk+ and GLib landed make a huge difference in code legibility and code quality.
  • GObject is awesome. Really.
  • Since GObject is awesome, let's use all the functionality it gives us for free :)
  • I tend to overdocument my code.

And our beloved set of sequential pictures and music:


Excited? This is still under heavy development, and we just started the reviews. You can check the current state here, or test the wip/gbsneto/new-keyboard-panel branch. As always, share your comments, ideas and suggestions!

2016-07-21 Thursday.

  • J's birthday - presents at breakfast; dug through the mountainous admin-queue, synched with Andras & Kendy.

The state of gamepad support in Games

Gamepad support has now been merged into GNOME Games v3.21.4!!! This means that you can play your favorite retro games using a gamepad!!!

Which gamepads are supported?

But you may be wondering which gamepads are supported out of the box. The answer is: a lot of them! We use the SDL mappings format to map your gamepad to a standard gamepad (by this I mean a seventh-generation/XBox-360 kind of gamepad). And we use a huge community-maintained database of mappings, so your device is most likely there. We use a slightly modified version of this database. See #94 and #95 for more details.

Custom mappings?

Well, I just realized while writing this post that we had forgotten about this :sweat_smile:. But I have made a PR for it, so it should get merged soon. As of now there is no GUI for it. Currently you can use Steam or the SDL test/controllermap tool to generate a custom mapping string as described here. Then you should paste it into a file in the user’s config directory. As per this PR, this file is <config_dir>/gnome-games/gamecontrollerdb.txt (<config_dir> is mostly ~/.config).
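
For reference, each line in that file is one SDL mapping string; it looks roughly like this (`<guid>` is a placeholder for your device's GUID as reported by the controllermap tool, and the button/axis numbers vary per device):

```text
# <guid>,human-readable name,control:binding pairs...,platform
<guid>,Example Gamepad,a:b0,b:b1,x:b2,y:b3,start:b9,leftx:a0,lefty:a1,platform:Linux,
```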

Multiplayer support

Multiplayer games are quite well supported. As of now there is no GUI for reassigning gamepads to other players, but the default behaviour is quite predictable. Just plug in the gamepads in the order of the players and all will be well.

The exact behaviour is this:

  • the first player with no gamepad will be assigned the keyboard
  • if there are N initially plugged-in gamepads, then they are assigned to the first N players and keyboard is assigned to player N + 1
  • when a gamepad is plugged in, it is assigned to the first player with no gamepad (which may not be the last player), and it can replace the keyboard
  • when a gamepad is unplugged, its player shouldn’t have any gamepad assigned anymore, but the players to which other gamepads are assigned shouldn’t change
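
The rules above can be sketched like this (my own illustrative model, not Games' actual code): pads keep their player slots, and the keyboard floats to the first player left without a pad.

```python
KEYBOARD = "keyboard"

def plug(pads, pad):
    """A new pad fills the first free slot, i.e. the first player with no gamepad."""
    for i, p in enumerate(pads):
        if p is None:
            pads[i] = pad
            return pads
    pads.append(pad)
    return pads

def unplug(pads, pad):
    """Only the owner loses a device; other players keep their pads."""
    pads[pads.index(pad)] = None
    return pads

def assignments(pads, n_players):
    """Player i gets pads[i]; the keyboard goes to the first player without a pad."""
    out, keyboard_given = [], False
    for i in range(n_players):
        pad = pads[i] if i < len(pads) else None
        if pad is None and not keyboard_given:
            out.append(KEYBOARD)
            keyboard_given = True
        else:
            out.append(pad)
    return out

pads = plug(plug([], "pad A"), "pad B")
print(assignments(pads, 3))  # pad A, pad B, then the keyboard for player 3
unplug(pads, "pad A")
print(assignments(pads, 3))  # keyboard replaces pad A; pad B stays with player 2
```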

Next steps

The next steps involve adding a UI to remap the gamepads assigned to the players, and then maybe a UI for remapping the controls if time permits.

Happy gaming!

GSoC progress part #3

My last week has been quite busy, but it all paid off in the end as I’ve managed to overcome the issue that I had with the login phase. Thankfully, I was able to take a look at how the postMessage() API is used to do the login in Firefox iOS and implement it myself in Epiphany.

To summarize it, this is how it’s done:

  1. Load the FxA iframe with the service=sync parameter in a WebKitWebView.
  2. Inject a few JavaScript lines to listen to FirefoxAccountsCommand events (sent by the FxA Server). This is done with a WebKitUserContentManager and a WebKitUserScript.
  3. In the event listener, use postMessage() to send back to WebKit the data received from the server.
  4. In the C code, register a script message handler with a callback that gets called whenever something is sent through the postMessage() channel. This is done with webkit_user_content_manager_register_script_message_handler().
  5. In the callback you now hold the server’s response to your request. This includes all the tokens you need to retrieve the sync keys.
  6. Profit!

Basically, postMessage() acts like a forwarder between JavaScript and WebKit. Cool!

With this new sign-in method, users can also benefit from the possibility to create new Firefox accounts. The iframe contains a “Create an account” link that shows a form with which users can create a new account. The user will have to verify the account before signing in.

Using modern gettext

gettext has seen quite some enhancements in recent years, after Daiki Ueno started maintaining it. It can now extract (and merge back) strings from diverse file formats, including many of the formats that are important for desktop applications. With gettext 0.19.8, there is really no need  anymore to use intltool or GLib’s dated gettext glue (AM_GLIB_GNU_GETTEXT and glib-gettextize).

Since intltool still sticks around in quite a few projects, I thought that I should perhaps explain some of the dos and don’ts for how to get by with plain gettext. Javier Jardon has been tirelessly fighting a battle for using upstream gettext; maybe this will help him reach the finish line.

Extracting strings

xgettext is the tool used to extract strings from sources into .pot files.

In addition to programming languages such as C, C++, Java, Scheme, etc, it recognizes the following files by their typical file extensions (and it is smart enough to disregard a .in extension):

    • Desktop files: .desktop
    • GSettings schemas: .gschema.xml
    • GtkBuilder ui files: .ui
    • Appdata files: .appdata.xml and .metainfo.xml

You can just add these files to POTFILES.in, without the extra type hints that intltool requires.

One important advantage of xgettext’s xml support, compared to intltool, is that you can install .in files that are valid XML; no more tag mutilation like <_description> required.

Merging translations

The trickier part is merging translations back into the various file formats. Sometimes that is not necessary, since the file has a reference to the gettext domain, and consumers know to use gettext at runtime: that is the case for GSettings schemas and GtkBuilder ui files, for example.

But in other cases, the translations need to be merged back into the original file before installing it. In these cases, the original file from which the strings are extracted often has an extra .in extension. The tool that does this task is msgfmt.

Intltool installs autotools glue which can define make rules for many of these cases, such as @INTLTOOL_DESKTOP_RULE@. Gettext does not provide this kind of glue, but the msgfmt tool is versatile enough that you can write your own rules fairly easily, for example:

        %.desktop: %.desktop.in
                msgfmt --desktop -d $(top_srcdir)/po \
                       --template $< -o $@

Extending gettext

Gettext can be extended to understand new xml formats. To do so, you install .its and .loc files. The syntax for these files is explained in detail in the gettext docs. Libraries are expected to install these files for ‘their’ formats (GLib and GTK+ already do, and PolicyKit will do the same soon).

If you don’t want to wait for your favorite format to come with built-in its support, you can also include the .its files with your application; gettext will look for such files in $XDG_DATA_DIRS/gettext/its/.
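
For a flavour of what such a rules file looks like: it is a small XML document using the W3C ITS 2.0 vocabulary. The sketch below is a hypothetical example for a made-up format (element names invented for illustration); check the gettext docs for the exact requirements and for the accompanying .loc file.

```xml
<?xml version="1.0"?>
<its:rules xmlns:its="http://www.w3.org/2005/11/its" version="2.0">
  <!-- nothing is translatable by default... -->
  <its:translateRule selector="/myformat" translate="no"/>
  <!-- ...except the <name> and <description> elements -->
  <its:translateRule selector="//name | //description" translate="yes"/>
</its:rules>
```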


Writing an ebook about usability?

I wrote more about this on my Coaching Buttons blog: I'm thinking about writing an ebook. Actually, it's several ebooks. But the one that applies here is Open Source Usability.

Open Source Usability
In short, this would be an ebook about how to examine and improve the usability in free software and open source software. While many books exist about usability, none specifically focus on free and open source software. Intended for developers, this ebook will discuss what usability is, different ways to examine usability, how to measure usability, and ways to integrate usability test results back into the development stream to improve the next versions of the software.

Most of the material would likely be based on what I write here in my blog, but condensed and easier to learn from.

I might release this for free. If I publish through Amazon, the Apple iTunes Bookstore, or Google's Play Bookstore, I think I'd have to set the price at the minimum level, probably $1.

But writing an ebook is a lot of work. I'm not sure if people would really be that interested in an ebook like this. Would you read an ebook about Open Source Usability? Please let me know in the comments.

July 20, 2016

BOF session of Nautilus – GUADEC


As the title says, we will have a discussion/BoF session about Nautilus at GUADEC.

As you may know, discussing on the internet is rarely a great experience, with the big disadvantage that influencing people over it doesn’t work well. From a developer's point of view, I don’t know who I am talking with and why, so in projects where discussions are a daily experience it is difficult to know what we should put at the top of the priority list.

The small hackfest is going to be focused more on the philosophical side, with specific actionable items for the future.

We will talk and discuss about what we have done wrong in the past; what users are missing, like dual panel and type-ahead search, why they are missing those features, and how we can improve those use cases; we will also talk about the transition from Nautilus 2 to Nautilus 3 and what we can learn from it in order to make changes a smoother experience.

The program is here. I will gladly add anything people would like to talk about.

If you ever wanted to influence Nautilus, this is your opportunity, come to GUADEC.

Do not use the comment sections to discuss these topics :) just grab your backpack and come to GUADEC.

libinput is done

Don't panic. Of course it isn't. Stop typing that angry letter to the editor and read on. I just picked that title because it's clickbait and these days that's all that matters, right?

With the release of libinput 1.4 and the newest feature to add tablet pad mode switching, we've now finished the TODO list we had when libinput was first conceived. Let's see what we have in libinput right now:

  • keyboard support (actually quite boring)
  • touchscreen support (actually quite boring too)
  • support for mice, including middle button emulation where needed
  • support for trackballs including the ability to use them rotated and to use button-based scrolling
  • touchpad support, most notably:
    • proper multitouch support on touchpads [1]
    • two-finger scrolling and edge scrolling
    • tapping, tap-to-drag and drag-lock (all configurable)
    • pinch and swipe gestures
    • built-in palm and thumb detection
    • smart disable-while-typing without the need for an external process like syndaemon
    • more predictable touchpad behaviours because everything is based on physical units [2]
    • a proper API to allow for kinetic scrolling on a per-widget basis
  • tracksticks work with middle button scrolling and communicate with the touchpad where needed
  • tablet support, most notably:
    • each tool is a separate entity with its own capabilities
    • the pad itself is a separate entity with its own capabilities and events
    • mode switching is exported by the libinput API and should work consistently across callers
  • a way to identify if multiple kernel devices belong to the same physical device (libinput device groups)
  • a reliable test suite
  • Documentation!

The side effect of libinput is that we are also trying to fix the rest of the stack where appropriate. Mostly this has meant pushing stuff into systemd/udev so far, with the odd kernel fix as well. Specifically, the udev bits mean that we:
  • know the DPI density of a mouse
  • know whether a touchpad is internal or external
  • fix up incorrect axis ranges on absolute devices (mostly touchpads)
  • try to set the trackstick sensitivity to something sensible
  • know when the wheel click is less/more than the default 15 degrees

And of course, the whole point of libinput is that it can be used from any Wayland compositor and take away most of the effort of implementing an input stack. GNOME, KDE and Enlightenment already use libinput, and so does Canonical's Mir. And some distributions use libinput as the default driver in X through xf86-input-libinput (Fedora 22 was the first to do this). So overall libinput is already quite a success.

The hard work doesn't stop of course, there are still plenty of areas where we need to be better. And of course, new features come as HW manufacturers bring out new hardware. I already have touch arbitration on my todo list. But it's nice to wave at this big milestone as we pass it on the way to the glorious future of perfect, bug-free input. At this point, I'd like to extend my thanks to all our contributors: Andreas Pokorny, Benjamin Tissoires, Caibin Chen, Carlos Garnacho, Carlos Olmedo Escobar, David Herrmann, Derek Foreman, Eric Engestrom, Friedrich Schöller, Gilles Dartiguelongue, Hans de Goede, Jackie Huang, Jan Alexander Steffens (heftig), Jan Engelhardt, Jason Gerecke, Jasper St. Pierre, Jon A. Cruz, Jonas Ådahl, JoonCheol Park, Kristian Høgsberg, Krzysztof A. Sobiecki, Marek Chalupa, Olivier Blin, Olivier Fourdan, Peter Frühberger, Peter Hutterer, Peter Korsgaard, Stephen Chandler Paul, Thomas Hindoe Paaboel Andersen, Tomi Leppänen, U. Artie Eoff, Velimir Lisec.

Finally: libinput was started by Jonas Ådahl in late 2013, so it's already over 2.5 years old. The git log shows we're approaching 2000 commits, and a simple LOC count says over 60000 lines of code. I would also like to point out that the vast majority of commits were done by Red Hat employees; I've been working on it pretty much full-time since 2014 [3]. libinput is another example of Red Hat putting money, time and effort into the less press-worthy plumbing layers that keep our systems running. [4]

[1] Ironically, that's also the biggest cause of bugs because touchpads are terrible. synaptics still only does single-finger with a bit of icing and on bad touchpads that often papers over hardware issues. We now do that in libinput for affected hardware too.
[2] The synaptics driver uses absolute numbers, mostly based on the axis ranges for Synaptics touchpads making them unpredictable or at least different on other touchpads.
[3] Coincidentally, if you see someone suggesting that input is easy and you can "just do $foo", their assumptions may not match reality
[4] No, Red Hat did not require me to add this. I can pretty much write what I want in this blog and these opinions are my own anyway and don't necessarily reflect Red Hat, yadi yadi ya. The fact that I felt I had to add this footnote to counteract whatever wild conspiracy comes up next is depressing enough.

July 19, 2016

Cosimo in BJGUG

Last month Cosimo came to Beijing, and we had a meetup with the Beijing GNOME User Group and the Beijing Linux User Group in the SUSE office. Cosimo presented ‘Looking ahead to GNOME 3.22 and beyond’; flatpak drew lots of attention. Here I'll just share some photos. Thanks, Cosimo, for coming!


Martin's child; Martin is from BLUG

Niclas Hedhman, from the Apache Software Foundation

GUADEC Flatpak contest

I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.


To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed before (an application, runtime or extension), hosted in a public repository.

You will have to send me an email with the location of that repository.

I will choose a winner amongst the participants on the eve of the lightning talks, based on criteria including, but not limited to, the difficulty of packaging, the popularity of the software packaged, and its redistribution potential.

You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.


The prize: a piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

Good luck to one and all!

Automatic decompression of archives

With extraction support in Nautilus, the next feature that I’ve implemented as part of my project is automatic decompression. While the name is a bit fancy, this feature is just about extracting archives instead of opening them in an archive manager. From the UI perspective, this only means a little change in the context menu:


Notice that only the “Open With <default_application>” menu item gets replaced. Archives can still be opened in other applications.

Now why would you want to do this? The reason behind it is to reduce the need to work with files in a compressed format. Instead of opening the archive in a different application to take out some files, you can extract it all from the start and then interact with the files straight away from the file manager. Once the files are on your system, you can do anything that you could have done from an archive manager and more. For example, if you only wanted to extract a few files, you can just remove the rest.

One could argue that extracting an archive could take much longer than just opening it in an archive manager. While this can be true for very large compressed files, most of the time the process takes only a few seconds – about the same time it takes to open a different application. Moreover, if you just want to open a file inside the archive manager, the manager will first extract it to your disk anyway.

This might be a minor change in terms of code and added functionality, but it is quite important when it comes to how we interact with compressed files. For users that are not fond of it, we decided to add a preference for disabling automatic decompression.


That’s pretty much it for extraction in Nautilus. Right now I’m polishing compression, so I’ll see you in my next post where I talk about it! As always, feedback, suggestions and ideas are much appreciated :)

REMINDER! systemd.conf 2016 CfP Ends in Two Weeks!

Please note that the systemd.conf 2016 Call for Participation ends in less than two weeks, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are interested in yours even more!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions that may cover any systemd-related topic you'd like. We are interested in submissions both from the developer community and from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

And keep in mind:

REMINDER: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

July 18, 2016

Builder Happenings

Over the last couple of weeks I’ve started implementing Run support for Builder. This is somewhat tricky business since we care about complicated matters. Everything from autotools support to profiler/debugger integration to flatpak and jhbuild runtime support. Each of these complicates and contorts the problem in various directions.

Discovering the “target binary” in your project via autotools is no easy task. We have a few clever tricks but more are needed. One thing we don’t support yet is discovering bin_SCRIPTS. So launching targets like python or gjs applications is not ready yet. Once we discover .desktop files that should start working. Pretty much any other build system would make this easier to implement.

So much work behind the scenes for a cute little button.

Screenshot from 2016-07-17 22-09-21

While running, you can click stop to kill the spawned application’s process group.

Screenshot from 2016-07-17 22-10-38

But that is not all that is going on. Matthew Leeds has been working on search and replace recently (as well as a whole bunch of paper cuts). That work has landed and it is looking really good!

Screenshot from 2016-07-17 22-12-45

Also exciting is that, thanks to Sebastien Lafargue, we have a fancy new color picker that integrates with Builder’s source editor. You can visualize colors in your source files and change them using the dropper or numerous sliders based on how you’d like to tweak the color.

Screenshot from 2016-07-17 22-11-38

I still have a bunch of things I want to land before GUADEC, so back to the hacker den for me.

July 17, 2016

Extraction support in Nautilus

The first feature added to Nautilus as part of my project is support for extracting compressed files. They can be extracted to the current directory or to any other location. The actions are available in the context menu:

Now you might be wondering: why add these to Nautilus if they look exactly the same as file-roller’s extension? Well, handling extraction internally comes with a few changes:

  • improved progress feedback, integrated into the system used by the rest of the operations
  • fine-grained control over the operation, including conflict situations which are now handled using Nautilus’ dialogs
  • and, probably the most important change, extracting files in a way that avoids cluttering the user’s workspace. No matter what the archive’s contents are, they will always be placed in a single file or top-level folder – I’ll elaborate on that in a moment.




As I mentioned in my first post, the goal of this project is to simplify working with archives, and creating just one top-level item as a result of an extraction really reduces complexity. It is done in a pretty simple way:

  • if the archive has a single root element whose base name matches the archive's (for example, image.zip containing image.png or a folder named image/), the root element is extracted as is
  • if the root element has a different name, or the archive has multiple top-level elements, they are extracted into a folder with the same name as the archive, without its extension

As a result, the output will always have the name of the source archive, making it easy to find after an extraction. Also, the maximum number of conflicts an extraction can have is just one, the output itself. Hurray, no more need to go through a thousand dialogs!
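The naming rule above can be sketched in Python. This is a hedged illustration of the behavior as described, not Nautilus' actual C implementation; the function and parameter names are mine:

```python
import os

def extraction_target(archive_name, top_level_entries):
    """Pick the single top-level output for an extraction.

    Mirrors the rule described above: if the archive's only root
    element shares the archive's base name, keep it as is; otherwise
    extract everything into a folder named after the archive
    (without its extension). Directory entries end with "/".
    """
    stem = os.path.splitext(archive_name)[0]
    if len(top_level_entries) == 1:
        entry = top_level_entries[0]
        # compare base names, ignoring a trailing "/" and any extension
        if os.path.splitext(entry.rstrip("/"))[0] == stem:
            return entry
    # multiple roots, or a differently named root
    return stem + "/"
```

The sketch assumes simple single-suffix archive names; real-world names like `.tar.gz` would need extra handling.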

If you have any suggestions or ideas on how to improve this operation, feel free to drop a comment! Feedback is also much appreciated :) See you in the next one!

July 16, 2016

Getting a network trace from a single application

I recently wanted a way to get a network packet trace from a specific application. My googling showed me an old askubuntu thread that solved this by using Linux network namespaces.

You create a new network namespace that is isolated from your regular network, then use a virtual network interface pair and iptables to make the traffic from it reach your regular network. Then you start the application and Wireshark in that namespace, and you have a trace of just that application.
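The setup boils down to a short sequence of commands. A sketch of that sequence follows; the namespace, interface names and the `10.200.1.0/24` subnet are illustrative choices of mine, not anything the approach mandates (all commands need root):

```python
def netns_trace_setup(ns="trace", out_if="eth0",
                      veth="veth0", vpeer="vpeer0", net="10.200.1"):
    """Return the shell commands that build an isolated network
    namespace whose traffic is NATed out through out_if."""
    return [
        f"ip netns add {ns}",
        # a veth pair: one end stays outside, the peer moves inside
        f"ip link add {veth} type veth peer name {vpeer}",
        f"ip link set {vpeer} netns {ns}",
        f"ip addr add {net}.1/24 dev {veth}",
        f"ip link set {veth} up",
        f"ip netns exec {ns} ip addr add {net}.2/24 dev {vpeer}",
        f"ip netns exec {ns} ip link set {vpeer} up",
        f"ip netns exec {ns} ip route add default via {net}.1",
        # let namespace traffic reach the outside world
        "sysctl -w net.ipv4.ip_forward=1",
        f"iptables -t nat -A POSTROUTING -s {net}.0/24 -o {out_if} -j MASQUERADE",
    ]
```

After this, `ip netns exec trace <program>` runs the program inside the namespace, where a capture tool sees only its traffic.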

I took that idea and made it into a small program, hosted on github, nsntrace.

> nsntrace
usage: nsntrace [-o file] [-d device] [-u username] PROG [ARGS]
Perform network trace of a single process by using network namespaces.

-o file send trace output to file (default nsntrace.pcap)
-d device the network device to trace
-u username run PROG as username 

It does pretty much the same as the askubuntu thread above describes, but in just one step.

> sudo nsntrace -d eth1 wget
Starting network trace of 'wget' on interface eth1.
Your IP address in this trace is
Use ctrl-c to end at any time.

--2016-07-15 12:12:17--
Location: [following]
--2016-07-15 12:12:17--
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html [ <=> ] 10.72K --.-KB/s in 0.001s

2016-07-15 12:12:17 (15.3 MB/s) - ‘index.html’ saved [10980]

Finished capturing 42 packets.

> tshark -r nsntrace.pcap -Y 'http.response or http.request'
16 0.998839 -> HTTP 229 GET HTTP/1.1
20 1.010671 -> HTTP 324 HTTP/1.1 302 Moved Temporarily (text/html)
22 1.010898 -> HTTP 263 GET HTTP/1.1
31 1.051006 -> HTTP 71 HTTP/1.1 200 OK (text/html)

If it is something you might have use for or find interesting, please check it out, and help out with patches. It turns out I have a lot to learn about networking and networking code.

All the best!

Maps and tiles

Hello all!

Right now we are having infrastructural problems with Maps. We can no longer use the MapQuest tiles; see the mail thread from the maps list archive here for more information.

We are working on getting past this and getting a working Maps application released soon. But this also shows us more clearly that we need to get a better grip on the tiles infrastructure if we want to have a Maps application and/or a map infrastructure in GNOME. We are having good discussions, and I think we will get through this with a nice plan forward to prevent things like this from happening again. And also with a plan to do better in the future and do cooler stuff with tiles.

All the best!

Generic C++ GObject signals wrapper

Recently I've discovered that connecting to signals in gstreamermm can be really inconvenient. The problem doesn't exist in the other mm libraries, because most of the classes and their signals are wrapped.
But GStreamer allows creating user-defined elements, so it's actually impossible to wrap everything in gstreamermm (for now, the library supports wrappers for the GStreamer core and base plugins).

Currently, if you want to connect to a signal in gstreamermm, you have two options:
  1. Using the pure C API:

    auto typefind = Gst::ElementFactory::create_element("typefind");

    g_signal_connect (typefind->gobj(), "have-type",
                      G_CALLBACK (cb_typefind_havetype),
                      (gpointer *)typefind->gobj());

    static void cb_typefind_havetype (GstTypeFindElement *typefind,
                                      guint probability,
                                      GstCaps *caps,
                                      gpointer user_data)
    {
      // callback implementation
    }

    Well, it's not very bad. But... you have to use C structures in the callback instead of C++ wrappers.

  2. Using the gstreamermm API. As I mentioned, gstreamermm provides wrappers for the core and base plugins, so some of the elements (and their signals) are already wrapped in the library:

    auto typefind = Gst::TypeFindElement::create();
    typefind->signal_have_type().connect(
      [] (guint probability, const Glib::RefPtr<Gst::Caps>& caps)
      {
        // callback implementation
      });
However, many plugins are not wrapped (and never will be), so usually you need to either write a wrapper for the element you want to use (and then maintain this wrapper as well), or use the pure C API.
Moreover, I'm going to remove the plugin API in the next release [1], so users won't be able to use the gstreamermm API even for the base and core plugins. I was wondering if it would be possible to write a generic wrapper for GObject signals. So... there you are! The solution is not perfect yet, and I haven't tested it much, but so far it works fine with a few plugins and signals.

namespace Glib
{
  template <typename T>
  static constexpr T wrap (T v, bool=true)
  {
    return v;
  }

  template <typename T>
  static constexpr T unwrap (T v, bool=true)
  {
    return v;
  }

  template<typename T>
  using unwrapped_t = decltype(unwrap(*((typename std::remove_reference<T>::type*)nullptr)));

  template<typename T>
  constexpr T return_helper()
  {
    typedef unwrapped_t<T> Ret;
    return Ret();
  }

  template<>
  constexpr void return_helper<void>()
  {
    return void();
  }
}

template<typename T>
class signal_callback;

template<typename Ret, typename ...T>
class signal_callback<Ret(T...)>
{
  template<typename ...Args>
  static auto callback(void* self, Args ...v)
  {
    using Glib::wrap;
    typedef sigc::slot< void, decltype(wrap(v))... > SlotType;

    // the last argument carries the connection data pointer
    void* data = std::get<sizeof...(Args)-1>(std::tuple<Args...>(v...));

    // Do not try to call a signal on a disassociated wrapper.
    if (dynamic_cast<Glib::Object*>(Glib::ObjectBase::_get_current_wrapper((GObject*) self)))
    {
      if (sigc::slot_base *const slot = Glib::SignalProxyNormal::data_to_slot(data))
        (*static_cast<SlotType*>(slot))(wrap(std::forward<Args>(v), true)...);
    }

    return Glib::return_helper<Ret>();
  }

public:
  auto operator()(const std::string& signal_name, const Glib::RefPtr<Glib::Object>& obj)
  {
    using Glib::unwrap;
    static std::map<std::pair<GType, std::string>, Glib::SignalProxyInfo> signal_infos;

    auto key = std::make_pair(G_TYPE_FROM_INSTANCE (obj->gobj()), signal_name);
    if (signal_infos.find(key) == signal_infos.end())
    {
      signal_infos[key] = {
        signal_name.c_str(),
        (GCallback) &callback<Glib::unwrapped_t<T>..., void*>,
        (GCallback) &callback<Glib::unwrapped_t<T>..., void*>
      };
    }

    return Glib::SignalProxy<Ret, T... >(obj.operator->(), &signal_infos[key]);
  }
};

auto typefind = Gst::ElementFactory::create_element("typefind");
signal_callback<void(guint, const Glib::RefPtr<Gst::Caps>&)> signal_wrapper;

signal_wrapper("have-type", typefind).connect(
  [&ready, &cv] (guint probability, const Glib::RefPtr<Gst::Caps>& caps) {
    std::cout << "have-type: probability=" << probability << std::endl;
    Gst::Structure structure = caps->get_structure(0);
    const Glib::ustring mime_type = structure.get_name();
    std::cout << "have-type: mime_type=" << mime_type << std::endl;

    structure.foreach([] (const Glib::QueryQuark& id, const Glib::ValueBase& value) {
      const Glib::ustring str_id = id;
      gchar* str_value = g_strdup_value_contents(value.gobj());
      std::cout << "Structure field: id=" << str_id << ", value=" << str_value << std::endl;
      g_free(str_value);
      return true;
    });
  });

Full source of the code can be found on GitHub [2].
As you see, you still have to know the type of the callback, but at least you can use gstreamermm C++ classes.
There are a couple of things left to do in this code, like getting the last parameter from the list in a more efficient way than through the tuple, etc.
I don't feel it is stable enough to integrate with gstreamermm yet, but I'll probably do that in the future. We could even use it internally in glibmm to reduce the amount of generated code.


I’m going to GUADEC!

Hey people – I will be coming to GUADEC!

I am excited to be a speaker this time. So this will happen:

  • We will have a workshop about contributing to open source – this is for you! For all the newcomers struggling to get started.
  • I will be able to hold a talk about how to grow an open source community – I have spent a lot of time on this one and people started asking me questions about it.
  • We will have the awesome GSoC lightning talks! The admin team is already working with the students to get you an interesting session about the latest news from our GSoC students – and possibly the first time they’re on stage!
  • If nobody keeps me from it, I will attempt explaining coala again and what happened to it since the last GUADEC. There has been lots of changes, companies and OS projects started using it productively and I would *love* to work together with you to improve code quality in GNOME.

Many thanks to the GNOME Foundation for providing travel as well as accommodation for me! I look forward to meeting you all again!

GNOME Logs GSoC Progress Report

Hello everyone,

I will be describing the progress made on the GNOME Logs GSoC project over the previous weeks. The search popover is complete, but the patches related to it are yet to be merged, which I hope will happen in the coming week. In this post, I will describe the features I implemented in the search popover. If you want a brief recap of the search popover, please see my earlier blog post about it.

The implemented search popover looks like this when the drop-down button beside the search bar is clicked:


Clicking on the button next to the what label opens up this treeview, which allows us to select the journal parameters in which to search the user-entered text:

popover-select-parameter       popover-select-parameter-2

If we select any particular field from the treeview, it shows an option to select the search type:


The search type option is hidden when “All Available Fields” is selected, as an exact search doesn’t make sense in that case. So, only the substring search type is available by default there.

Clicking on the button next to the when label shows this treeview, from which we select the journal range filters:


If we select the “Set Custom Range” option, we go to another submenu:


It allows us to set a custom range for the journal, including the starting and ending date and time. Clicking on either of the select date buttons shows this calendar, from which we can select the date:


Clicking on either of the select time buttons shows these time selection spinboxes:

12-hour format
24-hour format

The time spinboxes change depending upon the time format used in the current timezone.

This was all about the search popover as currently implemented. From next week, I will be working on writing a search provider for GNOME Logs, which can be used to query journal entries according to the text entered in the GNOME Shell search bar. I will also be working on getting the search popover patches merged into the master branch of GNOME Logs. After all the patches related to the search popover are merged, I will make a video of how the search popover works. I would like to thank my mentor David King for guiding me in the right direction and helping me get my patches reviewed and merged.

My next post will be about the search provider for GNOME Logs coming next week.

So, stay tuned and Happy Coding :)

July 15, 2016

Progress so far

As things are beginning to take shape, it's high time for a progress update! If you aren't aware of what my project consists of, be sure to check my introductory post first. In short, the main focus of the project is wrapping the map horizontally in Champlain in order to ensure smoother panning.

So far a few bugs regarding coordinate translation have been fixed so as to have clear ground to work on. Other minor fixes concerned dragging markers between map clones and the zooming animation behaving weirdly.

The main challenge was making the clones responsive to mouse events: simply cloning the map layer (using ClutterClones) implied having completely static (unresponsive to events) user layers (e.g. markers). The solution involved moving the real map (the source of the clones) in real time based on the cursor location. Always keeping the real map under the cursor creates the seamless illusion that the map is actually responsive throughout the horizontal wrap.

However, the fix came along with some other unexpected issues, some of which were easy to fix, like markers spanning over two clones, and others still requiring some work. 

The upcoming period will mostly consist of fixing bugs and getting closer to a polished horizontal map wrap. If you are interested in my work, you can take a look at my personal GitHub repository, which contains most of the work I've done so far.

GSoC Updates: ownCloud music Ampache API

Continuing from the Grilo ownCloud plugin last month, I’ve been working towards integrating the source with GNOME Music. In order to minimize network requests, we’ve decided to cache the results in a local database. This should also improve the user experience, since cached results populate the UI relatively faster. Victor Toso suggested I look into GOM for implementing the cache and querying the data. My initial thought was to use raw SQL queries against an SQLite database, but this abstraction does indeed help.

While implementing the cache, it’s important to consider when to update it. The Grilo plugin depends on the ownCloud music app, which exposes an Ampache API for the music collection stored remotely. There are two cases in which the cache should be updated:

  • The music collection has been updated, i.e. some new files have been added to the collection or the metadata of the existing ones have changed
  • The Ampache session is valid only for a fixed amount of time. In this scenario, we must update the cache at regular intervals so that the paths to tracks remain valid

The Ampache API spec does indeed have an endpoint that can help with both cases. The handshake endpoint returns an XML response similar to:

  <update>Last Update ISO 8601 Date</update>
  <add>Last Add ISO 8601 Date</add>
  <clean>Last Clean ISO 8601 Date</clean>
  <songs>Total # of Songs</songs>
  <artists>Total # of Artists</artists>
  <albums>Total # of Albums</albums>
  <tags>Total # of Tags</tags>
  <session_expire>Session Expire ISO 8601 Date</session_expire>
  <videos>Total # of Videos</videos>

The update tag can certainly be used to check if there have been any updates, and to refresh the cache if so. Unfortunately, the current implementation in ownCloud music doesn’t conform rigidly to the spec, and both the update and add tags just return the current time.
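A minimal sketch of the intended staleness check, in Python with the stdlib XML parser. The element names follow the handshake response above; the function name and the way the cached timestamp is stored are my own assumptions, not the plugin's actual code:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def cache_is_stale(handshake_xml, cached_update_time, now=None):
    """Decide whether the local cache must be refreshed.

    Stale if the server reports a collection update newer than the one
    we cached, or if the Ampache session has already expired.
    """
    root = ET.fromstring(handshake_xml)
    last_update = datetime.fromisoformat(root.findtext("update"))
    expires = datetime.fromisoformat(root.findtext("session_expire"))
    now = now or datetime.now(timezone.utc)
    return last_update > cached_update_time or expires <= now
```

In practice the plugin would persist `cached_update_time` alongside the GOM database and re-run the handshake periodically.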

Lately, I’ve been working on improving the Ampache API implementation in ownCloud music. Morris Jobke has been occupied lately with other exciting projects and has been kind enough to grant me commit permissions on the music app. Thomas Pulzer’s patches for music have also helped in the goal to achieve better spec compatibility. While the API is still not completely compliant with the Ampache XML-API spec, I think I’d be able to proceed with the cache implementation once PR #514, which deals with update and add time, gets merged :)

Until then, stay tuned for more updates!

Fri 2016/Jul/15

  • Update from La Mapería

    La Mapería is working reasonably well for now. Here are some example maps for your perusal. All of these images link to a rather large PDF that you can print on a medium-format plotter — all of these are printable on a 61 cm wide roll of paper (or one that can put out US Arch D sheets).

    Valladolid, Yucatán, México, 1:10,000

    Ciudad de México
    Centro de la Ciudad de México, 1:10,000

    Ajusco y Sur de la Ciudad de México, 1:50,000

    Victoria, BC
    Victoria, British Columbia, Canada, 1:50,000

    Boston, Massachusetts, USA, 1:10,000

    Walnut Creek
    Walnut Creek, California, USA, 1:50,000

    Butano State Park
    Butano State Park and Pescadero, California, USA, 1:20,000

    Provo, Utah, USA, 1:50,000

    Nürnberg, Germany, 1:10,000

    Karlsruhe, Germany, 1:10,000

    That last one, for Karlsruhe, is where GUADEC will happen this year, so enjoy!

    Next steps

    La Mapería exists right now as a Python program that downloads raster tiles from Mapbox Studio. This is great in that I don't have to worry about setting up an OpenStreetMap stack, and I can just worry about the map stylesheet itself (this is the important part!) and a little code to render the map's scale and frame with arc-minute markings.

    I would prefer to have a client-side renderer, though. Vector tiles are the hot new thing; in theory I should be able to download vector tiles and render them with Memphis, a Cairo-based renderer. I haven't investigated how to move my Mapbox Studio stylesheet to something that Memphis can use (... or that any other map renderer can use, for that matter).

    Also, right now making each map with La Mapería involves extracting geographical coordinates by hand, and rendering the map several times while tweaking it to obtain just the right area I want. I'd prefer a graphical version where one can just mouse around.

Finally, the map style itself needs improvements. It works reasonably well for 1:10,000 and 1:50,000 right now; 1:20,000 is a bit broken but easy to fix. It needs tweaks to map elements that are not very common, like tunnels. I want to make it work for 1:100,000 for full-day or multi-day bike trips, and possibly even smaller scales for motorists and just for general completeness.

    So far two of my friends in Mexico have provided pull requests for La Mapería — to fix my not-quite-Pythonic code, and to make the program easier to use the first time. Thanks to them! Contributions are appreciated.

GNOME Keysign new GUI and updates

As of now we have a new and better GUI for the GnomeKeysign app, which I've made using Glade 3 and Gtk.Builder. The new user interface looks sharper and integrates better with other GNOME apps.

Here are some window comparisons between the old GUI and the new one. The new GUI can be found in my GitHub repo [1].

  • The app window for presenting the user's own PGP keys



  • The window for presenting the fingerprint and QR code of the key

  • The window for scanning a QR code or type a key fingerprint

Thanks to Allan Day for the mock-ups which I've followed when creating the new GUI [2].

Some widgets aren't yet integrated, such as the QR code generator or the integrated webcam, but the base functionality of the GUI is working and its code looks cleaner.

I've found that using Gtk.Builder and Glade is a better approach than coding the GUI manually, as I did in the past GSoC. Also, the builder object gives a more robust way of separating the user interface from the application logic.

The future work is to finish connecting the remaining functionality with the new GUI, and afterwards make it use Flatpak [3].


Secure C/C++ Coding practices

Dear Software Engineers and Amateur Programmers,

In today's scenario, writing secure code is not a choice anymore, it's a necessity.

As a result of attending Paul Ionescu's webcast "Inside the Mind of a Hacker" (where he talks about how crackers crack their way through your code and what loopholes and vulnerabilities they exploit), and of being trained over time by strong review comments from peers laying emphasis on secure programming, I've begun keeping a keen eye on best coding practices.

One such link I googled for yesterday and thought of sharing is:

The following usage in the correctly marked answer there:
strncpy(buff, "String 1", BUFFER_SIZE - 1);
buff[BUFFER_SIZE - 1] = '\0';
is actually correct and not incorrect as pointed out by one of the commenters. See for yourself to know why!
(I couldn't add a comment there due to lack of enough points to comment on StackOverflow.)

I found many instances of insecure invocations of strncpy in the open source package I am currently working on, like
strncpy(buff, "String 1", sizeof(buff));
and wanted to alert maintainers/programmers who often use such lines in their code, so that they stop making this mistake: if the source string is as long as the buffer or longer, strncpy will not null-terminate the destination.

Will keep posting updates in this space with more such important links.

Till then,
Cheers and Happy Coding!

Why synclient does not work anymore

More and more distros are switching to libinput by default. That's a good thing, but one side-effect is that the synclient tool does not work anymore [1]; it just complains that "Couldn't find synaptics properties. No synaptics driver loaded?"

What is synclient? A bit of history first. Many years ago the only way to configure input devices was through xorg.conf options, there was nothing that allowed for run-time configuration. The Xorg synaptics driver found a solution to that: the driver would initialize a shared memory segment that kept the configuration options and a little tool, synclient (synaptics client), would know about that segment. Calling synclient with options would write to that SHM segment and thus toggle the various options at runtime. Driver and synclient had to be of the same version to know the layout of the segment and it's about as secure as you expect it to be. In 2008 I added input device properties to the server (X Input Extension 1.5 and it's part of 2.0 as well of course). Rather than the SHM segment we now had a generic API to talk to the driver. The API is quite simple, you effectively have two keys (device ID and property number) and you can set any value(s). Properties literally support just about anything but drivers restrict what they allow on their properties and which value maps to what. For example, to enable left-finger tap-to-click in synaptics you need to set the 5th byte of the "Synaptics Tap Action" property to 1.

xinput, a commandline tool and debug helper, has a generic API to change those properties so you can do things like xinput set-prop "device name" "property name" 1 [2]. It does a little bit under the hood but generally it's pretty stupid. You can run xinput set-prop and try to set a value that's out of range, or try to switch from int to float, or just generally do random things.

We were able to keep backwards compatibility in synclient, so where before it would use the SHM segment it would now use the property API, without the user interface changing (except the error messages are now standard Xlib errors complaining about BadValue, BadMatch or BadAccess). But synclient and xinput use the same API to talk to the server and the server can't tell the difference between the two.

Fast forward 8 years and now we have libinput, wrapped by the xf86-input-libinput driver. That driver does the same as synaptics: the config toggles are exported as properties and xinput can read and change them. Because really, you do the smart work by selecting the right property names and values and xinput just passes on the data. But synclient is broken now, simply because it requires the synaptics driver and won't work with anything else. It checks for a synaptics-specific property ("Synaptics Edges") and if that doesn't exist it complains with "Couldn't find synaptics properties. No synaptics driver loaded?". libinput doesn't initialise that property; it has its own set of properties. We did look into whether it's possible to have property-compatibility with synaptics in the libinput driver, but it turned out to be a huge effort with flaky reliability at best (not all synaptics options map into libinput options and vice versa) and the benefit was quite limited. Because, as we've been saying since about 2009 - your desktop environment should take over configuration of input devices, hand-written scripts are dodo-esque.

So if you must insist on shellscripts to configure your input devices, use xinput instead. synclient is like fsck.ext2: on that glorious day you switch to btrfs it won't work, because it was only designed with one purpose in mind.

[1] Neither does syndaemon, btw, but its functionality is built into libinput so that doesn't matter.
[2] xinput set-prop --type=int --format=32 "device name" "hey I have a banana" 1 2 3 4 5 6 and congratulations, you've just created a new property for all X clients to see. It doesn't do anything, but you could use such properties to attach info to devices, if anything were around to read it.

GSoC coding - Part 3

We have finally come up with a systematic scheme for handling the sharing of photos more efficiently. The problem with the earlier scheme was that it was too focused on sharing images on Google Photos. The maintainer (Debarshi Ray) soon realized that the scheme was inefficient and would not scale, as we might have multiple places to share to in the future (for example, Bluetooth sharing).

In the new scheme, we have a manager known as the “Share Point Manager” and the sharing destinations, known as “share points”. Every share point included in the application is a class derived from share-point. For example, there is currently an implementation of google-share-point, which enables uploading photos to Google. Similarly, bluetooth-share-point, facebook-share-point, flickr-share-point and email-share-point would all be classes derived from share-point. This makes it possible to add as many share points as required in a systematic fashion.

The share dialog implemented previously communicates with the share point manager to get a list of all available share points. Each share point object contains an icon, a label, an identifier and its source object. Using this object, the share dialog creates a GIcon for every available share point and displays it in the UI. Additional members for a particular share point can be included in the derived class.

So, are we all set?

Not yet :(

Duplication problem. Online miners in gnome-photos fetch images from services like Google, Facebook and Flickr. If the user uploads a local image to one of these services (which is what my current project enables), the image gets duplicated in the overview mode: once as the local image and once as the remote copy. Therefore, my current focus is avoiding this duplication and merging the images in the UI. Things get complicated when the image is part of an album.

There is a possible solution at hand: embedding metadata bits in the image file that provide a local -> remote mapping identifier, which could then be used for de-duplication. We are still working out the optimal way to handle this problem.

Stay tuned for more. Happy Hacking.

July 14, 2016

Semi-Object-Oriented Programming in C

Although I said last year that I’m not the right person to write a book about developing GLib/GTK+ applications and libraries, I continue slowly but surely.

There is now a new section called “Semi-Object-Oriented Programming in C”. This serves as a nice warm-up for GObject.

Go grab it while it’s hot:

The GLib/GTK+ Development Platform (version 0.5)

See also the NEWS file to know what changed since my last blog post on the subject.

Any comments or patches are welcome (the Git repo is here).

GSoC Report #1

For the past few weeks I’ve been working on the first part of my project, which consisted in getting rid of the deprecated GtkAction API (and the related GtkActionGroup and GtkUIManager) and porting everything to GAction. This blog post is long overdue, as I had hoped to finish the task before reporting on my project’s progress, considering the port one of the project’s milestones. However, without much knowledge about the specifics of GtkAction and GAction, I greatly underestimated the time I would spend finishing the task, and therefore delayed this post up to this very moment, when the task is nearing completion. I will submit the patch after passing it by Michael for a final review.

Due to the nature of GtkAction, aside from working on the more difficult problems, I spent most of the time trying to figure out the best ways of replacing entire files of mostly boilerplate code with only one or two functions using the new GAction API. The resulting patch is pretty big (41 files changed, 2848 insertions, 3541 deletions), but reordering code and renaming functions to increase consistency accounts for most of the changes.

One of the bigger problems I encountered was related to WebKitContextMenu, the menu that pops up when right-clicking the page content rendered by WebKit. The way GNOME Web handles this menu can be summarized in a few steps:

  • connect a callback to the “context-menu” signal, which is emitted right before the menu is about to be displayed
  • save the relevant items from the menu for later use
  • remove everything with webkit_context_menu_remove_all()
  • create new items from stock actions or GtkActions
  • populate the menu item by item, using the previously saved items and the newly created ones, taking into account the position of the cursor when the right mouse button was pressed (different actions if the user right-clicked a link, an image, a video etc.)

My problem was that WebKit has no API for creating a context menu item from a GAction; the default constructor, webkit_context_menu_item_new(), takes a GtkAction as its parameter. I handled this by creating a new proxy function, webkit_context_menu_item_new_from_gaction_with_label(), which takes a GAction and a label as parameters and creates a GtkAction that is in turn used to create a WebKitContextMenu item with the existing API. Before the menu item is created, a callback is connected to the “activate” signal of the GtkAction, and a binding is created between the ‘enabled’ property of the GAction and the ‘sensitive’ property of the GtkAction, so that clicking the item or changing the action’s state triggers a response. Currently, the function only exists inside GNOME Web’s source code, but I’m already preparing a patch that I will submit to WebKit.

Another problem was that GtkActions are heavily used in the bookmarks subsystem. After consulting with my mentor, Michael, I chose not to port that code yet, as most of it will wind up being rewritten or completely removed in the next part of my project.

The goal of the port was to make my work easier while working on the second part of my project. While working towards achieving my goal, I also fixed some smaller bugs, achieved a better separation between different kinds of actions (window actions, app actions, toolbar actions etc) and greatly simplified some parts of the code.

July 13, 2016

Pre-test “setup” and wrap-up interview

As I am preparing for the final usability test, I put together a pre-test script and a wrap-up interview that will help me provide some necessary details to each participant, and also get their feedback. In my pre-test script I talk about usability testing and GNOME; afterwards I will do a short post-test interview on the overall experience of the testing process and of using GNOME.

Test intro:
Thank you for coming today!

Just like Windows is a desktop system and MacOS is a desktop system, GNOME is a free software desktop system. You’ll be doing a few tasks using GNOME and a few GNOME applications. I’ll give you the tasks one at a time.

We aren’t “testing” you. The test is all about the software. If you have problems with part of the test, that’s okay; that’s exactly what we are trying to find. So don’t feel embarrassed if you have problems doing something, and please don’t feel pressured by time or anything else. If you can’t figure something out, that is perfectly okay and will still provide us with useful information for the test.

If you have any questions about the tasks I will try to answer them, but the answers and feedback should come directly from you as much as possible, so I will avoid anything that would lead you to a specific choice.

Also, I’m going to take notes while you’re doing these tasks. It will help me if you talk out loud while you are doing something, so I can take notes. So if you’re looking for a Print button, just say “I’m looking for a Print button.” And move the mouse to wherever you are looking on the screen, so I can see where you’re looking.

It would also be very helpful for me to record the screen during the test, so that I can go back to it later during my analysis.

Follow-up Questions:

  • What is your overall impression of GNOME?
  • Do you feel familiar with the interface in general?
  • Is there anything about the test that was confusing or bothered you?
  • Which tasks made you feel the most comfortable/uncomfortable?
  • Is there anything else you would like to say about the test?


I might also ask some questions on a specific task if I need more information:

  • It looked like you had problems doing __ in the test; what were you looking for, or what would have made that more obvious?
  • What were your expectations regarding … ?
  • Why did you choose this method for …?

Thank you very much for accepting my invitation for this usability test, and for your help on improving GNOME!


As I mentioned earlier, I am planning to make a “screencast” recording of the testing process, so that I can go back to it later during my analysis. If you have any suggestions on what software to use for that, please let me know in the comments!

GUI comes in For Proxies

My last post was an overview of how this project is designed to offer proxy features through NetworkManager. NM is the server part (which configures PacRunner), and PacRunner sits underneath as the engine doing all the work (interpreting the PAC file, downloading it, etc.). Applications can call the FindProxyForURL() D-Bus method on the PacRunner D-Bus interface org.pacrunner.Client.

NM uses the org.pacrunner.Manager interface, i.e. the CreateProxyConfiguration() and DestroyProxyConfiguration() methods, to configure PacRunner with multiple configurations (one for each active connection!). So our VPN has a separate proxy configuration and it won’t mess with the LAN proxies. Clients just need to call FindProxyForURL(url, host), and in return they get the proxy server address to use for that URL.

The whole project sums up to a GUI page for proxies per connection. The GUI comes as a new tab in nm-connection-editor, which many of us may be aware of if we use the GUI instead of nmcli. I’d also love to write the bits for adding new proxy-specific nmcli commands once I’m done with getting my code for the above into NM.

GUI Modes:

I. None : The user doesn’t want to use a proxy for this connection (direct internet).

II. Auto : Entries for PAC URL and PAC script. If neither is filled in, the value obtained from the DHCP server is given to PacRunner. If users want to override the DHCP-obtained WPAD value, they just need to fill in the PAC URL and/or PAC script entries.


III. Manual : Manual mode lets users specify different proxy servers for different protocols, plus an entry for hosts they don’t want to reach via a proxy, i.e. hosts contacted over a direct internet connection.



Week Header

Today, I would like to introduce to you all the all-new week-header 😀

The week-header is the first half of the week-view and deals with events that last 24 hours or more; basically, events that last for a whole day or span multiple days.

First, the events are fetched and stored in a GList. This GList is sorted according to the following criteria:

  • Multi-day events are placed before all-day events
  • If both events are of the same kind:
    • they are sorted by start day first,
    • and by end day in case of a tie.

For placing the events, we iterate from the first row, find the first slot that fits the event at hand, and add it there. The result: a very neat placement of events. The whole process is like sand flowing into a jar of golf balls and filling whatever space it can.


As you may notice, this hides certain events and shows only 3 rows. If you look further, you will see a button at the bottom left. Let’s click it, and this happens:


The header also has an expand/collapse feature which shows/hides events :)

The expand mode shows all the events and introduces a scrollable window to reach events further below.
The collapse mode shows only three rows of events, with label placeholders if any events are hidden.

Hope you all liked it. :)

Let me also remind you that the countdown has begun for this:


One month to go😀

July 12, 2016

Some Rework on Tracker and Popovers

For the past three weeks there has been a lot going on. First of all, the userTracking idea we initially had provided a bit less than we actually needed, so we agreed to come up with something more complex, and that led us to the creation of a new module, the UserTracker. Basically, the user tracker is a whole new module that does what its name says it does :). The old userTracking code was integrated into the ChatView (hence the need to separate the logic into a stand-alone module). This was a bit difficult, as not everything was clear from the start: what signals to send, and how to filter them so that we don’t end up with a ton of signals all around the app, each needing its own filtering process.

A very important thing: the UserTracker watches users both locally (in the room you are in) and globally (in all the rooms you are in on the same network). So each network (or account, as they are called) has its own UserTracker, which has the job of tracking the local status in all the rooms, and the global one on that network.

The global tracking part uses detailed signals, while the local one uses callbacks to simulate, in a way, the idea of signals. Both measures were taken to minimize the number of filtered signals (basically, minimal filtering is done).

Whoa. A lot about the tracker so far. Now, the visual part that will use it. Visually speaking there is not much of a change in the popovers; sure, some minor bugs were fixed, but in a nutshell the popovers look very much the same. What happens in the back-end, well, that’s a different story :).

The popovers had to be rebased on the tracker branch, and that had its own challenges as well. The current work aims to make use of everything the tracker has to offer, from inside the popovers.

Stay tuned for more news on the popovers, as the work on them is advancing towards the finish line :).

Vala? Is it tasty?

Well, you could say it is, but actually it’s a programming language. “But nobody has heard of it, so there’s nothing awesome about it,” you might say. WRONG! Here are some of its characteristics that caught my attention:

  • performance on the same level as C, but it’s more of a high-level language
  • compiles Vala code and outputs C code ≡ C cannot do X ⇒ Vala cannot do X
  • syntax ≈ C# (or Java)
  • valac is the name of the compiler; it’s used exactly like gcc
  • some details about data:
    • constants are defined as: const UPPER_CASE
    • you can get the MAX or MIN of a type by using: data_type.MIN or data_type.MAX (int.MIN etc.)
    • strings
      • UTF-8
      • immutable (Java-like)
      • templates: if a = 6 and b = 7 then “$(a*b)” will be evaluated to “42”
      • can be sliced: some_string[start:end]
      • methods like parse or to_string are available
      • in can be used to find a string in another string (instead of strstr in C)
  • delegates: passing functions/methods as objects/variables
  • there is no overloading!
    • for overloading constructors you need to use named constructors:
      • new Button();
      • new Button.with_label("Click me");
      • new Button.from_stock(Gtk.STOCK_OK);
    • constructors can be chained by using this
  • signals = events
    • require connect (handlers)
    • are better used with lambda functions (closures)
    • are activated by extern factors
    • always belong to instances of classes
    • every instance of a class derived from GLib.Object has a signal called notify which gets emitted every time a property of its object changes! (properties, later on)
    • examples:
      • instance.signal.connect((t, param_of_signal) => {…});
      • obj.notify.connect((s, p) => { stdout.printf("Property '%s' has changed!\n", p.name); });
        • s = source of the signal, p = the ParamSpec of the changed property
  • properties
    • alice.age++; instead of alice.set_age(alice.get_age() + 1);
    • public int age {get; set; default = 32; }
      • or private set/get, or no set/get at all
      • C#-like
    • keyword construct can be used alongside set/get and whatever is implemented in construct { … } is called after the constructor is called. (starting from the first superclass every class calls its construct block if it exists)
  • contract programming
    • double method_name(int x, double d)
    • requires (x > 0 && x < 10)
    • requires (d >= 0.0 && d <= 1.0)
    • ensures (result >= 0.0 && result <= 10.0)
    • {
      • return d * x;
    • }
    • result = special variable = return value
  • pointers
    • exactly like C
  • chained relational expressions
    • if (0 < a && a < b && b < c) { … }
  • regex allowed!
  • OOP
    • base = super
    • every abstract method of an abstract class must be overridden
    • virtual methods can be overridden, but overriding them is not mandatory
    • when implementing interfaces, methods must have the same names as in the interface
    • when implementing multiple interfaces:
      • interface Foo { public abstract int m(); }
      • interface Bar { public abstract string m(); }
      • class Cls : Foo, Bar {
        • public int Foo.m() { return 10; }
        • public string Bar.m() { return “bar”; }
      • }
    • keyword as:
      • (Button) widget ≡ widget as Button
      • fewer parentheses
    • keyword is:
      • boolean function that tells us if a variable is of a certain type
      • Button b = (widget is Button) ? widget as Button : null;
  • GLib.Object = The Mother of All Classes
    • namespaces are available so using GLib; will allow us to use Object alone

These are just some things I found interesting about Vala. More of it can be found at Valadoc.

As the sixth season of GoT is almost over, all I can say is “Vala(r) morghulis”.