24 hours a day, 7 days a week, 365 days per year...

October 24, 2014

A Londoner In San Jose

10th Year Reunion Summit

Here I am at the Google Mentor Summit to celebrate the 10th year reunion, as a Google Summer of Code student for both Scientific Ruby and, of course, GNOME too. ;-)

Banner: Google Summer of Code, tenth year

I have yet to see any GNOMIES or the SciRuby lot, but there is a gathering in the Marriott in about an hour where I am sure I will get a chance then. That said, I have already had the opportunity to meet some cool people from various FOSS communities involved in GSoC, and found out about some interesting work they have been involved with relating to Neuroscience, Robotics and Open Education. There seem to be a lot of researchers floating about at the Summit, which has led to some especially interesting chat (since I like all that stuff).

With Thanks To...

Logo: SciRuby
I would just like to take this opportunity to thank all the people from the GNOME and Scientific Ruby communities who helped me get here.
Logo: GNOME
All donations have been very much appreciated! So to all of those individuals who saw fit to support me (in my time of need), thank you very much!

I would also like to give a shout out to Google for giving me free digs at the San Jose Hilton for three nights of the Summit, for inviting me to the event, and generally providing me with cool free stuff and an itinerary of interesting things to do this weekend in San Jose!

Logo: Google

Last but by no means least, I would like to thank The Lavin Agency for not only, "making the world a smarter place" ;-) but specifically, for covering most of my travel costs to the Summit in San Jose and back again... That contribution truly nailed it in getting me all the way to San Jose!

Logo: The Lavin Agency

Once again, thank you all.

October 23, 2014

Mono for Unreal Engine

Earlier this year, both Epic Games and Crytek made their Unreal Engine and CryEngine available under an affordable subscription model. These are both very sophisticated game engines that power some high-end and popular games.

We had previously helped Unity use Mono as the scripting language in their engine, and we now had a chance to do this all over again.

Today I am happy to introduce Mono for Unreal Engine.

This is a project that allows Unreal Engine users to build their game code in C# or F#.

Take a look at this video for a quick overview of what we did:

This is a taste of what you get out of the box:

  • Create game projects purely in C#
  • Add C# to an existing project that uses C++ or Blueprints.
  • Access any API surfaced by Blueprint to C++, and easily surface C# classes to Blueprint.
  • Quick iteration: we fully support Unreal Engine's hot reloading, with the added twist that we support it from C#. This means that you hit "Build" in your IDE and the code is automatically reloaded into the editor (with live updates!)
  • Complete support for the .NET 4.5/Mobile Profile API. This means all the APIs you love are available for you to use.
  • Async-based programming: we have added special game schedulers that allow you to use C# async naturally in any of your game logic. Beautiful and transparent.
  • Comprehensive API coverage of the Unreal Engine Blueprint API.

This is not a supported product by Xamarin. It is currently delivered as a source code package with patches that must be applied to a precise version of Unreal Engine before you can use it. If you want to use higher versions, or lower versions, you will likely need to adjust the patches on your own.

We have set up a mailing list that you can use to join the conversation about this project.

Visit the site for Mono for Unreal Engine to learn more.

(I no longer have time to manage comments on the blog, please use the mailing list to discuss.)

My talk at GUADEC this year was titled Continuous Performance Testing on Actual Hardware, and covered a project that I’ve been spending some time on for the last 6 months or so. I tackled this project because of accumulated frustration that we weren’t making consistent progress on performance with GNOME. For one thing, the same problems seemed to recur. For another thing, we would get anecdotal reports of performance problems that were very hard to put a finger on. Was the problem specific to some particular piece of hardware? Was it a new problem? Was it a problem that we had already addressed? I wrote some performance tests for gnome-shell a few years ago – but running them sporadically wasn’t that useful. Running a test once doesn’t tell you how fast something should be, just how fast it is at the moment. And if you run the tests again in 6 months, even if you remember what numbers you got last time, even if you still have the same development hardware, how can you possibly figure out what change is responsible? There will have been thousands of changes to dozens of different software modules.

Continuous testing is the goal here – every time we make a change, to run the same tests on the same set of hardware, and then to make the results available with graphs so that everybody can see them. If something gets slower, we can then immediately figure out what commit is responsible.
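With per-commit results in hand, spotting the responsible commit becomes a simple scan. A minimal sketch of the idea (the function name and threshold are illustrative, not the actual test-harness code):

```python
def find_regressions(results, threshold=1.1):
    """Scan (commit, time) pairs in commit order and return the commits
    where the measured time jumped by more than `threshold` relative to
    the previous build.

    A real system would compare against a smoothed baseline to absorb
    measurement noise; this is the simplest possible version."""
    regressions = []
    for (_, prev_time), (commit, time) in zip(results, results[1:]):
        if time > prev_time * threshold:
            regressions.append(commit)
    return regressions
```

Because every build is tested on the same hardware, a jump in the series points directly at one commit range rather than at six months of accumulated changes.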

We already have a continuous build server for GNOME, GNOME Continuous. GNOME Continuous is a creation of Colin Walters, and internally uses Colin’s ostree to store the results. ostree, for those not familiar with it, is a bit like Git for trees of binary files, and in particular for operating systems. Because ostree can efficiently share common files and represent the difference between two trees, it is a great way to both store lots of build results and distribute them over the network.

I wanted to start with the GNOME Continuous build server – for one thing so I wouldn’t have to babysit a separate build server. There are many ways that the build can break, and we’ll never get away from having to keep an eye on them. Colin and, more recently, Vadim Rutkovsky were already doing that for GNOME Continuous.

But actually putting performance tests into the set of tests that are run by GNOME Continuous doesn’t work well. GNOME Continuous runs its tests on virtual machines, and a performance test on a virtual machine doesn’t give the numbers we want. For one thing, server hardware is different from desktop hardware – it generally has very limited graphics acceleration, it has completely different storage, and so forth. For a second thing, a virtual machine is not an isolated environment – other processes and unpredictable caching will affect the numbers we get – and any sort of noise makes it harder to see the signal we are looking for.

Instead, what I wanted was to have a system where we could run the performance tests on standard desktop hardware – not requiring any special management features.

Another architectural requirement was that the tests would keep on running, no matter what. If a test machine locked up because of a kernel problem, I wanted to be able to continue on, update the machine to the next operating system image, and try again.

The overall architecture is shown in the following diagram:

HWTest Architecture

The most interesting thing to note in the diagram is that the test machines don’t directly connect to the build server to download builds or to upload the results. Instead, test machines are connected over a private network to a controller machine which supervises the process of updating to the next build and actually running the tests. The controller has two forms of control over the process – first, it controls the power to the test machines, so at any point it can power cycle a test machine and force it to reboot. Second, the test machines are set up to network boot from the controller, so that after power cycling, the controller can determine what to boot – a special image to do an update, or the software being tested. The systemd journal from each test machine is exported over the network to the controller machine, so that the controller can see when the update is done and collect test results for publishing.

The system is live now, and tests have been running for the last three months. In that period, the tests have run thousands of times, and I haven’t had to intervene once to deal with a problem. Here’s an example of it catching a regression (and the subsequent fix):

[Graph: catching a regression]

I’ll cover more about the details of how the hardware testing setup works and how performance tests are written in future posts – for now you can find some more information at
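A toy model of the controller’s supervision loop might look like this. Everything here is an assumption for illustration – the hook names, journal messages and timeout are not the real hwtest code:

```python
def run_one_build(power_cycle, set_boot_target, wait_for, run_tests, timeout=1800):
    """Update one test machine to the next build and run the tests.

    `power_cycle` cuts and restores power to the machine,
    `set_boot_target` decides what the machine will network-boot next,
    `wait_for` blocks until a given message shows up in the machine's
    exported systemd journal (returning False on timeout), and
    `run_tests` collects the results. If anything hangs, we power
    cycle and give up on this build so the queue keeps moving."""
    # Boot the update image and wait for it to install the next build.
    set_boot_target("updater")
    power_cycle()
    if not wait_for("update-complete", timeout):
        power_cycle()
        return None
    # Boot the freshly installed build and run the performance tests.
    set_boot_target("target-os")
    power_cycle()
    if not wait_for("boot-complete", timeout):
        power_cycle()
        return None
    return run_tests()
```

The key property is that no step trusts the test machine: a kernel lockup just looks like a timeout, after which the controller power cycles the machine and moves on to the next build.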

GUADEC-ES 2014: Zaragoza (Spain), 24th-26th October

A short notice to remind everyone that this weekend a bunch of GNOME hackers and users will be meeting in the beautiful city of Zaragoza (*) (Spain) for the Spanish-speaking GUADEC. The schedule is already available online:

Of course, non-Spanish-speaking people are also very welcome :)

See you there!


(*) Hoping not to make enemies: Zárágozá.


Linux Container Security

First, read these slides. Done? Good.

Hypervisors present a smaller attack surface than containers. This is somewhat mitigated in containers by using seccomp, selinux and restricting capabilities in order to reduce the number of kernel entry points that untrusted code can touch, but even so there is simply a greater quantity of privileged code available to untrusted apps in a container environment when compared to a hypervisor environment[1].

Does this mean containers provide reduced security? That's an arguable point. In the event of a new kernel vulnerability, container-based deployments merely need to upgrade the kernel on the host and restart all the containers. Full VMs need to upgrade the kernel in each individual image, which takes longer and may be delayed due to the additional disruption. In the event of a flaw in some remotely accessible code running in your image, an attacker's ability to cause further damage may be restricted by the existing seccomp and capabilities configuration in a container. They may be able to escalate to a more privileged user in a full VM.

I'm not really compelled by either of these arguments. Both argue that the security of your container is improved, but in almost all cases exploiting these vulnerabilities would require that an attacker already be able to run arbitrary code in your container. Many container deployments are task-specific rather than running a full system, and in that case your attacker is already able to compromise pretty much everything within the container. The argument's stronger in the Virtual Private Server case, but there you're trading that off against losing some other security features - sure, you're deploying seccomp, but you can't use selinux inside your container, because the policy isn't per-namespace[2].

So that seems like kind of a wash - there's maybe marginal increases in practical security for certain kinds of deployment, and perhaps marginal decreases for others. We end up coming back to the attack surface, and it seems inevitable that that's always going to be larger in container environments. The question is, does it matter? If the larger attack surface still only results in one more vulnerability per thousand years, you probably don't care. The aim isn't to get containers to the same level of security as hypervisors, it's to get them close enough that the difference doesn't matter.

I don't think we're there yet. Searching the kernel for bugs triggered by Trinity shows plenty of cases where the kernel screws up from unprivileged input[3]. A sufficiently strong seccomp policy plus tight restrictions on the ability of a container to touch /proc, /sys and /dev helps a lot here, but it's not full coverage. The presentation I linked to at the top of this post suggests using the grsec patches - these will tend to mitigate several (but not all) kernel vulnerabilities, but there's tradeoffs in (a) ease of management (having to build your own kernels) and (b) performance (several of the grsec options reduce performance).

But this isn't intended as a complaint. Or, rather, it is, just not about security. I suspect containers can be made sufficiently secure that the attack surface size doesn't matter. But who's going to do that work? As mentioned, modern container deployment tools make use of a number of kernel security features. But there's been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:

  • Strong auditing and aggressive fuzzing of containers under realistic configurations
  • Support for meaningful nesting of Linux Security Modules in namespaces
  • Introspection of container state and (more difficult) the host OS itself in order to identify compromises

These aren't easy jobs, but they're important, and I'm hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it's going to be far too easy to write containers off as a "convenient, cheap, secure: choose two" tradeoff. That's not a winning strategy.

[1] Companies using hypervisors! Audit your qemu setup to ensure that you're not providing more emulated hardware than necessary to your guests. If you're using KVM, ensure that you're using sVirt (either selinux or apparmor backed) in order to restrict qemu's privileges.
[2] There's apparently some support for loading per-namespace Apparmor policies, but that means that the process is no longer confined by the sVirt policy
[3] To be fair, last time I ran Trinity under Docker under a VM, it ended up killing my host. Glass houses, etc.


An early view of GTK+ 3.16

A number of new features have landed in GTK+ recently. These are now available in the 3.15.0 release. Here is a quick look at some of them.

Overlay scrolling

We’ve had long-standing feature requests to turn scrollbars into overlayed indicators, for touch systems. An implementation of this idea has been merged now. We show traditional scrollbars when a mouse is detected, otherwise we fade in narrow, translucent indicators. The indicators are rendered on top of the content and don’t take up extra space. When you move the pointer over the indicator, it turns into a full-width scrollbar that can be used as such.

Other new scrolling-related features are support for synchronized scrolling of multiple scrolled windows with a shared scrollbar (like in the meld side-by-side view), and an ::edge-overshot signal that is generated when the user ‘overshoots’ the scrolling at either end.

OpenGL support

This is another very old request – GtkGLExt and GtkGLArea have been around for more than a decade. In 3.16, GTK+ will come with a GtkGLArea widget.

Alex’ commit message explains all the details, but the high-level summary is that we now render with OpenGL when we have to, and we can fully control the stacking of pieces that are rendered with OpenGL or with cairo: You can have a translucent popup over a 3D scene, or mix buttons into your 3D views.

While it is nice to have a GLArea widget, the real purpose is to prepare GDK for Emmanuele’s scene graph work, GSK.

A Sidebar widget

Ikey Doherty contributed the GtkSidebar widget. It is a nice and clean widget to turn the pages of a GtkStack.

IPP Printing

The GTK+ print dialog can now handle IPP printers which don’t provide a PPD to describe their capabilities.

Pure CSS theming

For the last few years, we’ve implemented more and more CSS functionality in the GTK+ style code. In 3.14, we were able to turn Adwaita into a pure CSS theme. Since CSS has clear semantics that don’t include ‘call out to arbitrary drawing code’, we are not loading and using theme engines anymore.

We’ve landed this change early in 3.15 to give theme authors enough time to convert their themes to CSS.

More to come

With these features, we are about halfway through our plans for 3.16. You can look at the GTK+ roadmap to see what else we hope to achieve in the next few months.

October 22, 2014

A workshop at FSCONS

Things are going at a fast pace at Medialogy these days, but I’ll have a bit of time to do GNOME Engagement again soon. FSCONS is coming up and I plan to bring posters, brochures and myself to Sweden from Thursday the 30th October till Monday the 3rd. If anyone is interested in meeting up, I’ll be around the whole weekend at the conference.

Also! Also! I’m doing a workshop on promotional videos. It will be interesting as I haven’t quite held a talk on this subject before.  It’s scheduled around 18.45 on Sunday in Room 6. My plan is to give tips on creating promotional videos, especially:

  • Planning out a promotional video
  • Using a possible pipeline of FOSS tools for creating these videos.
  • Sharing my own collection of resources I have used to learn the tools.

I’m curious if there’s anything else you feel I should touch upon during this workshop. Feel free to tell me beforehand. (-:|>

GStreamer Conference 2014 talks online

For those of you who, like me, missed this year’s GStreamer Conference, the recorded talks are now available online thanks to Ubicast. Ubicast has been a tremendous partner for GStreamer over the years, making sure we have high quality talk recordings online shortly after the conference ends. So be sure to check out this year’s batch of great GStreamer talks.

Btw, I also did a minor release of Transmageddon today, which mostly includes a couple of bugfixes and a few less deprecated widgets :)

Development of Nautilus – Popovers, port to GAction and more

So for the last two weeks, I have been trying to implement this:

The popovers!

In an application that already uses GAction and a normal GMenu for everything, this is quite easy.

But Nautilus is using neither GAction nor GMenu for its menus. Not only that, Nautilus uses GtkUIManager for managing the menus and GtkActions. And not only that, Nautilus merges parts of menus all over the code.

Also, the popover drawn in that design is not possible with GMenu because of the GtkSlider.

So my first step, when nothing was clear to me, was to just try to create a custom GtkBox class to embed in the popover and try to use the current architecture of Nautilus.

It didn’t work, obviously. Fail 1.

Then, after talking with some Gedit guys (thanks!), I understood that what I needed was to port Nautilus to GAction first. But I would also have to find a solution for merging menus.

My first week and a half was spent trying to find a solution for merging the menus, while making the port to GAction, refactoring Nautilus code so it makes sense, and getting used to the Nautilus codebase.

The worst part was the complexity of the code: understanding it and its intricate code paths. Making a new test application that merges menus with GMenu and popovers was kinda acceptable.

To understand why I needed to merge menus recursively: this was the recursion of Nautilus menus that was done with GtkUIManager, across 4 levels of classes. The diagram should have more leaves (more classes injecting items) at some levels, but this was the most complex one:

Dibujo sin título

So after spending more than a week trying to make it work at all costs, I figured out that merging menus recursively in recursive sections was not working. That was kinda frustrating.

Big fail 2.

Then I decided to take another path, with the experience earned along that week and a half.

I simplified the menu layout to a flat one (I still have to merge one-level menus, so a new way to merge menus was born), put all the management of the actions on the window instead of having multiple GtkActionGroups spread across the code as Nautilus had previously, centralized the updating of menus on the window, and attached the menus where it makes sense (on the toolbar). And, a beautiful thing: the Nautilus toolbar (aka header bar) is now in an XML GResource file, no longer built programmatically =).

That last thing required redoing a good part of the toolbar: for example, using the private bindings that GObject provides (to be able to use gtk_widget_class_bind_template_child_private), or syncing the sensitivity of some widgets that were previously synced by directly modifying the actions on the window instead of on the toolbar, etc.
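The one-level merge for a flat layout can be sketched in a few lines. Here menus are modelled as plain dicts mapping a section name to a list of items; the real code works with GMenu sections, so this is only an illustration of the idea:

```python
def merge_flat_menus(base, extension):
    """Merge a one-level menu layout: items contributed by another
    class (or, later, an extension) are appended to the section they
    name, and unknown sections become new sections at the end.

    With a flat layout there is exactly one merge step -- no recursion
    across levels of classes is needed."""
    merged = {name: items[:] for name, items in base.items()}
    for name, items in extension.items():
        merged.setdefault(name, []).extend(items)
    return merged
```

This is also roughly the shape the extension API can take later: each contributor hands in a small flat menu, and the window owns the single merged result.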

And thanks to the experience earned in the fails before, it started working!

Then I became enthusiastic about porting more and more parts of Nautilus. After the prototype worked this morning, everything was kinda easy. And now I feel much more confident (like a very big difference) with the code of Nautilus, C, GTK+ and GObject.

Here are the results



It’s still a very early prototype, since the port to GAction is not completed. I think I have 40% of the port done. And I haven’t yet erased all the code that is no longer necessary. But with a prototype working and the difficult edges solved, that doesn’t worry me at all.

Work to be done is:

* Complete the port to GAction, porting also all menus.

* Refactor to make more sense now with the current workflow of menus and actions.

* Create the public API to allow extensions to extend the menus. Luckily I was thinking on that when creating the API to merge the menus inside Nautilus, so the method will be more or less the same.

* And last but not least, make sure any regression is known (this is kinda complicated due to the many possible code paths and supported tools of Nautilus)

Hope you like the work!

PD: Work is being done in wip/gaction but please, don’t look at the code yet =)

Apache SSLCipherSuite without POODLE

In my previous post Forward Secrecy Encryption for Apache, I described an Apache SSLCipherSuite setup to support forward secrecy which allowed TLS 1.0 and up, avoided SSLv2 but included SSLv3. With the new POODLE attack (Padding Oracle On Downgraded Legacy Encryption), SSLv3 (and earlier versions) should generally be avoided. Which means the cipher configurations discussed [...]

Cassandra Keyspace case-sensitiveness WTF

foo   bar  OpsCenter

cqlsh> use opscenter;
Bad Request: Keyspace 'opscenter' does not exist

cqlsh> use OpsCenter;
Bad Request: Keyspace 'opscenter' does not exist

cqlsh> USE "OpsCenter";

Seriously, is this the way Cassandra handles case sensitivity???
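What’s going on: CQL folds unquoted identifiers to lowercase, while double-quoted identifiers keep their exact case. Since the OpsCenter keyspace was created with quotes (preserving the mixed case), only the quoted form matches. A toy model of that rule:

```python
def cql_identifier(name):
    """Model CQL identifier handling: unquoted identifiers are
    case-insensitive and folded to lowercase, while double-quoted
    identifiers keep their exact case. That is why `use OpsCenter;`
    actually looks up a keyspace literally named 'opscenter'."""
    if len(name) >= 2 and name.startswith('"') and name.endswith('"'):
        return name[1:-1]
    return name.lower()
```

So `use OpsCenter;` and `use opscenter;` both resolve to `opscenter`, which doesn’t exist; only `USE "OpsCenter";` resolves to the keyspace that was actually created.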

October 21, 2014

A GNOME Kernel wishlist

GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

October 20, 2014

GNOME Web 3.14

It’s already been a few weeks since the release of GNOME Web 3.14, so it’s a bit late to blog about the changes it brings, but better late than never. Unlike 3.12, this release doesn’t contain big user interface changes, so the value of the upgrade may not be as immediately clear as it was for 3.12. But this release is still a big step forward: the focus this cycle has just been on polish and safety instead of UI changes. Let’s take a look at some of the improvements since 3.12.1.

Safety First

The most important changes help keep you safer when browsing the web.

Safer Handling of TLS Authentication Failures

When you try to connect securely to a web site (via HTTPS), the site presents identification in the form of a chain of digital certificates to prove that your connection has not been maliciously intercepted. If the last certificate in the chain is not signed by one of the certificates your browser has on file, the browser decides that the connection is not secure: this could be a harmless server configuration error, but it could also be an attacker intercepting your connection. (To be precise, your connection would be secure, but it would be a secure connection to an attacker.) Previously, Web would bet on the former, displaying an insecure lock icon next to the address in the header bar, but loading the page anyway. The problem with this approach is that if there really is an attacker, simply loading the page gives the attacker access to secure cookies, most notably the session cookies used to keep you logged in to a particular web site. Once the attacker controls your session, he can trick the web site into thinking he’s you, change your settings, perhaps make purchases with your account if you’re on a shopping site, for example. Moreover, the lock icon is hardly noticeable enough to warn the user of danger. And let’s face it, we all ignore those warnings anyway, right? Web 3.14 is much stricter: once it decides that an attacker may be in control of a secure connection, it blocks access to the page, like all major browsers already do:
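The chain check described above can be modelled roughly like this. It is a deliberately simplified sketch: real verification also checks cryptographic signatures, expiry dates, hostnames and revocation, none of which appear here:

```python
def chain_is_trusted(chain, trust_store):
    """Toy model of certificate chain verification. Each certificate
    is a dict with 'subject' and 'issuer' names; chain[0] is the
    site's own (leaf) certificate, and trust_store is the set of
    authority names the browser has on file."""
    # Every certificate must actually be issued by the next one up.
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False  # the chain doesn't link up
    # The top of the chain must be signed by a certificate on file.
    return chain[-1]["issuer"] in trust_store
```

Web 3.14’s change is about what happens when this returns False: instead of showing a small lock icon and loading the page anyway, the page is blocked.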

Screenshot from 2014-10-17 19:52:53
Click for full size

(The white text on the button is probably a recent compatibility issue with GTK+ master: it’s fine in 3.14.)

Safety team members will note that this will obviously break sites with self-signed certificates, and is incompatible with a trust-on-first-use approach to certificate validation. As much as I agree that the certificate authority system is broken and provides only a moderate security guarantee, I’m also very skeptical of trust-on-first-use. We can certainly discuss this further, but it seemed best to start off with an approach similar to what major browsers already do.

The Load Anyway button is non-ideal, since many users will just click it, but this provides good protection for anyone who doesn’t. So, why don’t we get rid of that Load Anyway button? Well, different browsers have different strategies for validating TLS certificates (a good topic for a future blog post), which is why Web sometimes claims a connection is insecure even though Firefox loads the page fine. If you think this may be the case, and you don’t care about the security of your connection (including any passwords you use on the site), then go ahead and click the button. Needless to say, don’t do this if you’re using somebody else’s Wi-Fi access point, or on an email or banking or shopping site… when you use this button, the address in the address bar does not matter: there’s no telling who you’re really connected to.

But all of the above only applies to your main connection to a web site. When you load a web page, your browser actually creates very many connections to grab subresources (like images, CSS, or trackers) needed to display the page. Prior to 3.14, Web would completely ignore TLS errors for subresources. This means that the secure lock icon was basically worthless, since an attacker could control the page by modifying any script loaded by the page without being detected. (Fortunately, this attack is somewhat unlikely, since major browsers would all block this.) Web 3.14 will verify all TLS certificates used to encrypt subresources, and will block those resources if verification fails. This can cause web pages to break unexpectedly, but it’s how all major browsers I’ve tested behave, and it’s certainly the right thing to do. (We may want to experiment with displaying a warning, though, so that it’s clear what’s gone wrong.)

And if you’re a distributor, please read this mail to learn how not to break TLS verification in your distribution. I’m looking at you, Debian and derivatives.

Fewer TLS Authentication Failures

With glib-networking 2.42, corresponding to GNOME 3.14, Web will now accept certificate chains when the certificates are sent out of order. Sites that do this are basically broken, but all major browsers nevertheless support unordered chains. Sending certificates out of order is a harmless configuration mistake, not a security flaw, so the only harm in accepting unordered certificates is that this makes sites even less likely than before to notice their configuration mistake, harming TLS clients that don’t permute the certificates.
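Accepting an out-of-order chain essentially means reordering it before validation. Something like this sketch, with certificates again modelled as subject/issuer pairs (the real glib-networking code operates on actual X.509 structures):

```python
def reorder_chain(certs, leaf_subject):
    """Rebuild a presented certificate list into issuing order:
    start from the leaf and repeatedly pick the certificate whose
    subject matches the current certificate's issuer. Stops at a
    self-signed root or when the issuer isn't in the list."""
    by_subject = {c["subject"]: c for c in certs}
    chain = [by_subject[leaf_subject]]
    while True:
        nxt = by_subject.get(chain[-1]["issuer"])
        if nxt is None or nxt is chain[-1]:  # gap, or self-signed root
            break
        chain.append(nxt)
    return chain
```

Only after this normalization does the usual verification run, which is why tolerating the misconfiguration costs nothing in security.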

This change should greatly reduce the number of TLS verification failures you experience when using Web. Unfortunately, there are still too many differences in how certificate verification is performed for me to be comfortable with removing the Load Anyway button, but that is definitely the long-term goal.

HTTP Authentication

WebKitGTK+ 2.6.1 plugs a serious HTTP authentication vulnerability. Previously, when a secure web page would require a password before the user could load the page, Web would not validate the page’s TLS certificate until after prompting the user for a password and sending it to the server.

Mixed Content Detection

If a secure (HTTPS) web page displays insecure content (usually an image or video) or executes an insecure script, Web now displays a warning icon instead of a lock icon. This means that the lock icon now indicates that your connection is completely private, with the exception that a passive adversary can always know the web site that you are visiting (but not which page you are visiting on the site). If the warning icon is displayed, then an adversary can compromise some (and possibly all) of the page, and has also learned something that might reveal which page of the site you are visiting, or the contents of the page.
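The mixed content rule itself is simple to state; a minimal model of the classification:

```python
from urllib.parse import urlparse

def is_mixed_content(page_url, resource_url):
    """An HTTPS page that loads an http:// subresource is mixed
    content: an active attacker can tamper with that resource even
    though the page itself was fetched securely."""
    return (urlparse(page_url).scheme == "https"
            and urlparse(resource_url).scheme == "http")
```

Detection (as in 3.14) downgrades the lock icon to a warning; blocking (planned for 3.16) refuses to load the more dangerous kinds of such resources, like scripts.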

If you’re curious where the insecure content is coming from and don’t mind leaving behind Web’s normally simple user interface, you can check using the web inspector:

Screenshot from 2014-10-17 21:07:52
The screenshot is leaked to an attacker, revealing that you’re on the home page. Click for full size.

The focus on safety will continue to drive the development of Web 3.16. Most major browsers, with the notable exception of Safari, take mixed content detection one step further by actively blocking some more dangerous forms of insecure content, such as scripts, on secure pages, and we certainly intend to do so as well. We’re also looking into support for strict transport security (HSTS), to ensure that your connection to HSTS-enabled sites is safe even if you tried to connect via HTTP instead of HTTPS. This is what you normally do when you type an address into the address bar. Many sites will redirect you from an HTTP URL to an HTTPS URL, but an attacker isn’t going to do this kindness for you. Since all HTTP pages are insecure, you get no security warning. This problem is thwarted by strict transport security. We’re currently hoping to have both mixed content blocking and strict transport security complete in time for 3.16.
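The HSTS idea can be sketched as a URL rewrite that happens before any request leaves the browser. This is simplified: real HSTS entries also carry expiry times and an includeSubDomains flag:

```python
from urllib.parse import urlparse

def apply_hsts(url, hsts_hosts):
    """If the host has previously sent a Strict-Transport-Security
    header (so it is in `hsts_hosts`), rewrite plain-http URLs to
    https before connecting -- the attacker never gets to intercept
    an insecure request and withhold the redirect."""
    parsed = urlparse(url)
    if parsed.scheme == "http" and parsed.hostname in hsts_hosts:
        return "https://" + url[len("http://"):]
    return url
```

The upgrade happens locally, so typing a bare address into the address bar no longer produces even one insecure request to an HSTS-enabled site.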

UI Changes

Of course, security hasn’t been the only thing we’ve been working on.

  • The most noticeable user experience change is not actually a change in Web at all, but in GNOME Document Viewer 3.14. The new Document Viewer browser plugin allows you to read PDFs in Web without having to download the file and open it in Document Viewer. (This is similar to the proprietary Adobe Reader browser plugin.) This is made possible by new support in WebKitGTK+ 2.6 for GTK+ 3 browser plugins.
  • The refresh button has been moved from the address bar and is now next to the new tab button, where it’s always accessible. Previously, you would need to click to show the address bar before the refresh button would appear.
  • The lock icon now opens a little popover to display the security status of the web page, instead of directly presenting the confusing certificate dialog. You can also now click the lock when the title of the page is displayed, without needing to switch to the address bar.


3.14 also contains some notable bugfixes that will improve your browsing experience.

  • We fixed a race condition that caused the ad blocker to accidentally delete its list of filters, so ad block will no longer randomly stop working when enabled (it’s off by default). (We still need one additional fix in order to clean this up automatically if it’s already broken, but in the meantime you can reset your filters by deleting ~/.config/epiphany/adblock if you’re affected.)
  • We (probably!) fixed a bug that caused pages to disappear from history after the browser was closed.
  • We fixed a bug in Web’s aggressive removal of tracking parameters from URLs when the do not track preference is enabled (it’s off by default), which caused compatibility issues with some web sites.
  • We fixed a bug that caused Web to sometimes not offer to remember passwords.

These issues have all been backported to our 3.12 branch, but were never released. We’ll need to consider making more frequent stable releases, to ensure that bugfixes reach users more quickly in the future.


  • There are new context menu entries when right-clicking on an HTML video. Notably, this adds the ability to easily download a copy of the video for watching it offline.
  • Better web app support. Recent changes in 3.14.1 make it much harder for a web app to escape application mode, and ensure that links to other sites open in the default browser when in application mode.
  • Plus a host of smaller improvements: The subtitle of the header bar now changes at the same time as the title, and the URL in the address bar will now always match the current page when you switch to address bar mode. Opening multiple links in quick succession from an external application is now completely reliable (with WebKitGTK+ 2.6.1); previously, some pages would load twice or not at all. The search provider now exits one minute after you search for something in the GNOME Shell overview, rather than staying alive forever. The new history dialog that was added in 3.12 now allows you to sort history by title and URL, not just date. The image buttons in the new cookies, history, and passwords dialogs now have explanatory tooltips. Saving an image, video, or web page over the top of an existing file now works properly (with Web 3.14.1). And of course there are also a few memory usage and crash fixes.

As always, the best place to send feedback is <>, or Bugzilla if you’ve found a bug. Comments on this post work too. Happy browsing!

3.14 Games Updates

So, what new things happened to our games in GNOME 3.14?


GNOME Hitori has actually been around for a while, but it wasn’t until this cycle that I discovered it. After chatting with Philip Withnall, we agreed that with a minor redesign, the result would be appropriate for GNOME 3. And here it is:

Screenshot from 2014-10-17 18:03:30

The gameplay is similar to Sudoku, but much faster-paced. The goal is to paint squares such that the same digit appears in each row and column no more than once, without ever painting two horizontally- or vertically-adjacent squares and without ever creating a set of unpainted squares that is disconnected both horizontally and vertically from the rest of the unpainted squares. (This sounds a lot more complicated than it is: think about it for a bit and it’s really quite intuitive.) You can usually win each game in a minute or two, depending on the selected board size.


For Mines, the screenshots speak for themselves. The new design is by Allan Day, and was implemented by Robert Roth.

Screenshot from 2014-10-17 18:09:10 (3.12)
Screenshot from 2014-10-17 18:08:06 (3.14)

There is only one gameplay change: you can no longer get a hint to help you out of a tough spot at the cost of a small time penalty. You’ll have to actually guess which squares have mines now.

Right now, the buttons on the right disappear when the game is in progress. This may have been a mistake, which we’ll revisit in 3.16. You can comment in Bug #729250 if you want to join our healthy debate on whether or not to use colored numbers.


Sudoku has been rewritten in Vala with the typical GNOME emphasis on simplicity and ease of use. The design is again by Allan Day. Christopher Baines started work on the port for a Google Summer of Code project in 2012, and Parin Porecha completed the work this summer for his own Google Summer of Code project.

Screenshot from 2014-10-17 18:19:02 (3.12; note: not possible to get rid of the wasted space on the sides)
Screenshot from 2014-10-17 18:20:25 (3.14)

We’re also using a new Sudoku generator, QQwing, for puzzle generation. This allows us to avoid reimplementing bugs in our old Sudoku generator (which is documented to have generated at least one impossible puzzle, and sometimes did a very poor job of determining difficulty), and instead rely on a project completely focused on correct Sudoku generation. Stephen Ostermiller is the author of QQwing, and he worked with us to make sure QQwing met our needs by implementing symmetric puzzle generation and merging changes to make it a shared library. QQwing is fairly fast at generating puzzles, so we’ve dropped the store of pregenerated puzzles that Sudoku 3.12 used and now generate puzzles on the fly instead. This means a small (1-10 second) wait if you’re printing dozens of puzzles at once, but it ensures that you no longer get the same puzzle twice, as sometimes occurred in 3.12.

If you noticed from the screenshot, QQwing often uses more interesting symmetries than our old generator did. For the most part, I think this is exciting — symmetric puzzles are intended to be artistic — but I’m interested in comments from players on whether we should disable some of the symmetry options QQwing provides if they’re too flashy. We also need feedback on whether the difficulty levels are set appropriately; I have a suspicion that QQwing’s difficulty rating may not be as refined as our old one (when it was working properly), but I could be completely wrong: we really need player feedback to be sure.

A few features from Sudoku 3.12 did not survive the redesign, or changed significantly. Highlighter mode is now always active and uses a subtle gray instead of rainbow colors. I’m considering making it a preference in 3.16 and turning it off by default, since it’s primarily helpful for keyboard users and seems to get in the way when playing with a mouse. The old notes are now restricted to digits in the top row of the cell, and you set them by right-clicking in a square. (The Ctrl+digit shortcuts will still work.) This feels a lot better, but we need to figure out how to make notes more discoverable to users.  Most notably, the Track Additions feature is completely gone, the victim of our desire to actually ship this update. If you used Track Additions and want it back, we’d really appreciate comments in Bug #731640. Implementation help would be even better. We’d also like to bring back the hint feature, which we removed because the hints in 3.12 were only useful when an easy move exists, and not very helpful in a tough position. Needless to say, we’re definitely open to feedback on all of these changes.

Other Games

We received a Lights Off bug report that the seven-segment display at the bottom of the screen was difficult to read, and didn’t clearly indicate that it corresponded to the current level. With the magic of GtkHeaderBar, we were able to remove it. The result:

Screenshot from 2014-10-17 18:34:59 (3.12)
Screenshot from 2014-10-17 18:31:37 (3.14)

Robots was our final game (from the historical gnome-games package, so discounting Aisleriot) with a GNOME 2 menu bar. No longer:

Screenshot from 2014-10-17 18:36:36 (3.12)
Screenshot from 2014-10-17 18:42:33 (3.14)

It doesn’t look as slick as Mines or Sudoku, but it’s still a nice modernization.

I think that’s enough screenshots for one blog post, but I’ll also mention that Swell Foop has switched to using the dark theme (which blends in better with its background), Klotski grew a header bar (so now all of the historical gnome-games have a header bar as well), and Chess will now prompt the player at the first opportunity to claim a draw, allowing us to remove the confusing Claim Draw menu item and also the gear menu with it. (It’s been replaced by a Resign button.)

Easier Easy Modes

The computer player in Four-in-a-row used to be practically impossible to defeat, even on level one. Nikhar Agrawal wrote a new artificial intelligence for this game as part of his Google Summer of Code project, so now it’s actually possible to win at connect four. And beginning with Iagno 3.14.1, the computer player is much easier to beat when set to level one (the default). Our games are supposed to be simple to play, and it’s not fun when the easiest difficulty level is overwhelming.


There have also been plenty of smaller improvements to other games. In particular, Arnaud Bonatti has fixed several Iagno and Sudoku bugs, and improved the window layouts for several of our games. He also wrote a new game that will appear in GNOME 3.16.  But that has nothing to do with 3.14, so I can’t show you that just yet, now can I? For now, I will just say that it will prominently feature the Internet’s favorite animal.

Happy gaming!

Mon 2014/Oct/20

  • Together with GNOME 3.14, we have released Web 3.14. Michael Catanzaro, who has been doing an internship at Igalia for the last few months, wrote an excellent blog post describing the features of this new release. Go and read his blog to find out what we've been doing while we wait for his new blog to be syndicated to Planet GNOME.

  • I've started doing two exciting things lately. The first one is Ashtanga yoga. I had been wanting to try yoga for a long time now, as swimming and running have been pretty good for me but at the same time have made my muscles pretty stiff. Yoga seemed like the obvious choice, so after much thought and hesitation I started visiting the local Ashtanga Yoga school. After a month I'm starting to get somewhere (i.e. my toes) and I'm pretty much addicted to it.

    The second thing is that I started playing the keyboards yesterday. I used to toy around with keyboards when I was a kid but I never really learned anything meaningful, so when I saw an ad for a second-hand WK-1200, I couldn't resist and got it. After an evening of practice I already got the feel of Cohen's Samson in New Orleans and the first 16 bars of Verdi's Va, pensiero, but I'm still horribly bad at playing with both hands.

October 17, 2014

2014-10-17: Friday

  • Early to rise; quick call, mail, breakfast; continued on slideware - really thrilled to use droidAtScreen to demo the LibreOffice on Android viewer.
  • Off to the venue in the coach; prepped slides some more, gave a talk - rather a hard act to follow at the end of the previous talk: a (male) strip-tease, mercifully aborted before it went too far. Presented my slides, informed by a few recent local discussions:
    Hybrid PDF of LibreOffice under-development slides
  • Quick lunch, caught up with mail, customer call, poked Neil & Daniel, continued catching up with the mail & interaction backlog.
  • Conference ended - overall an extremely friendly & positive experience, in a lovely location - most impressed by my first trip to Brazil; kudos to the organizers; and really great to spend some time with Eliane & Olivier on their home turf.

ffs ssl

I just set up SSL/TLS on my web site. Everything can be had via, and things appear to work. However the process of transitioning even a simple web site to SSL is so clownshoes bad that it's amazing anyone ever does it. So here's an incomplete list of things that can go wrong when you set up TLS on a web site.

You search "how to set up https" on the Googs and click the first link. It takes you here which tells you how to use StartSSL, which generates the key in your browser. Whoops, your private key is now known to another server on this internet! Why do people even recommend this? It's the worst of the worst of Javascript crypto.

OK so you decide to pay for a certificate, assuming that will be better, and because who knows what's going on with StartSSL. You've heard of RapidSSL so you go to their site, and WTF, their price is 49 dollars for a stupid certificate? Your domain name was only 10 dollars, and domain name resolution is an actual ongoing service, unlike certificate issuance that just happens one time. You can't believe it so you click through to the prices to see, and you get this:


OK so I'm using Epiphany on Debian and I think that uses the system root CA list which is different from what Chrome or Firefox do but Jesus this is shaking my faith in the internet if I can't connect to an SSL certificate provider over SSL.

You remember hearing something on Twitter about cheaper certs, and oh ho ho, it's rapidsslonline, not just RapidSSL. WTF. OK. It turns out GeoTrust and RapidSSL and Verisign are all owned by Symantec anyway. So you go and you pay. Paying is the first thing you have to do on rapidsslonline, before anything else happens. Welp, cross your fingers and take out your credit card, cause SSLanta Clause is coming to town.

Recall, distantly, that SSL has private keys and public keys. To create an SSL certificate you have to generate a key on your local machine, which is your private key. That key shouldn't leave your control -- that's why the DigitalOcean page is so bogus. The certification authority (CA) then needs to receive your public key and return it signed. You don't know how to do this, because who does? So you Google and copy and paste command line snippets from a website. Whoops!

Hey neat it didn't delete your home directory, cool. Let's assume that your local machine isn't rooted and that your server isn't rooted and that your hosting provider isn't rooted, because that would invalidate everything. Oh what so the NSA and the five eyes have an ongoing program to root servers? Um, well, water under the bridge I guess. Let's make a key. You google "generate ssl key" and this is the first result.

# openssl genrsa -des3 -out foo.key 1024

Whoops, you just made a 1024-bit key! I don't know if those are even accepted by CAs any more. Happily if you leave off the 1024, it defaults to 2048 bits, which I guess is good.

Also you just made a key with a password on it (that's the -des3 part). This is eminently pointless. In order to use your key, your web server will need the decrypted key, which means it will need the password to the key. Adding a password does nothing for you. If you lost your private key but you did have it password-protected, you're still toast: the available encryption ciphers are meant to be fast, not hard to break. Any serious attacker will crack it directly. And if they have access to your private key in the first place, encrypted or not, you're probably toast already.
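As a sketch (foo.key matches the file name above; foo-encrypted.key is a hypothetical name), the saner invocation, plus the command to strip the passphrase from a key you already made, would look something like this:

```shell
# No -des3, so no pointless passphrase; 2048 bits, not 1024.
openssl genrsa -out foo.key 2048

# If you already made a passphrase-protected key, this strips the
# passphrase (you get prompted for it once):
# openssl rsa -in foo-encrypted.key -out foo.key
```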

OK. So let's say you make your key, and make what's called the "CSR" (certificate signing request), to ask for the cert.

# openssl req -new -key foo.key -out foo.csr

Now you're presented with a bunch of pointless-looking questions like your country code and your "organization". Seems pointless, right? Well now I have to live with this confidence-inspiring dialog, because I left off the organization:

Don't mess up, kids! But wait there's more. You send in your CSR, finally figure out how to receive mail for because that's what "verification" means (not, god forbid, control of the actual web site), and you get back a certificate. Now the fun starts!

How are you actually going to serve SSL? The truly paranoid use an out-of-process SSL terminator. Seems legit except if you do that you lose any kind of indication about what IP is connecting to your HTTP server. You can use a more HTTP-oriented terminator like bud but then you have to mess with X-Forwarded-For headers and you only get them on the first request of a connection. You could just enable mod_ssl on your Apache, but that code is terrifying, and do you really want to be running Apache anyway?

In my case I ended up switching over to nginx, which has a startlingly underspecified configuration language, but for which the Debian defaults are actually not bad. So you uncomment that part of the configuration, cross your fingers, Google a bit to remind yourself how systemd works, and restart the web server. Haich Tee Tee Pee Ess ahoy! But did you remember to disable the NULL authentication method? How can you test it? What about the NULL encryption method? These are actual things that are configured into OpenSSL, and specified by standards. (What is the use of a secure communications standard that does not provide any guarantee worth speaking of?) So you google, copy and paste some inscrutable incantation into your config, turn them off. Great, now you are a dilettante tweaking your encryption parameters, I hope you feel like a fool because I sure do.
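One small mercy: you can at least ask openssl what a given cipher string expands to before pasting it into your server config. A sketch, not a recommendation:

```shell
# Expand a cipher string; !aNULL and !eNULL exclude the NULL
# authentication and NULL encryption methods mentioned above.
openssl ciphers -v 'HIGH:!aNULL:!eNULL'
```

If a cipher you meant to kill still shows up in that list, your incantation didn't do what you thought it did.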

Except things are still broken if you allow RC4! So you better make sure you disable RC4, which incidentally is exactly the opposite of the advice that people were giving out three years ago.

OK, so you took your certificate that you got from the CA and your private key and mashed them into place and it seems the web browser works. Thing is though, the key that signs your certificate is possibly not in the actual root set of signing keys that browsers use to verify the key validity. If you put just your certificate on the web site without the "intermediate CA", then things probably work but browsers will make an additional request to get the intermediate CA's key, slowing down everything. So you have to concatenate the text file with your certificate and the one with the intermediate CA's key. They look the same, just a bunch of numbers, but don't get them in the wrong order because apparently the internet says that won't work!
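Concretely, the concatenation is just cat with the server certificate first. A sketch with toy self-signed stand-ins (the file names, and the certs themselves, are hypothetical; your CA gives you the real files):

```shell
# Toy stand-ins for the two files your CA gave you:
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example.com' \
    -keyout foo.key -out example.com.crt 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Toy Intermediate CA' \
    -keyout ca.key -out intermediate-ca.crt 2>/dev/null

# The actual point: your certificate first, then the intermediate CA's.
cat example.com.crt intermediate-ca.crt > chained.crt
```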

But don't put in too many keys either! In this image we have a cert for with one intermediate CA:

And here is the same but with a different root that signed the GeoTrust Global CA certificate. Apparently there was a time in which the GeoTrust cert hadn't been added to all of the root sets yet, and it might not hurt to include them all:

Thing is, the first one shows up "green" in Chrome (yay), but the second one shows problems ("outdated security settings" etc etc etc). Why? Because the link from Equifax to Geotrust uses a SHA-1 signature, and apparently that's not a good idea any more. Good times? (Poor Remy last night was doing some basic science on the internet to bring you these results.)

Or is Chrome denying you the green because it was RapidSSL that signed your certificate with SHA-1 and not SHA-256? It won't tell you! So you Google and apply snakeoil and beg your CA to reissue your cert, hopefully they don't charge for that, and eventually all is well. Chrome gives you the green.

Or does it? Probably not, if you're switching from a web site that is also available over HTTP. Probably you have some images or CSS or Javascript that's being loaded over HTTP. You fix your web site to have scheme-relative URLs (like // instead of http://), and make sure that your software can deal with it all (I had to patch Guile :P). Update all the old blog posts! Edit all the HTMLs! And finally, green! You're golden!

Or not! Because if you left on SSLv3 support you're still broken! Also, TLSv1.0, which is actually greater than SSLv3 for no good reason, also has problems; and then TLSv1.1 also has problems, so you better stick with just TLSv1.2. Except, except, older Android phones don't support TLSv1.2, and neither does the Googlebot, so you don't get the rankings boost you were going for in the first place. So you upgrade your phone because that's a thing you want to do with your evenings, and send snarky tweets into the ether about scumbag google wanting to promote HTTPS but not supporting the latest TLS version.

So finally, finally, you have a web site that offers HTTPS and HTTP access. You're good right? Except no! (Catching on to the pattern?) Because what happens is that people just type in web addresses to their URL bars like "" and leave off the HTTP, because why type those stupid things. So you arrange for to redirect for users that have visited the HTTPS site. Except no! Because any network attacker can simply strip the redirection from the HTTP site.
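For completeness, the redirect itself is tiny; with nginx (a sketch, assuming the nginx setup from earlier) it's one server block, though as just noted it does nothing against an attacker who controls the plain-HTTP side:

```nginx
# Redirect every plain-HTTP request to the HTTPS site.
server {
    listen 80;
    return 301 https://$host$request_uri;
}
```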

The "solution" for this is called HTTP Strict Transport Security, or HSTS. Once a visitor visits your HTTPS site, the server sends a response that tells the browser never to fetch HTTP from this site. Except that doesn't work the first time you go to a web site! So if you're Google, you friggin add your name to a static list in the browser. EXCEPT EVEN THEN watch out for the Delorean.

And what if instead they go to instead of the that you configured? Well, better enable HSTS for the whole site, but to do anything useful with such a web request you'll need a wildcard certificate to handle the multiple URLs, and those run like 150 bucks a year, for a one-bit change. Or, just get more single-domain certs and tack them onto your cert, using the precision tool cat, but don't do too many, because if you do you will overflow the initial congestion window of the TCP connection and you'll have to wait for an ACK on your certificate before you can actually exchange keys. Don't know what that means? Better look it up and be an expert, or your wobsite's going to be slow!

If your security goals are more modest, as they probably are, then you could get burned the other way: you could enable HSTS, something could go wrong with your site (an expired certificate perhaps), and then people couldn't access your site at all, even if they have no security needs, because HTTP is turned off.

Now you start to add secure features to your web app, safe with the idea you have SSL. But better not forget to mark your cookies as secure, otherwise they could be leaked in the clear, and better not forget that your website might also be served over HTTP. And better check up on when your cert expires, and better have a plan for embedded browsers that don't have useful feedback to the user about certificate status, and what about your CA's audit trail, and better stay on top of the new developments in security! Did you read it? Did you read it? Did you read it?
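The cookie flags in question live on the Set-Cookie header; a sketch (the cookie name and value are hypothetical):

```
Set-Cookie: sessionid=deadbeef; Secure; HttpOnly
```

Secure keeps the cookie off plain-HTTP requests; HttpOnly keeps it away from scripts.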

It's a wonder anything works. Indeed I wonder if anything does.

XNG: GIFs, but better, and also magical

It might seem like the GIF format is the best we’ll ever see in terms of simple animations. It’s a quite interesting format, but it doesn’t come without its downsides: quite old LZW-based compression, a limited color palette, and no support for using old image data in new locations.

Two competing specifications for animations were developed: APNG and MNG. The two camps have fought wildly and we’ve never gotten a resolution, and different browsers support different formats. So, for the widest range of compatibility, we have just been using GIF… until now.

I have developed a new image format which I’m calling “XNG”, which doesn’t have any of these restrictions, and has the possibility to support more complex features, and works in existing browsers today. It doesn’t require any new features like <canvas> or <video> or any JavaScript libraries at all. In fact, it works without any JavaScript enabled at all. I’ve tested it in both Firefox and Chrome, and it works quite well in either. Just embed it like any other image, e.g. <img src="myanimation.xng">.

It’s magic.

Have a few examples:

I’ve been looking for other examples as well. If you have any cool videos you’d like to see made into XNGs, write a comment and I’ll try to convert them. I wrote all of these XNG files out by hand.

Over the next few days, I’ll talk a bit more about XNG. I hope all you hackers out there look into it and notice what I’m doing: I think there’s certainly a lot of unexplored ideas in what I’ve developed. We can push this envelope further.

EDIT: Yes, guys, I see all your comments. Sorry, I’ve been busy with other stuff, and haven’t gotten a chance to moderate all of them. I wasn’t ever able to reproduce the bug in Firefox about the image hanging, but Mario Klingemann found a neat trick to get Firefox to behave, and I’ve applied it to all three XNGs above.

October 16, 2014

2014-10-16: Thursday

  • To the venue, crazy handing out of collateral, various talks with people; Advisory Board call, LibreOffice anniversary Cake cutting and eating (by massed hordes).
  • It is extraordinary and encouraging to see how many young ladies are at the conference, and (hopefully) getting engaged with Free Software: never seen so many at other conferences. As an unfortunate downside: was amused to fob off an unsolicited offer of marriage from a 15-year-old: hmm.
  • Chewed some mail, bus back in the evening; worked on slides until late, for talk tomorrow.

The Wait Is Over: MimeKit and MailKit Reach 1.0

After about a year in the making for MimeKit and nearly 8 months for MailKit, they've finally reached 1.0 status.

I started really working on MimeKit about a year ago wanting to give the .NET community a top-notch MIME parser that could handle anything the real world could throw at it. I wanted it to run on any platform that can run .NET (including mobile) and do it with remarkable speed and grace. I wanted to make it such that re-serializing the message would be a byte-for-byte copy of the original so that no data would ever be lost. This was also very important for my last goal, which was to support S/MIME and PGP out of the box.

All of these goals for MimeKit have been reached (partly thanks to the BouncyCastle project for the crypto support).

At the start of December last year, I began working on MailKit to aid in the adoption of MimeKit. It became clear that without a way to inter-operate with the various types of mail servers, .NET developers would be unlikely to adopt it.

I started off implementing an SmtpClient with support for SASL authentication, STARTTLS, and PIPELINING support.

Soon after, I began working on a Pop3Client that was designed such that I could use MimeKit to parse messages on the fly, directly from the socket, without needing to read the message data line-by-line looking for a ".\r\n" sequence, concatenating the lines into a massive memory buffer before I could start to parse the message. This fact, combined with the fact that MimeKit's message parser is orders of magnitude faster than any other .NET parser I could find, makes MailKit the fastest POP3 library the world has ever seen.

After a month or so of avoiding the inevitable, I finally began working on an ImapClient; it took me roughly two weeks to produce the initial prototype (compared to a single weekend for each of the other protocols). After many months of implementing dozens of the more widely used IMAP4 extensions (including the GMail extensions) and tweaking the APIs (along with bug fixing) thanks to feedback from some of the early adopters, I believe that it is finally complete enough to call 1.0.

In July, at the request of someone involved with a number of the IETF email-related specifications, I also implemented support for the new Internationalized Email standards, making MimeKit and MailKit the first - and only - .NET email libraries to support these standards.

If you want to do anything at all related to email in .NET, take a look at MimeKit and MailKit. I guarantee that you will not be disappointed.

Communities in Real Life

Add this to the list of things I never expected to be doing: opening a grocery store.

At last year’s Open Help Conference, I gave a talk titled Community Lessons From IRL. I told the story of how I got involved in opening a grocery store, and what I’ve learned about community work when the community is your neighbors.

I live in Cincinnati, in a beautiful, historic, walkable neighborhood called Clifton. We pride ourselves on being able to walk to get everything we need. We have a hardware store, a pharmacy, and a florist. We have lots of great restaurants. We had a grocery store, but after generations of serving the people of Clifton, our neighborhood IGA closed its doors nearly four years ago.

The grocery store closing hurt our neighborhood. It hurt our way of life. Other shops saw their business decline. Quite a few even closed their doors. At restaurants and coffee houses and barber shops, all anybody talked about was the grocery store being closed. When will it reopen? Has anybody contacted Trader Joe’s/Whole Foods/Fresh Market? Somebody should do something.

“Somebody should do something” isn’t doing something.

If there’s one thing I’ve learned from over a decade of working in open source, it’s that things only get done when people get up and do them. Talk is cheap, whether it’s in online forums or in the barber shop. So a group of us got up and did something.

Last August, a concerned resident sent out a message that if anybody wanted to take action, she was hosting a gathering at her house. Sometimes just hosting a gathering at your house is all it takes to get the ball rolling. Out of that meeting came a team of people committed to bringing a full-service grocery store back to Clifton as a co-op, owned and controlled by the community.

Thus was born Clifton Market.

Clifton Market display in the window of the vacant IGA building


For the last 14 months, I’ve spent whatever free time I could muster trying to open a grocery store. Along with an ever-growing community of volunteers, I’ve surveyed the neighborhood, sold shares, created a business plan, talked to contractors, negotiated real estate, and learned far more about the grocery industry than I ever expected. In many ways, I’ve been well-served by my experience working with volunteer communities in GNOME and other projects. But a lot of things are different when the project is in your backyard staring you down each day.

Opening a grocery store costs money, and we’ve been working hard on raising the money through shares and owner loans. If you want to support our effort, you can buy a share too.

Thu 2014/Oct/16

My first memories of Meritähti are from that first weekend, in late August 2008, when I had just arrived in Helsinki to spend what was supposed to be only a couple of months doing GTK+ and Hildon work for Nokia. Lucas, who was still in Finland at the time, had recommended that I check the program for the Night of the Arts, an evening that serves as the closing of the summer season in the Helsinki region and consists of several dozens of street stages set up all over with all kind of performances. It sounded interesting, and I was looking forward to check the evening vibe out.

I was at the Ruoholahti office that Friday, when Kimmo came over to my desk to invite me to join his mates for dinner. Having the Night of the Arts in mind, I suggested we grab some food downtown before finding an interesting act somewhere, to which he replied emphatically "No! We first go to Meritähti to eat, a place nearby — it's our Friday tradition here." Surprised at the tenacity of his objection and being the new kid in town, I obliged. I can't remember now who else joined us in that summer evening before we headed to the Night of the Arts, probably Jörgen, Marius, and others, but that would be the first of many more to come in the future.

I started taking part in that tradition and I always thought, somehow, that those Meritähti evenings would continue for a long time. Because even after the whole Hildon team was dismantled, even after many of the people in our gang left Nokia and others moved on to work on the now also defunct MeeGo, we still met in Meritähti once in a while for food, a couple of beers, and good laughs. Even after Nokia closed down the Ruoholahti NRC, even after everyone I knew had left the company, even after the company was sold out, and even after half the people we knew had left the country, we still met there for a good old super-special.

But those evenings were not bound to be eternal, and like most good things in life, they are coming to an end. Meritähti is closing in the next weeks, and the handful of renegades who stuck around in Helsinki will have to find a new place to spend our evenings together. László, the friendly Hungarian who ran the place with his family, is moving on to less stressful endeavors. Keeping a bar is too much work, he told us, and everyone has the right to one day say enough. One would want to do or say something to change his mind, but what right do we have? We should instead be glad that the place was there for us and that we had the chance to enjoy uncountable evenings under the starfish lamps that gave the place its name. If we're feeling melancholic, we will always have Kaurismäki's Lights in the Dusk and that glorious scene involving a dog in the cold, to remember one of those many times when conflict would ensue whenever a careless dog-owner would stop for a pint in the winter.

Long live Meritähti, long live László, and köszönöm!

October 15, 2014

A Londoner on Losing the YES Vote on BBC 6 O'Clock News!

Here I am backing up my words about losing the YES vote with concrete action... well, sort of: today I made an appearance on prime-time TV and said some words in support of Nicola Sturgeon of the SNP leading the next phase of the struggle for Scottish Independence.

I am trying to record a screencast of the relevant item because the direct BBC video link expires tomorrow and might not be available abroad either. Unfortunately, the instructions in this Google+ article no longer work for me. Does anyone in GNOME have any ideas on how I might sort that out, using GNOME Shell 3.14.0 on Fedora 20?

Thanks to one kind soul who has uploaded the item online, it can be viewed outside the UK and it will not expire tomorrow! At some point I would still like to solve the mystery of the screencast sound (if anyone has any ideas about that?), but for now, at least: Bairns not Bombs!


GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software?” and until today my answer has been something along the lines of “mrrr, it’s complicated”.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summary and descriptions were not translated and hard to modify. We used the pre-0.6 format AppData files as the MetaInfo specification had not existed when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are my notes on what I’ve got so far:

Font components are supersets of font faces, so we include together fonts that make a cohesive set; for instance, “SourceCode” would consist of “SourceCodePro“, “SourceSansPro-Regular“ and “SourceSansPro-ExtraLight“. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in an application designing something. Font components need a one-line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>Liberation</id>
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <url type="homepage"></url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end-users who probably don’t know what TTF means or what MSCoreFonts are.
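As a rough illustration of those two constraints (this is not official AppStream tooling, just a hedged sketch), a file can be sanity-checked for a one-line summary and at most three description paragraphs like so:

```python
# Hypothetical sanity check for the constraints above: a one-line
# <summary> and at most three <p> paragraphs in <description>.
# Not part of any official appstream tool; just an illustration.
import xml.etree.ElementTree as ET

def check_metainfo(xml_text):
    root = ET.fromstring(xml_text)
    problems = []
    summary = root.findtext("summary") or ""
    if "\n" in summary.strip():
        problems.append("summary must be a single line")
    if len(root.findall("./description/p")) > 3:
        problems.append("description has more than 3 paragraphs")
    return problems

example = """<component type="font">
  <id>Liberation</id>
  <summary>Open source versions of several commercial fonts</summary>
  <description><p>Replacements for several commercial fonts.</p></description>
</component>"""
print(check_metainfo(example))  # []
```

An empty list means the file passes both checks; the real validation rules live in the MetaInfo specification.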

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split up into subpackages by a packager. In this case, each subpackage needs to ship something like this into /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>LiberationSerif</id>
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

October 14, 2014

Tracker – What do we do now we’re stable?


Over the past month or two, I’ve spent time working on various feature branches for Tracker. This comes after the 1.2 stable release and the new feature set that came with it.

So a lot has been going on with Tracker internally. I’ve been relatively quiet on my blog of late and I thought it would be a good idea to run a series of blogs relating to what is going on within the project.

Among my blogs, I will be covering:

  • What features did we add in Tracker 1.2 – how can they benefit you?
  • The difference between URIs, URNs, URLs and IRIs – dispelling the confusion behind some of the bugs we’ve had reported
  • Making Tracker more Git-like – we’re moving towards a new ‘git’ style command line with some new features on the way
  • Preparing for the divorce – is it time to finally split tracker-store, the ontologies and the data-miners?
  • Making Tracker even more idle – using cgroups and perhaps keyboard/mouse idle notifications

If anyone has any questions or concerns they would like to see answered in articles around these subjects, please comment below and I will do my best to address them! :)

.NET Foundation: Forums and Advisory Council

Today, I want to share some news from the .NET Foundation.

Forums: We are launching the official .NET Foundation forums to engage with the larger .NET community and to start the flow of ideas on the future of .NET, the community of users of .NET, and the community of contributors to the .NET ecosystem.

We are using the powerful Discourse platform for the forums. Come join us!

Advisory Council: We want to make the .NET Foundation open and transparent. To achieve that goal, we decided to create an advisory council. But we need your help in shaping the advisory council: its role, its reach, its obligations and its influence on the foundation itself.

To bootstrap the discussion, we have a baseline proposal that was contributed by Shaun Walker. We want to invite the larger .NET community to a conversation about this proposal and help us shape the advisory council.

Check out the Call for Public Comments which has a link to the baseline proposal and come join the discussion at the .NET Forums.

GNOME Summit wrap-up

Day 3

The last day of the Summit was a hacking-focused day.

In one corner, Rob Taylor was bringing up Linux and GNOME on an ARM tablet. Next to him, Javier Jardón was working on a local gnome-continuous instance. Michael Catanzaro and Robert Schroll were working on D-Bus communication to let Geary use WebKit2. Christian Hergert was adding editor preferences to gnome-builder and reviewed design ideas with Ryan Lerch. I spent most of the day teaching Glade about new widgets in GTK+. I also merged OpenGL support for GTK+.

Overall, a quite productive day.

Thanks again to the sponsors


and to Walter Bender and MIT for hosting us.


October 13, 2014

quiet strain

as promised during GUADEC, I’m going to blog a bit more about the development of GSK — and now that I have some code, it’s actually easier to do.

so, let’s start from the top, and speak about GDK.

in April 2008 I was in Berlin, enjoying the city, the company, and good food, and incidentally attending the first GTK+ hackfest. those were the days of Project Ridley, when the plan for GTK+ 3.0 was to release without deprecated symbols and with all the instance structures sealed.

in the long discussions about the issue of a “blessed” canvas library to be used by GTK app developers and by the GNOME project, we ended up discussing the support of the OpenGL API in GDK and GTK+. the original bug had been opened by Owen about 5 years prior, and while we had ancillary libraries like GtkGLExt and GtkGLArea, the integration was a pretty sore point. the consensus at the end of the hackfest was to provide wrappers around the platform-specific bits of OpenGL inside GDK, enough to create a GL context and bind it to a specific GdkWindow, to let people draw with OpenGL commands at the right time in the drawing cycle of GTK+ widgets. the consensus was also that I would look at the bug, as a person that at the time was dealing with OpenGL inside tool kits for his day job.

well, that didn’t really work out: cut to 6 years after that hackfest, and the bug is still open.

to be fair, the landscape of GTK and GDK has changed a lot since those days. we actually released GTK+ 3.0, and with a lot more features than just deprecations removal; the whole frame cycle is much better, and the paint sequence is reliable and completely different than before. yet, we still have to rely on poorly integrated external libraries to deal with OpenGL.

right after GUADEC, I started hacking on getting the minimal amount of API necessary to create a GL context, and being able to use it to draw on a GTK widget. it turns out that it wasn’t that big of a job to get something on the screen in a semi-reliable way — after all, we already had libraries like GtkGLExt and GtkGLArea living outside of the GTK git repository that did that, even if they had to use deprecated or broken API. the complex part of this work involved being able to draw GL inside the same infrastructure that we currently use for Cairo. we need to be able to synchronise the frame drawing, and we need to be able to blend the contents of the GL area with both content that was drawn before and after, likely with Cairo — otherwise we would not be able to do things like drawing an overlay notification on top of the usual spinning gears, while keeping the background color of the window:

welcome to the world of tomorrow (for values of tomorrow close to 2005)

luckily, thanks to Alex, the amount of changes in the internals of GDK was kept to a minimum, and we can enjoy GL rendering running natively on X11 and Wayland, using GLX or EGL respectively.

on top of the low level API, we have a GtkGLArea widget that renders all the GL commands you submit to it, and it behaves like any other GTK+ widget.

today, Matthias merged the topic branch into master, which means that, barring disastrous regressions, GTK+ 3.16 will finally have native OpenGL support — and we’ll be one step closer to GSK as well.

right now, there’s still some work to do — namely: examples, performance, documentation, porting to MacOS and Windows — but the API is already fairly solid, so we’d all like to get feedback from the users of libraries like GtkGLExt and GtkGLArea, to see what they need or what we missed. feedback is, as usual, best directed at the gtk-devel mailing list, or on the #gtk+ IRC channel.

Technology Catchup

Coincidentally, three different people asked me in the last month to write about the new technologies they should know to make themselves more eligible for a job in a startup. All these people have been C/C++ programmers in big established companies for about a decade now. Some of them have had only glimpses of modern technologies.

I have tried a little bit (with moderate success) to work in all layers of programming with most of the popular modern technologies, by writing little-more-than-trivial programs (long before I heard of the fancy title "full stack developer"). So here I am writing a "technology catchup" post, hoping that it may be useful for some people, who want to know what has happened in the technologies in the last decade or so.

Disclaimer 1: The opinions expressed here are entirely my own biases. You should work with the individual technologies to know their true merits.

Disclaimer 2: Instead of learning everything, I personally recommend that people pick whatever they feel connected to. I, for example, could not feel connected to node.js even after toying with it for a while, but fell in love with Go. Tastes differ and nothing is inferior. So give everything a good try and pick your choice. Also remember what Donald Knuth said: "There is a difference between knowing the name of something and knowing something". So learn deeply.

Disclaimer 3: From whatever I have observed, getting hired in a startup is more about being in the right circles of connections than being a technology expert. A surprisingly large number of startups start with familiar technology rather than the right technology, and then change their technology once the company is established.

Disclaimer 4: This is not a complete list of things one should know. These are just things that I have come across and experimented with at least a little. There are a lot more interesting things that I have missed. If you feel something should have been on the list, please comment :-)

With those disclaimers out of the way, let us cut to the chase.

Version Control Systems

The most prominent change in the open source arena in the last decade or so is the invention of Git. It is a version control system initially designed for managing the Linux kernel sources, and it has since become the de-facto VCS for most modern companies and projects.

GitHub is a website that allows people to host their open source projects. Startups often recruit people based on their GitHub profile. Even big companies like Microsoft, Google, Facebook, Twitter, Dropbox etc. have their own GitHub accounts. I personally have received more job queries through my GitHub projects than via my LinkedIn profile in the last year.

Bitbucket is another site that allows people to host code, and it even offers private repos. A lot of the startups that I know of use it, along with the JIRA project management software. This is your equivalent of MS Project in some sense.

I have observed that most of the startups founded by people who come from banking or finance companies use Subversion. Git is the choice for people from tech companies though. Mercurial is another open source, distributed VCS, which has lost much of its limelight in recent times to Git. Fossil is another VCS, from the author of sqlite, Dr. Richard Hipp. If you can learn only one VCS for now, start with Git.

Programming Languages & Frameworks

JavaScript has evolved into a leading programming language of the last decade. It is even referred to as the x86 of the web. From its humble beginnings as a client-side scripting language used to validate whether the user typed a number or text, it has grown into a behemoth and even entered server-side programming through the node.js framework. For incorporating the ModelViewController pattern, JavaScript has gained the AngularJS framework. JS is a dynamically typed language, and languages that compile to it, such as CoffeeScript, have appeared to paper over some of its rough edges.

Python is another dynamically typed, interpreted programming language. Personally, I find it a lot more tasteful than JavaScript. It is easy on the eyes too. It helps with rapid application development and is available by default in almost all Linux distros and on Macs. Django is a web framework built on Python to make it easy to develop web applications. In addition to being used in a lot of startups, it is used in big companies like Google and Dropbox. There are variants of the Python runtime that let you run it on the JVM using Jython or on the .NET CLR using IronPython. I have personally found the language to be lacking in performance though, which is elaborated on in a subsequent section.

Ruby is an old programming language that shot to fame in recent years through the popular web application framework Ruby on Rails, often called just Rails. I learnt a lot of engineering philosophies, such as DRY and CoC (convention over configuration), while learning RoR.

All the above languages and frameworks use a package manager, such as npm, Bower, pip, gems etc., to install libraries easily.

Go is my personal favorite among the new languages to learn. I see Go becoming as vital and prominent a programming language as C, C++ or Java in the next decade. It was developed at Google for creating large scale systems. It is a statically-typed, automatic-memory-managed language that generates native machine code and makes writing concurrent code easy.

Go is the default language that I have used for any programming task in the last year or so. It is amazingly fast even though (or because?) it is still in the 1.X series. In my dayjob we did a prototype in both Go and Python, and for a highly concurrent workflow on the same hardware, Go trounced Python in performance (20 seconds vs 5 minutes). I won't be surprised if a lot of Python and Ruby code gets converted to Go in the next round of rewrites. Personally, I have found the quality of Go libraries to be much higher compared to Ruby or node.js as well, probably because not everyone has adopted the language yet. However, this could be just my personal biased opinion.

If you like to get fancy with functional programming, you can learn Scala (on the JVM), F# (on .NET), Haskell, Erlang, etc. The last two are very old, by the way, but in use even today; most recently, WhatsApp was known to use Erlang. D is also seen in the news, mostly thanks to Facebook. Dart is another language from Google, but it has yet to receive any wide deployment afaik, even with Google's massive marketing machinery behind it. It has been compared to VBScript, has been criticized, and is as of now Chrome-only; people from Mozilla, WebKit (the rendering engine that powers Safari, and earlier Chrome) and Microsoft IE have criticized it as well. Dart is done by Lars Bak et al., the people who gave us V8, Chrome's JavaScript engine.

Rust is another programming language aimed at high-performance concurrent systems, but I have not played around with it, as it does not maintain a stable API and is not 1.0 yet. Julia is another programming language, aimed at distributed systems, about which I have heard a lot of praise, but it still remains an exotic language afaik. R is another language, which I have seen in a lot of corporate demos where the presenters wanted to show statistics and charts. Learning it may be useful even if you are not a programmer but work with numbers (like a project manager).

There is a Swift programming language from Apple to write iOS apps. I have not tried Swift yet, but from my experience of using Objective C, it cannot be worse.

Bootstrap is a nice web framework from Twitter which provides various GUI elements that you can incorporate into your application to rapidly prototype beautiful applications that stay fluid even when viewed on mobile.

jQuery is a popular JavaScript library that is ubiquitous. Cascading Style Sheets (CSS for short) is a stylesheet language that helps configure the look of web page UI elements. CSS has matured to the extent of supporting animations too. You should ideally spend a few weeks learning HTML5 and CSS.

Text Editors

Sublime Text is what the cool kids use as their editor these days. I have found the tutorial on Tuts+ to be extraordinarily good at explaining Sublime. It is free (as in beer) software but not open source.

Atom is a text editor from GitHub built using node.js and Chromium. I did not find a Linux binary and so did not bother to investigate it. But I have heard it is better suited to JavaScript programmers than others, as the editor can be extended with JavaScript itself.

Brackets is another editor that I have heard good things about. Lime is an editor developed in Go, aimed at being an open-source replacement for Sublime Text.

Personally, after trying various text editors, I have always come back to using vim. There are a few good plugins for vim these days. Vundle and Pathogen are nice plugin managers for vim that ease the installation of plugins. YouCompleteMe is a nice plugin for auto-completion. spf13-vim is a nice distribution of vim where various plugins and colorschemes come pre-packaged.

Distributed Computing

In the modern day of computing, most programs are driven by a Service Oriented Architecture (SOA for short). Webservices are the preferred way of communication among servers as well. While we are talking about services, please read this nice piece by Steve Yegge.

memcached is a distributed (across multiple machines) caching system which can be used in front of your database. It was initially developed by Brad Fitzpatrick while he was running LiveJournal, and he is now (2014) a member of the Go team at Google. While at Google, he started groupcache which, as the project page says, is a replacement for memcached in many cases.
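The way memcached typically sits in front of a database is the cache-aside pattern; here is a minimal sketch, with a plain dict standing in for a real memcached client and a hypothetical load_from_db function playing the part of the database:

```python
# Cache-aside sketch: try the cache first, fall back to the database,
# then populate the cache. A dict stands in for a real memcached client.
cache = {}

def load_from_db(key):
    # Hypothetical expensive lookup; a real app would query its database.
    return "value-for-" + key

def get(key):
    if key in cache:           # cache hit: no database round-trip
        return cache[key]
    value = load_from_db(key)  # cache miss: go to the database...
    cache[key] = value         # ...and remember the result
    return value

get("user:42")         # miss: loaded from the "database"
print(get("user:42"))  # hit: served from the cache
```

A real deployment adds expiry times and cache invalidation on writes, but the read path is essentially this.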

GoogleFileSystem (GFS) is a seminal paper on how Google created a filesystem to suit their large-scale data processing needs. There is a database built on top of this filesystem, named BigTable, which powered Google's infrastructure. Apache Hadoop is an open source implementation of these concepts, originally started at Yahoo and now a top-level Apache project. HDFS is Hadoop's equivalent of GFS. Hive and Pig are technologies to query and analyze data in Hadoop.

As with the evolution of any software, GFS has evolved into the Colossus filesystem and BigTable has evolved into the Spanner distributed database. I recommend reading these papers even if you are not going to do any distributed computing development.

Cassandra is another distributed database, started at Facebook but used in many companies such as Netflix and Twitter. I have used Cassandra more than any other distributed project and actually like it a lot. It uses an SQL-like query language called CQL, the Cassandra Query Language. It is modelled after the Dynamo paper from Amazon. I am tempted to write an alternative to it in Go, just to have the experience of writing a large scale distributed system instead of just using one as a client, but I have not got around to finding a good dataset or usecase to test it with.
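The core idea Cassandra inherits from the Dynamo paper is consistent hashing: keys are placed on a hash ring and owned by the nearest node clockwise, so adding or removing a node moves only a fraction of the keys. A toy sketch of the ring (an illustration of the idea, not Cassandra's actual partitioner):

```python
# Toy consistent-hash ring in the spirit of the Dynamo paper: a key is
# owned by the first node at or after its hash position, wrapping around.
import bisect
import hashlib

def _hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # one point per node; real systems use many virtual nodes each
        self.points = sorted((_hash(n), n) for n in nodes)

    def owner(self, key):
        h = _hash(key)
        # index of the first node point at or after h, wrapping the ring
        i = bisect.bisect(self.points, (h, ""))
        return self.points[i % len(self.points)][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.owner("some-row-key"))
```

Production systems layer replication and virtual nodes on top, but the placement rule is this simple.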

MongoDB is a document-oriented database, which I tried using for a pet project of mine. I don't remember exactly, but there were some problems with unicode handling. The project was done before Go reached 1.0, so the problem could have been at either end.

Most of the new-age databases are called NoSQL databases, but what that really means is that the database skips a lot of features (such as datatype validation, stored procedures, etc.) and tries to grow by scaling out instead of scaling up.


OpenStack is a suite of open source projects that help you create a private cloud. DeltaCloud is a project initially started by Red Hat, and now an Apache top-level project, that provides a single API layer working across any cloud backend. The project is written in Ruby. I was initially interested in participating in its development, until I got introduced to Go and moved off on a different tangent.

Starting a software company is a very easy task in today's world. The public clouds are becoming cheaper every day and their capacity can be provisioned instantly.

Amazon Web Services provides an umbrella of various public cloud offerings. I have used Amazon EC2, which is a way to create a Linux (or Windows) VM that runs in Amazon's datacenters. The machines come in various sizes. Amazon S3 is a cloud offering that provides a way to store data in buckets. It is used heavily by Dropbox for storing all your data. There are various other services too. In some of our prototyping, we found the performance of Amazon EC2 to be mostly consistent, even in the free tier.

Google is not lagging behind with their cloud offerings either. When Google Reader was shut down, I used Google's Appengine to deploy an alternative FOSS product and I was blown away by the simplicity of creating applications on top of it. Google Compute is the way to get VMs running on the Google Cloud. As with Amazon, there are plenty of other services too.

There are plenty of other players, like Microsoft Azure, Heroku etc., but I do not have any experience with their offerings. While we are talking about the cloud, you should probably read about orchestration and know about at least ZooKeeper.

In-Process Databases

These are databases which you can embed into your application without needing a dedicated server. They run in your process space.

sqlite claims to be the world's most widely deployed database engine, and it competes with fopen to be the default way to store data for your desktop applications (if you are still writing them ;) ). A new branch is also coming with the latest rage in storage data structures, a log-structured merge tree.
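Because sqlite is embedded, using it needs no server at all; Python, for one, ships a binding in its standard library. A minimal example:

```python
# Minimal embedded-database example: sqlite runs inside your process,
# so there is nothing to install or administer.
import sqlite3

conn = sqlite3.connect(":memory:")  # or a file path for persistence
conn.execute("CREATE TABLE todo (id INTEGER PRIMARY KEY, task TEXT)")
conn.execute("INSERT INTO todo (task) VALUES (?)", ("write a toy app",))
conn.commit()

rows = conn.execute("SELECT task FROM todo").fetchall()
print(rows)  # [('write a toy app',)]
conn.close()
```

Swap ":memory:" for a filename and the same code gives you a persistent single-file database.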

leveldb is a database written by the eminent Googlers (and trendsetters of technology in the last decade or so) Jeff Dean and Sanjay Ghemawat, who gave us MapReduce, GFS etc. It has been forked by Facebook into RocksDB as well.

Kyoto Cabinet and LMDB are other projects in this space.

Linux Filesystems

Since we covered GFS, HDFS, etc. earlier, we will look at other popular filesystems here.

btrfs is a copy-on-write filesystem for Linux. It is intended to become the de-facto Linux filesystem in the future, possibly obsoleting the ext series in the longer run.

XFS is a filesystem that originally came to Linux from SGI. It is my personal favorite and I have been using it on all my Linux machines. In addition to good performance, it offers robustness and comes with a load of features that are useful to me, like defragmentation.

We also have the big daddy of filesystems, ZFS, on Linux too.

Ceph is another interesting distributed filesystem that works in kernel space and has been merged into the Linux kernel sources for a long time now. GlusterFS is another distributed filesystem, which works in userspace. Both of these filesystems focus on scaling out instead of scaling up.


Pick any of these technologies that you like and start writing a toy application with it, maybe something as simple as a ToDo application, and learn through all the stages. This approach has helped me; it may help you too.

I have written this post on a Thinkpad T430 running openSUSE Factory and GNOME Shell with a bunch of KDE tools. I like this machine. However, in the past few months I have come to realize that, in today's world, if you are a developer, it is best to run Linux on your server and a Mac as your laptop.

October 12, 2014

The GNOME Infrastructure’s FreeIPA move behind the scenes

A few days ago I wrote about the GNOME Infrastructure moving to FreeIPA. That post was mainly an announcement to the involved parties, with many details to help contributors migrate their account details from the old authentication system to the new one. Today’s post is a follow-up that looks at the reasons behind our choice of FreeIPA, what we found interesting and compelling about the software, and why we think more projects (big or small) should migrate to it. Additionally, I’ll provide some details about how I performed the migration from our previous OpenLDAP setup, with a step-by-step guide that will hopefully help more people migrate the infrastructures they manage.

The GNOME case

It’s very clear to everyone that an infrastructure should reflect the needs of its user base: in the case of GNOME, a multitude of developers, translators and documenters, and among them a very good number of Foundation members, contributors who have proven their non-trivial contributions and have received the status of GNOME Foundation members with all the benefits attached to it.

The situation we had before was very tricky: LDAP accounts were managed through our LDAP instance, while Foundation members were stored in a MySQL database, with many of the tables related to the yearly Board of Directors elections and one specifically meant to store the information of each member. One of the available fields of that table, ‘userid’, was supposed to store the LDAP ‘uid’ field, which the Membership Committee member processing an application had to update when accepting it. This procedure had two issues:

  1. Membership Committee members had no access to LDAP information
  2. No checks were run in the web UI to verify the ‘userid’ field was populated correctly, resulting in multiple inconsistencies between LDAP and the MySQL database

In addition to the above, Mango (the software that helped the GNOME administrative teams manage user data for many years) had no maintainer, no commits to its core since 2008, and several limitations.

What were we looking for as a replacement for the current setup?

It was very obvious to me that we would have to look around for possible replacements for Mango. What we were aiming for was software with the following characteristics:

  1. It had to come with a pre-built web UI providing a wide view on several LDAP fields
  2. The web UI had to be extensible in some form as we had some custom LDAP schemas we wanted users to see and modify
  3. The software had to be actively developed and responsive to security reports (given the high impact a breach of LDAP could have)

FreeIPA clearly matched all our expectations on all the above points.

The Migration process – RFC2307 vs RFC2307bis

Our previous OpenLDAP setup followed RFC 2307, which means that, among the other available LDAP attributes (listed in the RFC under point 2.2), group membership was represented through the ‘memberUid’ attribute. An example:

dn: cn=foundation,ou=groups,dc=gnome,dc=org
cn: foundation
objectClass: posixGroup
objectClass: top
gidNumber: 524
memberUid: foo
memberUid: bar
memberUid: foobar

As you can see, each member of the group ‘foundation’ is represented using the ‘memberUid’ attribute followed by the ‘uid’ of the user itself. FreeIPA does not directly use RFC2307 for its trees, but RFC2307bis instead. (RFC2307bis was never published as an RFC by the IETF, as neither the author nor the companies (HP, Sun) that later adopted it decided to pursue it.)

RFC2307bis uses a different attribute to represent group membership: ‘member’. Another example:

dn: cn=foundation,cn=groups,cn=accounts,dc=gnome,dc=org
cn: foundation
objectClass: posixGroup
objectClass: top
gidNumber: 524
member: uid=foo,cn=users,cn=accounts,dc=gnome,dc=org
member: uid=bar,cn=users,cn=accounts,dc=gnome,dc=org
member: uid=foobar,cn=users,cn=accounts,dc=gnome,dc=org

As you can see, the DN representing the group ‘foundation’ and the way its members are listed differ between the two examples. That is why FreeIPA comes with a Compatibility plugin (cn=compat), which automatically creates RFC2307-compliant trees and entries whenever an add / modify / delete operation happens on any of the hosted RFC2307bis-compliant trees. What’s the point of doing this when we could just stick with RFC2307bis trees? As the name points out, the Compatibility plugin is there to prevent breakage between the directory server and any clients or software out there still retrieving information using the ‘memberUid’ attribute as specified in RFC2307.
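The rewrite the migration performs (and that the compat plugin performs in the opposite direction) can be sketched as turning each ‘memberUid’ value into a full DN under the users container. This is a rough illustration of the transformation, not FreeIPA’s actual code:

```python
# Sketch of the RFC2307 -> RFC2307bis rewrite: each memberUid value
# becomes a full member DN under the users container.
# Illustration only; not FreeIPA's real implementation.
USER_CONTAINER = "cn=users,cn=accounts,dc=gnome,dc=org"

def to_rfc2307bis(group):
    out = {k: v for k, v in group.items() if k != "memberUid"}
    out["member"] = [
        "uid=%s,%s" % (uid, USER_CONTAINER)
        for uid in group.get("memberUid", [])
    ]
    return out

group = {
    "cn": ["foundation"],
    "gidNumber": ["524"],
    "memberUid": ["foo", "bar", "foobar"],
}
print(to_rfc2307bis(group)["member"][0])
# uid=foo,cn=users,cn=accounts,dc=gnome,dc=org
```

The real migration also rewrites the group DNs themselves and preserves the remaining attributes, but the member rewriting is the heart of it.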

FreeIPA’s migration tool (ipa migrate-ds) comes with a ‘--schema’ flag you can use to specify which schema the instance you are migrating from was following (values are RFC2307 and RFC2307bis, as you may have guessed already). In the case of GNOME, the complete command we ran (after installing all the relevant tools through ‘ipa-server-install’ and copying the custom schemas under /etc/dirsrv/slapd-$INSTANCE-NAME/schema) was:

ipa migrate-ds --bind-dn=cn=Manager,dc=gnome,dc=org --user-container=ou=people,dc=gnome,dc=org --group-container=ou=groups,dc=gnome,dc=org --group-objectclass=posixGroup ldap://internal-IP:389 --schema=RFC2307

Please note that before running the command you should make sure the custom schemas you had on the instance you are migrating from are available to the directory server you are migrating your tree to.

More information on the migration process from an existing OpenLDAP instance can be found HERE.

The Migration process – Extending the directory server with custom schemas

One of the other challenges we had to face has been extending the available LDAP schemas to include Foundation membership attributes. This operation requires the following changes:

  1. Build the LDIF (which will include two new custom fields: FirstAdded and LastRenewedOn)
  2. Add the LDIF in place under /etc/dirsrv/slapd-$INSTANCE-NAME/schema
  3. Extend the web UI to include the new attributes

I won’t be explaining how to build an LDIF in this post, but I’m pasting the schema I made to give you an idea:

attributeTypes: ( NAME 'LastRenewedOn' SINGLE-VALUE EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch DESC 'Last renewed on date' SYNTAX )
attributeTypes: ( NAME 'FirstAdded' SINGLE-VALUE EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch DESC 'First added date' SYNTAX )
objectClasses: ( NAME 'FoundationFields' AUXILIARY MAY ( LastRenewedOn $ FirstAdded ) )

After copying the schema in place and restarting the directory server, extend the web UI:

on /usr/lib/python2.7/site-packages/ipalib/plugins/

from ipalib.plugins import user
from ipalib.parameters import Str
from ipalib import _
from time import strftime
import re

def validate_date(ugettext, value):
    if not re.match("^[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[1-2][0-9]|3[0-1])$", value):
        return _("The entered date is wrong, please make sure it matches the YYYY-MM-DD syntax")

user.user.takes_params = user.user.takes_params + (
    Str('firstadded?', validate_date,
        label=_('First Added date'),
    ),
    Str('lastrenewedon?', validate_date,
        label=_('Last Renewed on date'),
    ),
)
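
Standalone, the validator's regular expression behaves like this (a quick sketch for experimenting outside of ipalib; note it checks the shape of the date, not calendar validity, so 2014-02-31 would still pass):

```python
import re

# The same YYYY-MM-DD pattern used by validate_date above.
DATE_RE = re.compile(r"^[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[1-2][0-9]|3[0-1])$")

def is_valid_date(value):
    """Return True when value matches the YYYY-MM-DD syntax."""
    return bool(DATE_RE.match(value))

print(is_valid_date("2014-10-24"))  # True
print(is_valid_date("2014-13-01"))  # False: month 13 is rejected
print(is_valid_date("14-10-24"))    # False: the year must be four digits
```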

on /usr/share/ipa/ui/js/plugins/foundation/foundation.js:

function(phases, user_mod) {

    // helper function
    function get_item(array, attr, value) {
        for (var i = 0, l = array.length; i < l; i++) {
            if (array[i][attr] === value) return array[i];
        }
        return null;
    }

    var foundation_plugin = {};

    foundation_plugin.add_foundation_fields = function() {

        var facet = get_item(user_mod.entity_spec.facets, '$type', 'details');
        var section = get_item(facet.sections, 'name', 'identity');

        // add the custom fields to the identity section
        section.fields.push({
            name: 'firstadded',
            label: 'Foundation Member since'
        });
        section.fields.push({
            name: 'lastrenewedon',
            label: 'Last Renewed on date'
        });
        section.fields.push({
            name: 'description',
            label: 'Previous account changes'
        });

        return true;
    };

    phases.on('customization', foundation_plugin.add_foundation_fields);
    return foundation_plugin;
}

Once done, restart the web server. Now that our custom schema is in place, the next step is migrating all the FirstAdded and LastRenewedOn values from MySQL into LDAP.

The relevant MySQL fields were already following the YYYY-MM-DD syntax to store the dates, so I made a little Python script that reads from MySQL and populates the LDAP attributes. If you are interested, or are in a similar situation, you can find it HERE.
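
One hedged way to sketch that migration step: instead of talking to the directory server directly, emit one LDIF change record per user and feed the result to ldapmodify. The function name and base DN below are illustrative (taken from the earlier examples), not from the actual script:

```python
# Build an LDIF 'changetype: modify' record that sets both custom
# attributes for one user, ready to be piped into ldapmodify.

def membership_ldif(uid, first_added, last_renewed,
                    base="cn=users,cn=accounts,dc=gnome,dc=org"):
    """Return one LDIF change record for the given user."""
    return (
        f"dn: uid={uid},{base}\n"
        "changetype: modify\n"
        "replace: FirstAdded\n"
        f"FirstAdded: {first_added}\n"
        "-\n"
        "replace: LastRenewedOn\n"
        f"LastRenewedOn: {last_renewed}\n"
    )

print(membership_ldif("foo", "2005-03-01", "2014-06-15"))
```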

The Migration process – Own SSL certificates for HTTPD

As you may be aware, FreeIPA comes with its own certificate tools (powered by Certmonger): a CA is created during the ipa-server-install run, and certificates for the various services you provide are then created and signed with it. This is definitely great and removes the burden of maintaining an underlying self-hosted PKI infrastructure. At the same time, it can be a problem for publicly-facing web services, as browsers will complain that they don’t trust the CA that signed the certificate of the website you are trying to reach.

This is easily solved, though, as you can specify which certificate HTTPD should use for serving FreeIPA’s web UI. The procedure is simple and involves the NSS database at /etc/httpd/alias:

certutil -d /etc/httpd/alias/ -A -n "StartSSL CA" -t CT,C, -a -i /path/to/ca.pem
certutil -d /etc/pki/nssdb -A -n "StartSSL CA" -t CT,C, -a -i /path/to/ca.pem
openssl pkcs12 -inkey /path/to/server.key -in /path/to/server.crt -export -out /path/to/server.p12 -nodes -name 'HTTPD-Server-Certificate'
pk12util -i /path/to/server.p12 -d /etc/httpd/alias/

Once done, update /etc/httpd/conf.d/nss.conf with the correct NSSNickname value (which should match the one you entered after ‘-name’ in the third of the above commands).

The Migration process – Equivalent of authorized_keys’ “command”

At GNOME we run several services that require users to log in to specific machines and run a command. At the same time, for security reasons, we don’t want all of these users to reach a shell. Originally we made use of SSH’s authorized_keys file to specify the “command” these users should be restricted to. FreeIPA handles public key authentication differently (through the sss_ssh_authorizedkeys binary), which means we had to find an alternative way to restrict groups to a single command. SSH’s ForceCommand came to the rescue; an example, given a group called ‘foo’:

Match Group foo,!bar
X11Forwarding no
PermitTunnel no
ForceCommand /home/admin/bin/your-script

The above Match Group block applies to all users of the ‘foo’ group except those that are also part of the ‘bar’ group. If you are interested in the script, which resets a given user’s password and sends a temporary one to the user’s registered e-mail address (read from the mail LDAP attribute), click HERE.
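
The idea behind the script sitting at the ForceCommand path can be sketched as follows; this is a hypothetical illustration, not the actual GNOME script. The relevant sshd behaviour is that whatever command the client originally requested is exposed to the forced command in the SSH_ORIGINAL_COMMAND environment variable, so it can be inspected but never executed verbatim:

```python
ALLOWED_ACTION = "reset-password"  # hypothetical name of the one permitted action

def pick_action(original_command):
    """Return the permitted action, or None to refuse the request."""
    if original_command.strip() == ALLOWED_ACTION:
        return ALLOWED_ACTION
    return None

# In the real wrapper, original_command would come from
# os.environ.get("SSH_ORIGINAL_COMMAND", "").
print(pick_action("reset-password"))  # reset-password
print(pick_action("/bin/bash; id"))   # None: no shell, no command injection
```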

The Migration process – Kerberos host Keytabs

Here at GNOME we still have one or two RHEL 5 hosts hanging around, and SSSD reported a failure when trying to authenticate to the KDC running (as you may have guessed) RHEL 7, using a keytab generated with RHEL 7 default values. The issue is simple: RHEL 5 does not support many of the encryption types the keytab was encrypted with. Apparently the only keytab encryption type currently supported on a RHEL 5 machine is rc4-hmac. A matching keytab can be created on the KDC this way:

ipa-getkeytab -s ipa.example.org -p host/client.example.org -e rc4-hmac -k /root/keytabs/client.example.org.keytab

That should be all for today, I’ll make sure to update this post with further details or answers to possible comments.

Testing a NetworkManager VPN plugin password dialog

Testing the password dialog of a NetworkManager VPN plugin is as simple as:

echo -e 'DATA_KEY=foo\nDATA_VAL=bar\nDONE\nQUIT\n' | ./auth-dialog/nm-iodine-auth-dialog -n test -u $(uuid) -i

The above is for the iodine plugin when run from the built source tree. This allows one to test these dialogs, even though one hasn't seen them in ages, since GNOME Shell uses the external UI mode to query for the password.
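
For more than one secret, the stdin payload just repeats the same DATA_KEY/DATA_VAL pattern; a tiny helper (my own illustration, not part of NetworkManager) can build it:

```python
# Build the key/value stdin protocol shown in the echo command above
# from a dict of secrets.

def auth_dialog_stdin(data):
    """Return the stdin payload expected by a NM VPN auth dialog."""
    lines = []
    for key, val in data.items():
        lines.append(f"DATA_KEY={key}")
        lines.append(f"DATA_VAL={val}")
    lines.append("DONE")
    lines.append("QUIT")
    return "\n".join(lines) + "\n"

print(auth_dialog_stdin({"foo": "bar"}))
# DATA_KEY=foo
# DATA_VAL=bar
# DONE
# QUIT
```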

This blog is flattr enabled.

summing up 63

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • is there love in the telematic embrace? by roy ascott. it is the computer that is at the heart of this circulation system, and, like the heart, it works best when least noticed - that is to say, when it becomes invisible. at present, the computer as a physical, material presence is too much with us; it dominates our inventory of tools, instruments, appliances, and apparatus as the ultimate machine. in our artistic and educational environments it is all too solidly there, a computational block to poetry and imagination. it is not transparent, nor is it yet fully understood as pure system, a universal transformative matrix. the computer is not primarily a thing, an object, but a set of behaviors, a system, actually a system of systems. data constitute its lingua franca. it is the agent of the datafield, the constructor of dataspace. where it is seen simply as a screen presenting the pages of an illuminated book, or as an internally lit painting, it is of no artistic value. where its considerable speed of processing is used simply to simulate filmic or photographic representations, it becomes the agent of passive voyeurism. where access to its transformative power is constrained by a typewriter keyboard, the user is forced into the posture of a clerk. the electronic palette, the light pen, and even the mouse bind us to past practices. the power of the computer's presence, particularly the power of the interface to shape language and thought, cannot be overestimated. it may not be an exaggeration to say that the "content" of a telematic art will depend in large measure on the nature of the interface; that is, the kind of configurations and assemblies of image, sound, and text, the kind of restructuring and articulation of environment that telematic interactivity might yield, will be determined by the freedoms and fluidity available at the interface. highly recommended
  • on the reliability of programs, by e.w. dijkstra. automatic computers are with us for twenty years and in that period of time they have proved to be extremely flexible and powerful tools, the usage of which seems to be changing the face of the earth (and the moon, for that matter!) in spite of their tremendous influence on nearly every activity whenever they are called to assist, it is my considered opinion that we underestimate the computer's significance for our culture as long as we only view them in their capacity of tools that can be used. they have taught us much more: they have taught us that programming any non-trivial performance is really very difficult and i expect a much more profound influence from the advent of the automatic computer in its capacity of a formidable intellectual challenge which is unequalled in the history of mankind. this opinion is meant as a very practical remark, for it means that unless the scope of this challenge is realized, unless we admit that the tasks ahead are so difficult that even the best of tools and methods will be hardly sufficient, the software failure will remain with us. we may continue to think that programming is not essentially difficult, that it can be done by accurate morons, provided you have enough of them, but then we continue to fool ourselves and no one can do so for a long time unpunished
  • "institutions will try to preserve the problem to which they are the solution", the shirky principle
  • if no one reads the manual, that's okay, if you think about it, the technical writer is in an unusual role. users hate the presence of manuals as much as they hate missing manuals. they despise lack of detail yet curse length. if no one reads the help, your position lacks value. if everyone reads the help, you're on a sinking ship. ideally, you want the user interface to be simple enough not to need help. but the more you contribute to this user interface simplicity, the less you're needed
  • 44 engineering management lessons
  • the grain of the material in design, because of the particular characteristics of a specific piece's grain, a design can't simply be imposed on the material. you can "go with the grain" or "go against the grain," but either way you have to understand the grain of the material to successfully design and produce a work. design for technology shouldn't be done separately from the material - it must be done as an intimate and tactile collaboration with the material of technology.

October 11, 2014

Recent Reading Responses

Data & Society (which I persist in thinking of as "that New York City think tank that danah boyd is in" in case you want a glimpse of the social graph inside my head) has just published a few papers. I picked up "Understanding Fair Labor Practices in a Networked Age" which summarized many things well. A point that struck me, in its discussion of Uber and of relational labor:

The importance of selling oneself is a key aspect of this kind of piecemeal or contract work, particular because of the large power differential between management and workers and because of the perceived disposability of workers. In order to be considered for future jobs, workers must maintain their high ratings and receive generally positive reviews or they may be booted from the system.

In this description I recognize dynamics that play out, though less compactly, among knowledge workers in my corner of tech.

This pressure to perform relational labor, plus the sexist expectation that women always be "friendly" and never "abrasive" (including online), further silences women's ability to publicly organize around grievances. Those expectations additionally put us in an authenticity bind, since these circumstances demand a public persona that never speaks critically -- inherently inauthentic. Since genuine warmth, and therefore influence, largely derive from authenticity, this impairs our growth as leaders. And here's another pathway that gets blocked off: since criticizing other people/institutions raises the status of the speaker, these expectations also remove a means for us to gain status.

Speaking of softening abrasive messages, I kept nodding as I read Jocelyn Goldfein's guide to asking for a raise if you're a knowledge worker (especially an engineer) at a company big enough to have compensation bands and levels. I especially liked how she articulated the dilemma of seeking more money -- and perhaps more power -- in a place where ambition is a dirty word (personally I do not consider ambition a dirty word; thank you Dr. Anna Fels), and the scripts she offers for softening your manager's emotional reaction to bargaining.

I also kept nodding as I read "Rules for Radicals and Developer Marketing" by Rachel Chalmers. Of course she says a number of things that sound like really good advice and that I should take, and she made me want to go read Alinsky and spend more time with Beautiful Trouble, but she also mentions an attitude I share (mutatis mutandis, namely, I've only been working in tech since ~1998):

I've been in the industry 20 years. Companies come and go, relationships endure. The people who are in the Valley, a lot of us are lifers and the configurations of the groups that we're allied to shift over time. This is a big part of why I'm really into not lying and being generous: because I want to continue working with awesome, smart people, and I don't want to burn them just because they happen to be working for a competitor right now. In 10 years' time, who knows?

Relationships, both within the Valley and with your customer, are impossible to fake, and is really the only social capital you have left when you die.

No segue here! Feel the disruption! (Your incumbent Big Media types are all about smooth experience but with the infernokrusher approach I EXPLODE those old tropes so you can Make Your Own Meaning!)

Mark Guzdial, who thinks constantly about computer science education, mentions, in discussing legitimate peripheral participation:

Newcomers have to be able to participate in a way that's meaningful while working at the edge of the community of practice. Asking the noobs in an open-source project to write the docs or to do user testing is not a form of legitimate peripheral participation because most open source projects don’t care about either of those. The activity is not valued.

This point hit me right between the eyes. I have absolutely been that optimist cheerfully encouraging a newbie to write documentation or write up a user testing report. After reading Guzdial's legitimate critique, I wonder: maybe there are pre-qualifying steps we can take to check whether particular open source projects do genuinely value user testing and/or docs, to see whether we should suggest them to newbies.

Speaking of open source: I frequently recommend Dreaming in Code by Scott Rosenberg. It tells the story of the Chandler open source project as a case study, and uses examples from Chandler's process to explain the software engineering process to readers.

When I read Dreaming in Code several years ago, as the story of Chandler progressed, I noticed how many women popped up as engineers, designers, and managers. Rosenberg addressed my surprise late in the book:

Something very unusual had happened to the Chandler team over time. Not by design but maybe not entirely coincidentally, it had become an open source project largely managed by women. [Mitch] Kapor [a man] was still the 'benevolent dictator for life'... But with Katie Parlante and Lisa Dusseault running the engineering groups, Sheila Mooney in charge of product management, and Mimi Yin as the lead designer, Chandler had what was, in the world of software development, an impressive depth of female leadership.....

...No one at OSAF [Open Source Applications Foundation] whom I asked had ever before worked on a software team with so many women in charge, and nearly everyone felt that this rare situation might have something to do with the overwhelming civility around the office -- the relative rarity of nasty turf wars and rude insult and aggressive ego display. There was conflict, yes, but it was carefully muted. Had Kapor set a different tone for the project that removed common barriers to women advancing? Or had the talented women risen to the top and then created a congenial environment?

Such chicken-egg questions are probably unanswerable....

-Scott Rosenberg, Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest For Transcendent Software, 2007, Crown. pp. 322-323.

I have a bunch of anecdotal evidence that projects whose discussions stay civil attract and retain women more, but I'd love real statistics on that. And in the seven years since Dreaming in Code I think we haven't amassed enough data points in open source specifically to see whether women-led projects generally feel more civil, which of course means here's where I exhort the women reading this to found and lead projects!

(Parenthetically: Women have been noticing sexism in free and open source software for as long as FOSS has existed, and fighting it in organized groups for 15 or more years. Valerie Aurora first published "HOWTO Encourage Women in Linux" in 2002. And we need everyone's help, and you, whatever your gender, have the power to genuinely help. A man cofounded GNOME's Outreach Program for Women, for instance. And I'm grateful to everyone of every gender who gave to the Ada Initiative this year! With your help, we can -- among other things -- amass data to answer Scott Rosenberg's rhetorical questions. ;-) )

October 10, 2014

Life update

Like many others on planet.gnome, it seems I also don't feel like posting much on my blog any more, since I post almost all major events of my life on social media (or SOME, as it's for some reason now known in Finland). To be honest, the thought usually doesn't even occur to me anymore. :( Well, anyway! Here is a brief summary of what's been up over the last many months:
  • Got divorced. Yeah, not nice at all but life goes on! At least I got to keep my lovely cat.

  • It's been almost a year (minus 14 days) since I moved to London. In a way it was good that I was in a new city at the time of the divorce, as it's an opportunity to start a new life. I made some cool new friends, mostly the GNOME gang here.

    London has its quirks, but overall I'm pretty happy to be living here. One big issue is that most of my friends are in Finland, so I miss them very much. Hopefully, in time, I'll make a lot more friends in London, and my friends from Finland will visit me too.

    The best thing about London is the weather! No, I'm not joking at all. Not only is it a big improvement compared to Helsinki, but the rumours that "it's always raining in London" are greatly (I can't stress that word enough) exaggerated.
  • I got my eyes Z-LASIK'ed so no more glasses!

  • Started taking:

    • Driving lessons. I failed my first driving test today. Knowing now what I did wrong, I'm sure I won't repeat the same mistakes next time and will pass.
    • Helicopter flying lessons. Yes! I'm not joking. I grew up watching Airwolf, and ever since then I've been fascinated by helicopters and wanted to fly them, but never got around to doing it. It's very expensive, as you'd imagine, so I'm only taking two lessons a month. At this pace, I should have my PPL(H) by the end of 2015.

      Turns out that I'm very good at one thing most people find very challenging to master: hovering. The rest isn't hard in practice either; theory is the biggest challenge for me. Here is the video recording of the 15-minute trial lesson I started with.

Always Follow the Money

Selena Larson wrote an article describing the Male Allies Plenary Panel at the Anita Borg Institute's Grace Hopper Celebration on Wednesday night. There is a video available of the panel (that's the youtube link, the links on Anita Borg Institute's website don't work with Free Software).

Selena's article pretty much covers it. The only point that I thought useful to add was that one can “follow the money” here. Interestingly enough, Facebook, Google, GoDaddy, and Intuit were all listed as top-tier sponsors of the event. I find it a strange correlation that not one man on this panel is from a company that didn't sponsor the event. Are there no male allies to the cause of women in tech worth hearing from who work for companies that, say, don't have enough money to sponsor the event? Perhaps that's true, but it's somewhat surprising.

Honest US Congresspeople often say that the main problem with corruption of campaign funds is that those who donate simply have more access and time to make their case to the congressional representatives. They aren't buying votes; they're buying access for conversations. (This was covered well in This American Life, Episode 461).

I often see a similar problem in the “Open Source” world. The loudest microphones can be bought by the highest bidder (in various ways), so we hear more from the wealthiest companies. The amazing thing about this story, frankly, is that buying the microphone didn't work this time. I'm very glad the audience refused to let it happen! I'd love to see a similar reaction at the corporate-controlled “Open Source and Linux” conferences!

Update later in the day: The conference I'm commenting on above is the same conference where Satya Nadella, CEO of Microsoft, said that women shouldn't ask for raises, and Microsoft is also a top-tier sponsor of the conference. I'm left wondering if anyone who spoke at this conference didn't pay for the privilege of making these gaffes.

The End For Pylyglot


It was around 2005 when I started doing translations for Free and Open-Source Software. Back then I was warmly welcomed to the Ubuntu family and quickly learned all there was to know about using their Rosetta online tool to translate and/or review existing translations for the Brazilian Portuguese language. I spent so much time doing it, even during working hours, that eventually I sort of “made a name for myself” and made my way up to the upper layers of the Ubuntu Community echelon.

Then I “graduated” and started doing translations for the upstream projects, such as GNOME, Xfce, LXDE, and Openbox. I took on more responsibilities, learned to use Git and make commits for myself as well as for other contributors, and strived to unify all Brazilian Portuguese translations across as many different projects as possible. Many discussions were had, (literally) hundreds of hours were spent going through hundreds of thousands of translations for hundreds of different applications, none of it bringing me any monetary or financial advantage, but all done for the simple pleasure of knowing that I was helping make FOSS applications “speak” Brazilian Portuguese.

I certainly learned a lot through the experience of working on these many projects… sometimes I made mistakes, other times I “fought” alone to make sure that standards and procedures were complied with. All in all, looking back I only have one regret: not being nominated to become the leader of the Brazilian GNOME translation team.

Having handled 50% of the translations for one of the GNOME releases (the other 50% was handled by a good friend, Vladimir Melo, while the leader did nothing to help), and having spent much time making sure that the release would go out the door 100% translated, I really thought I’d be nominated to become the next leader. Not that I needed a ‘title’ to show off to other people, but in a way I wanted to feel that my peers acknowledged my hard work and commitment to the project.

Seeing other people, even people with no previous experience, being nominated by the current leader to replace him was a slap in the face. It really hurt me… but I made sure to be supportive and continue to work just as hard. I guess you could say that I lived and breathed translations, my passion not knowing any limits or knowing when to stop…

But stop I eventually did, several years ago, when I realized how hard it was to land a job that would allow me to support my family (back then I had 2 small kids) and continue doing the thing I cared the most about. I confess that I even went through a series of job interviews for the translation role that Jono Bacon, Canonical’s former community manager, was trying to fill, but in the end things didn’t work out the way I wanted. I also flirted with a similar role at MeeGo, but since they wanted me to move to the West Coast I decided not to pursue it (I had also fallen in love with my then-current job).


As a way to keep myself somewhat involved with the translation communities and at the same time learn a bit more about the Django framework, I then created Pylyglot, “a web based glossary compendium for Free and Open Source Software translators heavily inspired on the web site… with the objective to ‘provide a concise, yet comprehensive compilation of a body of knowledge’ for translators derived from existing Free and Open Source Software translations.”


I have been running this service on my own, paying for the domain registration and database costs out of my own pocket for a while now, and I now find myself facing the dilemma of renewing the domain registration to keep Pylyglot alive for another year… or retiring it and ending once and for all my relationship with FOSS translations.

Having spent the last couple of months thinking about it, I have arrived at the conclusion that it is time to let this chapter of my life rest. Though the US$140/year that I won’t be spending won’t make me any richer, I don’t foresee myself maintaining the project or spending any time improving it. So on July 21st, 2014 Pylyglot will close its doors and cease to exist in its current form.

To those who knew about Pylyglot, used it and, hopefully, found it to be useful, my sincere thanks for using it. To those who supported my idea and the project itself, whether by submitting code patches, building the web site or just giving me moral support, thank you!

FauxFactory 0.3.0

Took some time from my vacation and released FauxFactory 0.3.0 to make it Python 3 compatible and to add a new generate_utf8 method (plus some nice tweaks and code clean up).

As always, the package is available on PyPI and can be installed via pip install fauxfactory.

If you have any constructive feedback or suggestions, or want to file a bug report or feature request, please use the GitHub page.

October 09, 2014

CUDA Programming

I was invited to attend a CUDA workshop, an event promoted by DIA PUCP. Thanks to the professor, Dr. Manuel Ujaldon, who trained us for about 12 hours using C. We used the NVIDIA cloud to practise and did exercises optimising vector functions. We covered concepts such as registers, blocks and kernels, compute-bound versus memory-bound algorithms, shared memory, tiling, GPU/CPU technology, and CUDA software (v6 and v6.5) together with the CUDA hardware generations it supports: Tesla (2008, compute capability 1.x, 8 cores per multiprocessor), Fermi (2010, 2.x, 32 cores), Kepler (2012, 3.0 and 3.5, 192 cores) and Maxwell (2014, 5.x, 128 cores), with the Pascal architecture coming in the future.


We started with this device:

CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GRID K520"
  CUDA Driver Version / Runtime Version          6.0 / 6.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 4096 MBytes (4294770688 bytes)
  ( 8) Multiprocessors, (192) CUDA Cores/MP:     1536 CUDA Cores
  GPU Clock rate:                                797 MHz (0.80 GHz)
  Memory Clock rate:                             2500 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           0 / 3
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.0, CUDA Runtime Version = 6.0, NumDevs = 1, Device0 = GRID K520
Result = PASS

We must analyse whether a fine-grained or a coarse-grained strategy fits best. In our first example fine-grain was convenient because we did not need much memory. But if we use coarse-grain, we sacrifice parallelism: fewer blocks are available, so there are not enough spare blocks to keep every SMX busy. E.g. a 128×128 problem is 2^14 = 16,384 elements; with 256 threads per block that gives 64 blocks to distribute across the SMXs, while a coarser split into only 16 blocks leaves barely one or two blocks per SMX.
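
The block arithmetic above can be checked in a few lines (the 8-SMX figure below is taken from the deviceQuery output earlier, which reports 8 multiprocessors for the GRID K520):

```python
# Back-of-the-envelope grid sizing: total thread blocks for a problem,
# and how many blocks that leaves per multiprocessor (SMX).

def block_counts(n_elements, threads_per_block, num_smx):
    """Return (total blocks, blocks per SMX) for an even decomposition."""
    blocks = n_elements // threads_per_block
    return blocks, blocks / num_smx

total, per_smx = block_counts(128 * 128, 256, 8)
print(total, per_smx)  # 64 8.0 -- 64 blocks overall, 8 per SMX
```

With only 8 blocks per SMX there is some latitude for latency hiding; a coarser decomposition shrinks that margin further, which is the trade-off the paragraph above describes.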


Thanks to Genghis Rios for organising this workshop. More pictures here>>>


Filed under: Education, GNOME, GNU/Linux/Open Source, τεχνολογια :: Technology, Programming

October 08, 2014

Wed 2014/Oct/08

  • Growstuff's Crowdfunding Campaign for an API for Open Food Data

    During GUADEC 2012, Alex Skud Bailey gave a keynote titled What's Next? From Open Source to Open Everything. It was about how principles like de-centralization, piecemeal growth, and shared knowledge are being applied in many areas, not just software development. I was delighted to listen to such a keynote, which validated my own talk from that year, GNOME and the Systems of Free Infrastructure.

    During the hallway track I had the chance to talk to Skud. She is an avid knitter and was telling me about Ravelry, a web site for people who knit/crochet. They have an excellent database of knitting patterns, a yarn database, and all sorts of deep knowledge on the craft gathered over the years.

    At that time I was starting my vegetable garden at home. It turned out that Skud is also an avid gardener. We ended up talking about how it would be nice to have a site like Ravelry, but for small-scale food gardeners. You would be able to track your own crops, but also consult about the best times to plant and harvest certain species. You would be able to say how well a certain variety did in your location and climate. Over time, by aggregating people's data, we would be able to compile a free database of crop data, local varieties, and climate information.

    Growstuff begins


    Skud started coding Growstuff from scratch. I had never seen a project start from zero-lines-of-code, and be run in an agile fashion, for absolutely everything, and I must say: I am very impressed!

    Every single feature runs through the same process: definition of a story, pair programming, integration. Newbies are encouraged to participate. They pair up with a more experienced developer, and they get mentored.

    They did that even for the very basic skeleton of the web site: in the beginning there were stories for "the web site should display a footer with links to About and the FAQ", and "the web site should have a login form". I used to think that in order to have a collaboratively-developed project, one had to start with at least a basic skeleton, or a working prototype — Growstuff proved me wrong. By having a friendly, mentoring environment with a well-defined process, you can start from zero-lines-of-code and get excellent results quickly. The site has been fully operational for a couple of years now, and it is a great place to be.

    Growstuff is about the friendliest project I have seen.

    Local crop data

    Tomato heirloom varieties

    I learned the basics of gardening from a couple of "classic" books: the 1970s books by John Seymour which my mom had kept around, and How to Grow More Vegetables, by John Jeavons. These are nominally excellent — they teach you how to double-dig to loosen the soil and keep the topsoil, how to transplant fragile seedlings so you don't damage them, how to do crop rotation.

    However, their recommendations on garden layouts or crop rotations are biased towards the author's location. John Seymour's books are beautifully illustrated, but are about the United Kingdom, where apples and rhubarb may do well, but would be scorched where I live in Mexico. Jeavons's book is biased towards California, which is somewhat closer climate-wise to where I live, but some of the species/varieties he mentions are practically impossible to get here — and, of course, species which are everyday fare here are completely missing in his book. Pity the people outside the tropics, for whom mangoes are a legend from faraway lands.

    The problem is that the books lack knowledge of good crops for wherever you may live. This is the kind of thing that is easily crowdsourced, where "easily" means a Simple Matter Of Programming.

    An API for Open Food Data

    Growstuff has been gathering crop data from people's use of the site. Someone plants spinach. Someone harvests tomatoes. Someone puts out seeds for trade. The next steps are to populate the site with fine-grained varieties of major crops (e.g. the zillions of varieties of peppers or tomatoes), and to provide an API to access planting information in a convenient way for analysis.

    Right now, Growstuff is running a fundraising campaign to implement this API — allowing developers to work on it full-time, instead of squeezing it out of their "free time".

    I encourage you to give money to Growstuff's campaign. These are good people.

    To give you a taste of the non-trivialness of implementing this, I invite you to read Skud's post on interop and unique IDs for food data. This campaign is not just about adding some features to Growstuff; it is about making it possible for open food projects to interoperate. Right now there are various free-culture projects around food production, but little communication between them. This fundraising campaign attempts to solve part of that problem.

    I hope you can contribute to Growstuff's campaign. If you are into local food production, local economies, crowdsourced databases, and that sort of thing — these are your people; help them out.


October 07, 2014

Probing with Gradle

Up until now, Probe relied on dynamic view proxies generated at runtime to intercept View calls. Although very convenient, this approach greatly affects the time to inflate your layouts—which limits the number of use cases for the library, especially in more complex apps.

This is all changing now with Probe’s brand new Gradle plugin which seamlessly generates build-time proxies for your app. This means virtually no overhead at runtime!

Using Probe’s Gradle plugin is very simple. First, add the Gradle plugin as a dependency in your build script.

buildscript {
    dependencies {
        classpath 'org.lucasr.probe:gradle-plugin:0.1.3'
    }
}

Then apply the plugin to your app’s build.gradle.

apply plugin: 'org.lucasr.probe'

Probe’s proxy generation is disabled by default and needs to be explicitly enabled on specific build variants (build type + product flavour). For example, this is how you enable Probe proxies in debug builds.

probe {
    buildVariants {
        debug {
            enabled = true
        }
    }
}

And that’s all! You should now be able to deploy interceptors on any part of your UI. Here’s how you could deploy an OvermeasureInterceptor in an activity.

public final class MainActivity extends Activity {
    protected void onCreate(Bundle savedInstanceState) {
        Probe.deploy(this, new OvermeasureInterceptor());
        super.onCreate(savedInstanceState);
    }
}

While working on this feature, I changed DexMaker to an optional dependency, i.e. you have to explicitly add DexMaker as a build dependency in your app in order to use it.
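For instance, opting back in to the runtime (DexMaker-based) proxies might look roughly like this in your app's build.gradle. The coordinates and version below are illustrative assumptions; check Maven Central for the current ones.

```groovy
dependencies {
    // Only needed if you still want runtime-generated proxies.
    // Coordinates/version are illustrative; check Maven Central.
    debugCompile 'com.google.dexmaker:dexmaker:1.1'
}
```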

This is my first Gradle plugin. There’s definitely a lot of room for improvement here. These features are available in the 0.1.3 release in Maven Central.

As usual, feedback, bug reports, and fixes are very welcome. Enjoy!