GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

October 20, 2014

GNOME Web 3.14

It’s already been a few weeks since the release of GNOME Web 3.14, so it’s a bit late to blog about the changes it brings, but better late than never. Unlike 3.12, this release doesn’t contain big user interface changes, so the value of the upgrade may not be as immediately clear as it was for 3.12. But this release is still a big step forward: the focus this cycle has just been on polish and safety instead of UI changes. Let’s take a look at some of the improvements since 3.12.1.

Safety First

The most important changes help keep you safer when browsing the web.

Safer Handling of TLS Authentication Failures

When you try to connect securely to a web site (via HTTPS), the site presents identification in the form of a chain of digital certificates to prove that your connection has not been maliciously intercepted. If the last certificate in the chain is not signed by one of the certificates your browser has on file, the browser decides that the connection is not secure: this could be a harmless server configuration error, but it could also be an attacker intercepting your connection. (To be precise, your connection would be secure, but it would be a secure connection to an attacker.)

Previously, Web would bet on the former, displaying an insecure lock icon next to the address in the header bar, but loading the page anyway. The problem with this approach is that if there really is an attacker, simply loading the page gives the attacker access to secure cookies, most notably the session cookies used to keep you logged in to a particular web site. Once the attacker controls your session, he can trick the web site into thinking he's you: change your settings, or perhaps make purchases with your account if you're on a shopping site. Moreover, the lock icon is hardly noticeable enough to warn the user of danger. And let's face it, we all ignore those warnings anyway, right?

Web 3.14 is much stricter: once it decides that an attacker may be in control of a secure connection, it blocks access to the page, like all major browsers already do:

[Screenshot: the error page Web now shows when certificate verification fails]

(The white text on the button is probably a recent compatibility issue with GTK+ master: it’s fine in 3.14.)

Security-minded readers will note that this will obviously break sites with self-signed certificates, and that it is incompatible with a trust-on-first-use approach to certificate validation. As much as I agree that the certificate authority system is broken and provides only a moderate security guarantee, I'm also very skeptical of trust-on-first-use. We can certainly discuss this further, but it seemed best to start off with an approach similar to what major browsers already do.

The Load Anyway button is non-ideal, since many users will just click it, but this provides good protection for anyone who doesn’t. So, why don’t we get rid of that Load Anyway button? Well, different browsers have different strategies for validating TLS certificates (a good topic for a future blog post), which is why Web sometimes claims a connection is insecure even though Firefox loads the page fine. If you think this may be the case, and you don’t care about the security of your connection (including any passwords you use on the site), then go ahead and click the button. Needless to say, don’t do this if you’re using somebody else’s Wi-Fi access point, or on an email or banking or shopping site… when you use this button, the address in the address bar does not matter: there’s no telling who you’re really connected to.
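If you're curious why verification fails for a particular site, one quick way to look at the chain it serves is the openssl command-line tool (a sketch, assuming you have it installed; example.com is a placeholder):

$ openssl s_client -connect example.com:443 -servername example.com < /dev/null

The "Certificate chain" section shows each certificate's subject and issuer, and the "Verify return code" line at the end summarizes whether your local trust store accepts the chain.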

But all of the above only applies to your main connection to a web site. When you load a web page, your browser actually creates very many connections to grab subresources (like images, CSS, or trackers) needed to display the page. Prior to 3.14, Web would completely ignore TLS errors for subresources. This means that the secure lock icon was basically worthless, since an attacker could control the page by modifying any script loaded by the page without being detected. (Fortunately, this attack is somewhat unlikely, since major browsers would all block this.) Web 3.14 will verify all TLS certificates used to encrypt subresources, and will block those resources if verification fails. This can cause web pages to break unexpectedly, but it’s how all major browsers I’ve tested behave, and it’s certainly the right thing to do. (We may want to experiment with displaying a warning, though, so that it’s clear what’s gone wrong.)

And if you’re a distributor, please read this mail to learn how not to break TLS verification in your distribution. I’m looking at you, Debian and derivatives.

Fewer TLS Authentication Failures

With glib-networking 2.42, corresponding to GNOME 3.14, Web will now accept certificate chains when the certificates are sent out of order. Sites that do this are basically broken, but all major browsers nevertheless support unordered chains. Sending certificates out of order is a harmless configuration mistake, not a security flaw, so the only harm in accepting unordered certificates is that this makes sites even less likely than before to notice their configuration mistake, harming TLS clients that don’t permute the certificates.
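As a rough sketch of how you might spot this particular misconfiguration yourself (again assuming the openssl tool, with example.com as a placeholder): in a correctly ordered chain, each certificate's issuer matches the subject of the certificate that follows it.

$ openssl s_client -connect example.com:443 -servername example.com -showcerts < /dev/null 2>/dev/null | grep -E ' [si]:'

If an issuer line doesn't match the next subject line, the server is sending its chain out of order.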

This change should greatly reduce the number of TLS verification failures you experience when using Web. Unfortunately, there are still too many differences in how certificate verification is performed for me to be comfortable with removing the Load Anyway button, but that is definitely the long-term goal.

HTTP Authentication

WebKitGTK+ 2.6.1 plugs a serious HTTP authentication vulnerability. Previously, when a secure web page would require a password before the user could load the page, Web would not validate the page’s TLS certificate until after prompting the user for a password and sending it to the server.

Mixed Content Detection

If a secure (HTTPS) web page displays insecure content (usually an image or video) or executes an insecure script, Web now displays a warning icon instead of a lock icon. This means that the lock icon now indicates that your connection is completely private, with the exception that a passive adversary can always know the web site that you are visiting (but not which page you are visiting on the site). If the warning icon is displayed, then an adversary can compromise some (and possibly all) of the page, and has also learned something that might reveal which page of the site you are visiting, or the contents of the page.
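As a hypothetical example, a page served from https://example.com would earn the warning icon if its markup pulled in a subresource over plain HTTP, like this:

<img src="http://example.com/banner.png">

An insecure script would be the more dangerous variant of the same mistake.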

If you’re curious where the insecure content is coming from and don’t mind leaving behind Web’s normally simple user interface, you can check using the web inspector:

[Screenshot: the web inspector revealing where the insecure content comes from. The screenshot is leaked to an attacker, revealing that you're on the home page.]

The focus on safety will continue to drive the development of Web 3.16. Most major browsers, with the notable exception of Safari, take mixed content detection one step further by actively blocking the more dangerous forms of insecure content, such as scripts, on secure pages, and we certainly intend to do so as well. We're also looking into support for HTTP Strict Transport Security (HSTS), to ensure that your connection to HSTS-enabled sites is safe even if you try to connect via HTTP instead of HTTPS, which is what normally happens when you type an address into the address bar. Many sites will redirect you from an HTTP URL to an HTTPS URL, but an attacker isn't going to do this kindness for you, and since all HTTP pages are insecure, you get no security warning. Strict transport security thwarts this problem. We're currently hoping to have both mixed content blocking and strict transport security complete in time for 3.16.
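For reference, HSTS is just a response header sent over HTTPS; a sketch of what a site would send (the max-age value here, one year in seconds, is an arbitrary example) looks like:

Strict-Transport-Security: max-age=31536000

After seeing it once, the browser refuses to talk plain HTTP to that site until the timer expires.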

UI Changes

Of course, security hasn’t been the only thing we’ve been working on.

  • The most noticeable user experience change is not actually a change in Web at all, but in GNOME Document Viewer 3.14. The new Document Viewer browser plugin allows you to read PDFs in Web without having to download the file and open it in Document Viewer. (This is similar to the proprietary Adobe Reader browser plugin.) This is made possible by new support in WebKitGTK+ 2.6 for GTK+ 3 browser plugins.
  • The refresh button has been moved from the address bar and is now next to the new tab button, where it’s always accessible. Previously, you would need to click to show the address bar before the refresh button would appear.
  • The lock icon now opens a little popover to display the security status of the web page, instead of directly presenting the confusing certificate dialog. You can also now click the lock when the title of the page is displayed, without needing to switch to the address bar.

Bugfixes

3.14 also contains some notable bugfixes that will improve your browsing experience.

  • We fixed a race condition that caused the ad blocker to accidentally delete its list of filters, so ad block will no longer randomly stop working when enabled (it’s off by default). (We still need one additional fix in order to clean this up automatically if it’s already broken, but in the meantime you can reset your filters by deleting ~/.config/epiphany/adblock if you’re affected; see the one-liner after this list.)
  • We (probably!) fixed a bug that caused pages to disappear from history after the browser was closed.
  • We fixed a bug in Web’s aggressive removal of tracking parameters from URLs when the do not track preference is enabled (it’s off by default), which caused compatibility issues with some web sites.
  • We fixed a bug that caused Web to sometimes not offer to remember passwords.
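As promised above, resetting the broken ad blocker filters by hand is a one-liner (quit Web first; this only removes the filter directory mentioned in the first item):

$ rm -rf ~/.config/epiphany/adblock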

These issues have all been backported to our 3.12 branch, but were never released. We’ll need to consider making more frequent stable releases, to ensure that bugfixes reach users more quickly in the future.

Polish

  • There are new context menu entries when right-clicking on an HTML video. Notably, this adds the ability to easily download a copy of the video for watching it offline.
  • Better web app support. Recent changes in 3.14.1 make it much harder for a web app to escape application mode, and ensure that links to other sites open in the default browser when in application mode.
  • Plus a host of smaller improvements: The subtitle of the header bar now changes at the same time as the title, and the URL in the address bar will now always match the current page when you switch to address bar mode. Opening multiple links in quick succession from an external application is now completely reliable (with WebKitGTK+ 2.6.1); previously, some pages would load twice or not at all. The search provider now exits one minute after you search for something in the GNOME Shell overview, rather than staying alive forever. The new history dialog that was added in 3.12 now allows you to sort history by title and URL, not just date. The image buttons in the new cookies, history, and passwords dialogs now have explanatory tooltips. Saving an image, video, or web page over top of an existing file now works properly (with Web 3.14.1). And of course there are also a few memory usage and crash fixes.

As always, the best place to send feedback is <epiphany-list@gnome.org>, or Bugzilla if you’ve found a bug. Comments on this post work too. Happy browsing!

3.14 Games Updates

So, what new things happened to our games in GNOME 3.14?

Hitori

GNOME Hitori has actually been around for a while, but it wasn’t until this cycle that I discovered it. After chatting with Philip Withnall, we agreed that with a minor redesign, the result would be appropriate for GNOME 3. And here it is:

[Screenshot: the redesigned Hitori]

The gameplay is similar to Sudoku, but much faster-paced. The goal is to paint squares such that the same digit appears in each row and column no more than once, without ever painting two horizontally- or vertically-adjacent squares and without ever creating a set of unpainted squares that is disconnected both horizontally and vertically from the rest of the unpainted squares. (This sounds a lot more complicated than it is: think about it for a bit and it’s really quite intuitive.) You can usually win each game in a minute or two, depending on the selected board size.
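Here's a tiny made-up 3×3 example to illustrate: painting the 1 at the end of the first row and the 1 at the start of the last row (shown as ■) removes every duplicate digit from the rows and columns, no two painted squares touch, and the unpainted squares all stay connected.

1 2 1        1 2 ■
2 1 3   →    2 1 3
1 3 2        ■ 3 2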

Mines

For Mines, the screenshots speak for themselves. The new design is by Allan Day, and was implemented by Robert Roth.

[Screenshot: Mines 3.12]
[Screenshot: Mines 3.14]

There is only one gameplay change: you can no longer get a hint to help you out of a tough spot at the cost of a small time penalty. You’ll have to actually guess which squares have mines now.

Right now, the buttons on the right disappear when the game is in progress. This may have been a mistake, which we’ll revisit in 3.16. You can comment in Bug #729250 if you want to join our healthy debate on whether or not to use colored numbers.

Sudoku

Sudoku has been rewritten in Vala with the typical GNOME emphasis on simplicity and ease of use. The design is again by Allan Day. Christopher Baines started work on the port for a Google Summer of Code project in 2012, and Parin Porecha completed the work this summer for his own Google Summer of Code project.

[Screenshot: Sudoku 3.12 (note: not possible to get rid of the wasted space on the sides)]
[Screenshot: Sudoku 3.14]

We’re also using a new Sudoku generator, QQwing, for puzzle generation. This allows us to avoid reimplementing bugs in our old Sudoku generator (which is documented to have generated at least one impossible puzzle, and sometimes did a very poor job of determining difficulty), and instead rely on a project completely focused on correct Sudoku generation. Stephen Ostermiller is the author of QQwing, and he worked with us to make sure QQwing met our needs by implementing symmetric puzzle generation and merging changes to make it a shared library. QQwing is fairly fast at generating puzzles, so we’ve dropped the store of pregenerated puzzles that Sudoku 3.12 used and now generate puzzles on the fly instead. This means a small (1-10 second) wait if you’re printing dozens of puzzles at once, but it ensures that you no longer get the same puzzle twice, as sometimes occurred in 3.12.

As you may have noticed from the screenshots, QQwing often uses more interesting symmetries than our old generator did. For the most part, I think this is exciting (symmetric puzzles are intended to be artistic), but I'm interested in comments from players on whether we should disable some of the symmetry options QQwing provides if they're too flashy. We also need feedback on whether the difficulty levels are set appropriately; I have a suspicion that QQwing's difficulty rating may not be as refined as our old one (when it was working properly), but I could be completely wrong: we really need player feedback to be sure.

A few features from Sudoku 3.12 did not survive the redesign, or changed significantly. Highlighter mode is now always active and uses a subtle gray instead of rainbow colors. I’m considering making it a preference in 3.16 and turning it off by default, since it’s primarily helpful for keyboard users and seems to get in the way when playing with a mouse. The old notes are now restricted to digits in the top row of the cell, and you set them by right-clicking in a square. (The Ctrl+digit shortcuts will still work.) This feels a lot better, but we need to figure out how to make notes more discoverable to users. Most notably, the Track Additions feature is completely gone, the victim of our desire to actually ship this update. If you used Track Additions and want it back, we’d really appreciate comments in Bug #731640. Implementation help would be even better. We’d also like to bring back the hint feature, which we removed because the hints in 3.12 were only useful when an easy move exists, and not very helpful in a tough position. Needless to say, we’re definitely open to feedback on all of these changes.

Other Games

We received a Lights Off bug report that the seven-segment display at the bottom of the screen was difficult to read, and didn’t clearly indicate that it corresponded to the current level. With the magic of GtkHeaderBar, we were able to remove it. The result:

[Screenshot: Lights Off 3.12]
[Screenshot: Lights Off 3.14]

Robots was our final game (from the historical gnome-games package, so discounting Aisleriot) with a GNOME 2 menu bar. No longer:

[Screenshot: Robots 3.12]
[Screenshot: Robots 3.14]

It doesn’t look as slick as Mines or Sudoku, but it’s still a nice modernization.

I think that’s enough screenshots for one blog post, but I’ll also mention that Swell Foop has switched to using the dark theme (which blends in better with its background), Klotski grew a header bar (so now all of the historical gnome-games have a header bar as well), and Chess will now prompt the player at the first opportunity to claim a draw, allowing us to remove the confusing Claim Draw menu item and also the gear menu with it. (It’s been replaced by a Resign button.)

Easier Easy Modes

The computer player in Four-in-a-row used to be practically impossible to defeat, even on level one. Nikhar Agrawal wrote a new artificial intelligence for this game as part of his Google Summer of Code project, so now it’s actually possible to win at connect four. And beginning with Iagno 3.14.1, the computer player is much easier to beat when set to level one (the default). Our games are supposed to be simple to play, and it’s not fun when the easiest difficulty level is overwhelming.

Teaser

There have also been plenty of smaller improvements to other games. In particular, Arnaud Bonatti has fixed several Iagno and Sudoku bugs, and improved the window layouts for several of our games. He also wrote a new game that will appear in GNOME 3.16. But that has nothing to do with 3.14, so I can’t show you that just yet, now can I? For now, I will just say that it will prominently feature the Internet’s favorite animal.

Happy gaming!

Mon 2014/Oct/20

  • Together with GNOME 3.14, we have released Web 3.14. Michael Catanzaro, who has been doing an internship at Igalia for the last few months, wrote an excellent blog post describing the features of this new release. Go and read his blog to find out what we've been doing while we wait for his new blog to be syndicated to Planet GNOME.

  • I've started doing two exciting things lately. The first one is Ashtanga yoga. I had been wanting to try yoga for a long time now, as swimming and running have been pretty good for me but at the same time have made my muscles pretty stiff. Yoga seemed like the obvious choice, so after much thought and hesitation I started visiting the local Ashtanga Yoga school. After a month I'm starting to get somewhere (i.e. my toes) and I'm pretty much addicted to it.

    The second thing is that I started playing the keyboards yesterday. I used to toy around with keyboards when I was a kid but I never really learned anything meaningful, so when I saw an ad for a second-hand WK-1200, I couldn't resist and got it. After an evening of practice I already got the feel of Cohen's Samson in New Orleans and the first 16 bars of Verdi's Va, pensiero, but I'm still horribly bad at playing with both hands.

October 17, 2014

2014-10-17: Friday

  • Early to rise; quick call, mail, breakfast; continued on slideware - really thrilled to use droidAtScreen to demo the LibreOffice on Android viewer.
  • Off to the venue in the coach; prepped slides some more, gave a talk - rather a hard act to follow: the end of the previous talk featured a (male) strip-tease, mercifully aborted before it went too far. Presented my slides, informed by a few recent local discussions:
    Hybrid PDF of LibreOffice under-development slides
  • Quick lunch, caught up with mail, customer call, poked Neil & Daniel, continued catching up with the mail & interaction backlog.
  • Conference ended - overall an extremely friendly & positive experience, in a lovely location - most impressed by my first trip to Brazil; kudos to the organizers; and really great to spend some time with Eliane & Olivier on their home turf.

ffs ssl

I just set up SSL/TLS on my web site. Everything can be had via https://wingolog.org/, and things appear to work. However, the process of transitioning even a simple web site to SSL is so clownshoes bad that it's amazing anyone ever does it. So here's an incomplete list of things that can go wrong when you set up TLS on a web site.

You search "how to set up https" on the Googs and click the first link. It takes you here which tells you how to use StartSSL, which generates the key in your browser. Whoops, your private key is now known to another server on this internet! Why do people even recommend this? It's the worst of the worst of Javascript crypto.

OK so you decide to pay for a certificate, assuming that will be better, and because who knows what's going on with StartSSL. You've heard of RapidSSL so you go to rapidssl.com. WTF their price is 49 dollars for a stupid certificate? Your domain name was only 10 dollars, and domain name resolution is an actual ongoing service, unlike certificate issuance that just happens one time. You can't believe it so you click through to the prices to see, and you get this:

Whatttttttttt

OK so I'm using Epiphany on Debian and I think that uses the system root CA list which is different from what Chrome or Firefox do but Jesus this is shaking my faith in the internet if I can't connect to an SSL certificate provider over SSL.

You remember hearing something on Twitter about cheaper certs, and oh ho ho, it's rapidsslonline.com, not just RapidSSL. WTF. OK. It turns out Geotrust and RapidSSL and Verisign are all owned by Symantec anyway. So you go and you pay. Paying is the first thing you have to do on rapidsslonline, before anything else happens. Welp, cross your fingers and take out your credit card, cause SSLanta Clause is coming to town.

Recall, distantly, that SSL has private keys and public keys. To create an SSL certificate you have to generate a key on your local machine, which is your private key. That key shouldn't leave your control -- that's why the DigitalOcean page is so bogus. The certification authority (CA) then needs to receive your public key and then return it signed. You don't know how to do this, because who does? So you Google and copy and paste command line snippets from a website. Whoops!

Hey neat it didn't delete your home directory, cool. Let's assume that your local machine isn't rooted and that your server isn't rooted and that your hosting provider isn't rooted, because that would invalidate everything. Oh what so the NSA and the five eyes have an ongoing program to root servers? Um, well, water under the bridge I guess. Let's make a key. You google "generate ssl key" and this is the first result.

# openssl genrsa -des3 -out foo.key 1024

Whoops, you just made a 1024-bit key! I don't know if those are even accepted by CAs any more. Happily if you leave off the 1024, it defaults to 2048 bits, which I guess is good.
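So what you presumably want is more like this (explicitly 2048 bits, and no -des3):

# openssl genrsa -out foo.key 2048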

Also you just made a key with a password on it (that's the -des3 part). This is eminently pointless. In order to use your key, your web server will need the decrypted key, which means it will need the password to the key. Adding a password does nothing for you. If you lost your private key but you did have it password-protected, you're still toast: the available encryption ciphers are meant to be fast, not hard to break. Any serious attacker will crack it directly. And if they have access to your private key in the first place, encrypted or not, you're probably toast already.
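If you already made a passworded key, you can strip the passphrase off instead of regenerating (it will prompt you for the password one last time; watch the file permissions on the output):

# openssl rsa -in foo.key -out foo.key.insecure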

OK. So let's say you make your key, and make what's called the "CSR" (certificate signing request), to ask for the cert.

# openssl req -new -key foo.key -out foo.csr

Now you're presented with a bunch of pointless-looking questions like your country code and your "organization". Seems pointless, right? Well now I have to live with this confidence-inspiring dialog, because I left off the organization:

Don't mess up, kids! But wait there's more. You send in your CSR, finally figure out how to receive mail for hostmaster@yourdomain.org because that's what "verification" means (not, god forbid, control of the actual web site), and you get back a certificate. Now the fun starts!

How are you actually going to serve SSL? The truly paranoid use an out-of-process SSL terminator. Seems legit except if you do that you lose any kind of indication about what IP is connecting to your HTTP server. You can use a more HTTP-oriented terminator like bud but then you have to mess with X-Forwarded-For headers and you only get them on the first request of a connection. You could just enable mod_ssl on your Apache, but that code is terrifying, and do you really want to be running Apache anyway?

In my case I ended up switching over to nginx, which has a startlingly underspecified configuration language, but for which the Debian defaults are actually not bad. So you uncomment that part of the configuration, cross your fingers, Google a bit to remind yourself how systemd works, and restart the web server. Haich Tee Tee Pee Ess ahoy! But did you remember to disable the NULL authentication method? How can you test it? What about the NULL encryption method? These are actual things that are configured into OpenSSL, and specified by standards. (What is the use of a secure communications standard that does not provide any guarantee worth speaking of?) So you google, copy and paste some inscrutable incantation into your config, turn them off. Great, now you are a dilettante tweaking your encryption parameters, I hope you feel like a fool because I sure do.

Except things are still broken if you allow RC4! So you better make sure you disable RC4, which incidentally is exactly the opposite of the advice that people were giving out three years ago.
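For what it's worth, in nginx this dilettante tweaking boils down to a couple of lines like the following. A sketch, not gospel; cipher-string fashion changes every year or so. Here !aNULL and !eNULL kill the NULL authentication and encryption methods, and !RC4 is the bit from the previous paragraph.

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!RC4;
ssl_prefer_server_ciphers on;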

OK, so you took your certificate that you got from the CA and your private key and mashed them into place and it seems the web browser works. Thing is though, the key that signs your certificate is possibly not in the actual root set of signing keys that browsers use to verify the key validity. If you put just your key on the web site without the "intermediate CA", then things probably work but browsers will make an additional request to get the intermediate CA's key, slowing down everything. So you have to concatenate the text files with your key and the one with the intermediate CA's key. They look the same, just a bunch of numbers, but don't get them in the wrong order because apparently the internet says that won't work!
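Concretely, something like this, with your own cert first and the intermediate after it (file names hypothetical):

# cat foo.crt intermediate-ca.crt > chained.crt

and then point nginx (or whatever you're running) at the concatenated file:

ssl_certificate     /etc/nginx/ssl/chained.crt;
ssl_certificate_key /etc/nginx/ssl/foo.key;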

But don't put in too many keys either! In this image we have a cert for jsbin.com with one intermediate CA:

And here is the same but with a different root that signed the GeoTrust Global CA certificate. Apparently there was a time in which the GeoTrust cert hadn't been added to all of the root sets yet, and it might not hurt to include them all:

Thing is, the first one shows up "green" in Chrome (yay), but the second one shows problems ("outdated security settings" etc etc etc). Why? Because the link from Equifax to Geotrust uses a SHA-1 signature, and apparently that's not a good idea any more. Good times? (Poor Remy last night was doing some basic science on the internet to bring you these results.)

Or is Chrome denying you the green because it was RapidSSL that signed your certificate with SHA-1 and not SHA-256? It won't tell you! So you Google and apply snakeoil and beg your CA to reissue your cert, hopefully they don't charge for that, and eventually all is well. Chrome gives you the green.

Or does it? Probably not, if you're switching from a web site that is also available over HTTP. Probably you have some images or CSS or Javascript that's being loaded over HTTP. You fix your web site to have scheme-relative URLs (like //wingolog.org/ instead of http://wingolog.org/), and make sure that your software can deal with it all (I had to patch Guile :P). Update all the old blog posts! Edit all the HTMLs! And finally, green! You're golden!

Or not! Because if you left on SSLv3 support you're still broken! Also, TLSv1.0, which is actually greater than SSLv3 for no good reason, also has problems; and then TLS1.1 also has problems, so you better stick with just TLSv1.2. Except, except, older Android phones don't support TLSv1.2, and neither does the Googlebot, so you don't get the rankings boost you were going for in the first place. So you upgrade your phone because that's a thing you want to do with your evenings, and send snarky tweets into the ether about scumbag google wanting to promote HTTPS but not supporting the latest TLS version.

So finally, finally, you have a web site that offers HTTPS and HTTP access. You're good right? Except no! (Catching on to the pattern?) Because what happens is that people just type in web addresses to their URL bars like "google.com" and leave off the HTTP, because why type those stupid things. So you arrange for http://www.wobsite.com to redirect to https://www.wobsite.com for users that have visited the HTTPS site. Except no! Because any network attacker can simply strip the redirection from the HTTP site.
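The redirect itself is a tiny nginx server block, something like this (hypothetical names, naturally):

server {
    listen 80;
    server_name www.wobsite.com;
    return 301 https://www.wobsite.com$request_uri;
}

Which, again, only helps the people who aren't being attacked at that moment.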

The "solution" for this is called HTTP Strict Transport Security, or HSTS. Once a visitor visits your HTTPS site, the server sends a response that tells the browser never to fetch HTTP from this site. Except that doesn't work the first time you go to a web site! So if you're Google, you friggin add your name to a static list in the browser. EXCEPT EVEN THEN watch out for the Delorean.

And what if instead they go to wobsite.com instead of the www.wobsite.com that you configured? Well, better enable HSTS for the whole site, but to do anything useful with such a web request you'll need a wildcard certificate to handle the multiple URLs, and those run like 150 bucks a year, for a one-bit change. Or, just get more single-domain certs and tack them onto your cert, using the precision tool cat, but don't do too many, because if you do you will overflow the initial congestion window of the TCP connection and you'll have to wait for an ACK on your certificate before you can actually exchange keys. Don't know what that means? Better look it up and be an expert, or your wobsite's going to be slow!

If your security goals are more modest, as they probably are, then you could get burned the other way: you could enable HSTS, something could go wrong with your site (an expired certificate perhaps), and then people couldn't access your site at all, even if they have no security needs, because HTTP is turned off.

Now you start to add secure features to your web app, safe with the idea you have SSL. But better not forget to mark your cookies as secure, otherwise they could be leaked in the clear, and better not forget that your website might also be served over HTTP. And better check up on when your cert expires, and better have a plan for embedded browsers that don't have useful feedback to the user about certificate status, and what about your CA's audit trail, and better stay on top of the new developments in security! Did you read it? Did you read it? Did you read it?
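Marking a cookie as secure is just more attribute soup on the Set-Cookie line; something like this (cookie name and value invented for the example), where Secure keeps it off plain HTTP and HttpOnly keeps it away from scripts:

Set-Cookie: session=opaque-random-token; Secure; HttpOnly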

It's a wonder anything works. Indeed I wonder if anything does.

XNG: GIFs, but better, and also magical

It might seem like the GIF format is the best we’ll ever see in terms of simple animations. It’s a quite interesting format, but it doesn’t come without its downsides: quite old LZW-based compression, a limited color palette, and no support for using old image data in new locations.

Two competing specifications for animations were developed: APNG and MNG. The two camps have fought wildly and we’ve never gotten a resolution, and different browsers support different formats. So, for the widest range of compatibility, we have just been using GIF… until now.

I have developed a new image format which I’m calling “XNG”, which doesn’t have any of these restrictions, and has the possibility to support more complex features, and works in existing browsers today. It doesn’t require any new features like <canvas> or <video> or any JavaScript libraries at all. In fact, it works without any JavaScript enabled at all. I’ve tested it in both Firefox and Chrome, and it works quite well in either. Just embed it like any other image, e.g. <img src="myanimation.xng">.

It’s magic.

Have a few examples:

I’ve been looking for other examples as well. If you have any cool videos you’d like to see made into XNGs, write a comment and I’ll try to convert it. I wrote out all of these XNG files out by hand.

Over the next few days, I’ll talk a bit more about XNG. I hope all you hackers out there look into it and notice what I’m doing: I think there are certainly a lot of unexplored ideas in what I’ve developed. We can push this envelope further.

EDIT: Yes, guys, I see all your comments. Sorry, I’ve been busy with other stuff, and haven’t gotten a chance to moderate all of them. I wasn’t ever able to reproduce the bug in Firefox about the image hanging, but Mario Klingemann found a neat trick to get Firefox to behave, and I’ve applied it to all three XNGs above.

October 16, 2014

2014-10-16: Thursday

  • To the venue, crazy handing out of collateral, various talks with people; Advisory Board call, LibreOffice anniversary Cake cutting and eating (by massed hordes).
  • It is extraordinary, and encouraging, to see how many young ladies are at the conference, and (hopefully) getting engaged with Free Software: never seen so many at other conferences. As an unfortunate down-side: was amused to fob off an unsolicited offer of marriage from a 15yr old: hmm.
  • Chewed some mail, bus back in the evening; worked on slides until late, for talk tomorrow.

The Wait Is Over: MimeKit and MailKit Reach 1.0

After about a year in the making for MimeKit and nearly 8 months for MailKit, they've finally reached 1.0 status.

I started really working on MimeKit about a year ago wanting to give the .NET community a top-notch MIME parser that could handle anything the real world could throw at it. I wanted it to run on any platform that can run .NET (including mobile) and do it with remarkable speed and grace. I wanted to make it such that re-serializing the message would be a byte-for-byte copy of the original so that no data would ever be lost. This was also very important for my last goal, which was to support S/MIME and PGP out of the box.

All of these goals for MimeKit have been reached (partly thanks to the BouncyCastle project for the crypto support).

At the start of December last year, I began working on MailKit to aid in the adoption of MimeKit. It became clear that without a way to inter-operate with the various types of mail servers, .NET developers would be unlikely to adopt it.

I started off implementing an SmtpClient with support for SASL authentication, STARTTLS, and PIPELINING.

Soon after, I began working on a Pop3Client that was designed such that I could use MimeKit to parse messages on the fly, directly from the socket, without needing to read the message data line-by-line looking for a ".\r\n" sequence and concatenating the lines into a massive memory buffer before parsing could begin. This, combined with the fact that MimeKit's message parser is orders of magnitude faster than any other .NET parser I could find, makes MailKit the fastest POP3 library the world has ever seen.

After a month or so of avoiding the inevitable, I finally began working on an ImapClient; the initial prototype took me roughly two weeks to produce (compared to a single weekend for each of the other protocols). After many months of implementing dozens of the more widely used IMAP4 extensions (including the GMail extensions) and tweaking the APIs (along with bug fixing) thanks to feedback from some of the early adopters, I believe that it is finally complete enough to call 1.0.

In July, at the request of someone involved with a number of the IETF email-related specifications, I also implemented support for the new Internationalized Email standards, making MimeKit and MailKit the first - and only - .NET email libraries to support these standards.

If you want to do anything at all related to email in .NET, take a look at MimeKit and MailKit. I guarantee that you will not be disappointed.

Communities in Real Life

Add this to the list of things I never expected to be doing: opening a grocery store.

At last year’s Open Help Conference, I gave a talk titled Community Lessons From IRL. I told the story of how I got involved in opening a grocery store, and what I’ve learned about community work when the community is your neighbors.

I live in Cincinnati, in a beautiful, historic, walkable neighborhood called Clifton. We pride ourselves on being able to walk to get everything we need. We have a hardware store, a pharmacy, and a florist. We have lots of great restaurants. We had a grocery store, but after generations of serving the people of Clifton, our neighborhood IGA closed its doors nearly four years ago.

The grocery store closing hurt our neighborhood. It hurt our way of life. Other shops saw their business decline. Quite a few even closed their doors. At restaurants and coffee houses and barber shops, all anybody talked about was the grocery store being closed. When will it reopen? Has anybody contacted Trader Joe’s/Whole Foods/Fresh Market? Somebody should do something.

“Somebody should do something” isn’t doing something.

If there’s one thing I’ve learned from over a decade of working in open source, it’s that things only get done when people get up and do them. Talk is cheap, whether it’s in online forums or in the barber shop. So a group of us got up and did something.

Last August, a concerned resident sent out a message that if anybody wanted to take action, she was hosting a gathering at her house. Sometimes just hosting a gathering at your house is all it takes to get the ball rolling. Out of that meeting came a team of people committed to bringing a full-service grocery store back to Clifton as a co-op, owned and controlled by the community.

Thus was born Clifton Market.

[Photo: Clifton Market display in the window of the vacant IGA building]

For the last 14 months, I’ve spent whatever free time I could muster trying to open a grocery store. Along with an ever-growing community of volunteers, I’ve surveyed the neighborhood, sold shares, created a business plan, talked to contractors, negotiated real estate, and learned far more about the grocery industry than I ever expected. In many ways, I’ve been well-served by my experience working with volunteer communities in GNOME and other projects. But a lot of things are different when the project is in your backyard staring you down each day.

Opening a grocery store costs money, and we’ve been working hard on raising the money through shares and owner loans. If you want to support our effort, you can buy a share too.

Thu 2014/Oct/16

My first memories of Meritähti are from that first weekend, in late August 2008, when I had just arrived in Helsinki to spend what was supposed to be only a couple of months doing GTK+ and Hildon work for Nokia. Lucas, who was still in Finland at the time, had recommended that I check the program for the Night of the Arts, an evening that serves as the closing of the summer season in the Helsinki region and consists of several dozen street stages set up all over with all kinds of performances. It sounded interesting, and I was looking forward to checking out the evening vibe.

I was at the Ruoholahti office that Friday, when Kimmo came over to my desk to invite me to join his mates for dinner. Having the Night of the Arts in mind, I suggested we grab some food downtown before finding an interesting act somewhere, to which he replied emphatically "No! We first go to Meritähti to eat, a place nearby — it's our Friday tradition here." Surprised at the tenacity of his objection and being the new kid in town, I obliged. I can't remember now who else joined us in that summer evening before we headed to the Night of the Arts, probably Jörgen, Marius, and others, but that would be the first of many more to come in the future.

I started taking part in that tradition and I always thought, somehow, that those Meritähti evenings would continue for a long time. Because even after the whole Hildon team was dismantled, even after many of the people in our gang left Nokia and others moved on to work on the now also defunct MeeGo, we still met in Meritähti once in a while for food, a couple of beers, and good laughs. Even after Nokia closed down the Ruoholahti NRC, even after everyone I knew had left the company, even after the company was sold out, and even after half the people we knew had left the country, we still met there for a good old super-special.

But those evenings were not bound to be eternal, and like most good things in life, they are coming to an end. Meritähti is closing in the next weeks, and the handful of renegades who stuck around in Helsinki will have to find a new place to spend our evenings together. László, the friendly Hungarian who ran the place with his family, is moving on to less stressful endeavors. Keeping a bar is too much work, he told us, and everyone has the right to one day say enough. One would want to do or say something to change his mind, but what right do we have? We should instead be glad that the place was there for us and that we had the chance to enjoy uncountable evenings under the starfish lamps that gave the place its name. If we're feeling melancholic, we will always have Kaurismäki's Lights in the Dusk and that glorious scene involving a dog in the cold, to remember one of those many times when conflict would ensue after a careless dog-owner stopped in for a pint in the winter.

Long live Meritähti, long live László, and köszönöm!

October 15, 2014

A Londoner on Losing the YES Vote on BBC 6 O'Clock News!

Here I am backing up my words about losing the YES vote with concrete action... Well, sort of: today I made an appearance on prime time TV and said some words in support of Nicola Sturgeon of the SNP leading the next phase of the struggle for Scottish independence.


I am trying to record a screencast of the relevant item because the direct BBC video link expires tomorrow and might not be available abroad either. Unfortunately, the instructions in this Google Plus article are no longer working for me. Does anyone at GNOME have any ideas on what I might be able to do to sort that out, using GNOME Shell 3.14.0 on Fedora 20?

Thanks to one kind soul who has uploaded the item online, it can be viewed outside the UK, and there is no expiry date! At some point I would still like to solve the mystery of the screencast sound (if anyone has any ideas about that?), but for now, at least: Bairns not Bombs!

GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summary and descriptions were not translated and hard to modify. We used the pre-0.6 format AppData files as the MetaInfo specification had not existed when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. This are the notes of what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set; for instance, “SourceCode” would consist of “SourceCodePro“, “SourceSansPro-Regular” and “SourceSansPro-ExtraLight“. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in the application designing something. Font components need a one-line summary for GNOME Software and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>Liberation</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>Liberation</name>
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <updatecontact>richard_at_hughsie_dot_com</updatecontact>
  <url type="homepage">http://fedorahosted.org/liberation-fonts/</url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end-users who probably don’t know what TTF means or what MSCoreFonts are.

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split up into subpackages by a packager. In this case, each subpackage needs to ship something like this into /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <id>LiberationSerif</id>
  <metadata_license>CC0-1.0</metadata_license>
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

October 14, 2014

Tracker – What do we do now we’re stable?

Over the past month or two, I’ve spent time working on various feature branches for Tracker. This comes after the stable 1.2 release and the new feature set it introduced.

So a lot has been going on with Tracker internally. I’ve been relatively quiet on my blog of late and I thought it would be a good idea to run a series of blogs relating to what is going on within the project.

Among my blogs, I will be covering:

  • What features did we add in Tracker 1.2 – how can they benefit you?
  • The difference between URIs, URNs, URLs and IRIs – dispelling the confusion behind the bugs we’ve had reported
  • Making Tracker more Git-like – we’re moving towards a new ‘git’ style command line with some new features on the way
  • Preparing for the divorce – is it time to finally split tracker-store, the ontologies and the data-miners?
  • Making Tracker even more idle – using cgroups and perhaps keyboard/mouse idle notifications

If anyone has any questions or concerns they would like to see answered in articles around these subjects, please comment below and I will do my best to address them! :)

.NET Foundation: Forums and Advisory Council

Today, I want to share some news from the .NET Foundation.

Forums: We are launching the official .NET Foundation forums to engage with the larger .NET community and to start the flow of ideas on the future of .NET, the community of users of .NET, and the community of contributors to the .NET ecosystem.

Please join us at forums.dotnetfoundation.org. We are using the powerful Discourse platform. Come join us!

Advisory Council: We want to make the .NET Foundation open and transparent. To achieve that goal, we decided to create an advisory council. But we need your help in shaping the advisory council: its role, its reach, its obligations and its influence on the foundation itself.

To bootstrap the discussion, we have a baseline proposal that was contributed by Shaun Walker. We want to invite the larger .NET community to a conversation about this proposal and help us shape the advisory council.

Check out the Call for Public Comments which has a link to the baseline proposal and come join the discussion at the .NET Forums.

GNOME Summit wrap-up

Day 3

The last day of the Summit was a hacking-focused day.

In one corner, Rob Taylor was bringing up Linux and GNOME on an ARM tablet. Next to him, Javier Jardon was working on a local gnome-continuous instance. Michael Catanzaro and Robert Schroll were working on D-Bus communication to let Geary use WebKit2. Christian Hergert was adding editor preferences to gnome-builder and reviewed design ideas with Ryan Lerch. I spent most of the day teaching Glade about new widgets in GTK+. I also merged OpenGL support for GTK+.

Overall, a quite productive day.

Thanks again to the sponsors, and to Walter Bender and MIT for hosting us.

October 13, 2014

quiet strain

as promised during GUADEC, I’m going to blog a bit more about the development of GSK — and now that I have some code, it’s actually easier to do.

so, let’s start from the top, and speak about GDK.

in April 2008 I was in Berlin, enjoying the city, the company, and good food, and incidentally attending the first GTK+ hackfest. those were the days of Project Ridley, and when the plan for GTK+ 3.0 was to release without deprecated symbols and with all the instance structures sealed.

in the long discussions about the issue of a “blessed” canvas library to be used by GTK app developers and by the GNOME project, we ended up discussing the support of the OpenGL API in GDK and GTK+. the original bug had been opened by Owen about 5 years prior, and while we had ancillary libraries like GtkGLExt and GtkGLArea, the integration was a pretty sore point. the consensus at the end of the hackfest was to provide wrappers around the platform-specific bits of OpenGL inside GDK, enough to create a GL context and bind it to a specific GdkWindow, to let people draw with OpenGL commands at the right time in the drawing cycle of GTK+ widgets. the consensus was also that I would look at the bug, as a person that at the time was dealing with OpenGL inside tool kits for his day job.

well, that didn’t really work out, because cue to 6 years after that hackfest, the bug is still open.

to be fair, the landscape of GTK and GDK has changed a lot since those days. we actually released GTK+ 3.0, and with a lot more features than just deprecations removal; the whole frame cycle is much better, and the paint sequence is reliable and completely different than before. yet, we still have to rely on poorly integrated external libraries to deal with OpenGL.

right after GUADEC, I started hacking on getting the minimal amount of API necessary to create a GL context, and being able to use it to draw on a GTK widget. it turns out that it wasn’t that big of a job to get something on the screen in a semi-reliable way — after all, we already had libraries like GtkGLExt and GtkGLArea living outside of the GTK git repository that did that, even if they had to use deprecated or broken API. the complex part of this work involved being able to draw GL inside the same infrastructure that we currently use for Cairo. we need to be able to synchronise the frame drawing, and we need to be able to blend the contents of the GL area with both content that was drawn before and after, likely with Cairo — otherwise we would not be able to do things like drawing an overlay notification on top of the usual spinning gears, while keeping the background color of the window:

welcome to the world of tomorrow (for values of tomorrow close to 2005)

luckily, thanks to Alex, the amount of changes in the internals of GDK was kept to a minimum, and we can enjoy GL rendering running natively on X11 and Wayland, using GLX or EGL respectively.

on top of the low level API, we have a GtkGLArea widget that renders all the GL commands you submit to it, and it behaves like any other GTK+ widgets.

today, Matthias merged the topic branch into master, which means that, barring disastrous regressions, GTK+ 3.16 will finally have native OpenGL support — and we’ll be one step closer to GSK as well.

right now, there’s still some work to do — namely: examples, performance, documentation, porting to MacOS and Windows — but the API is already fairly solid, so we’d all like to get feedback from the users of libraries like GtkGLExt and GtkGLArea, to see what they need or what we missed. feedback is, as usual, best directed at the gtk-devel mailing list, or on the #gtk+ IRC channel.

Technology Catchup

Coincidentally three different people asked me in the last month, to write about new technologies that they should be knowing, to make them more eligible to get a job in a startup. All these people have been C/C++ programmers, in big established companies, for about a decade now. Some of them have had only glimpses of any modern technologies.

I have tried a little bit (with moderate success) to work in all layers of programming with most of the popular modern technologies, by writing little-more-than-trivial programs (long before I heard of the fancy title "full stack developer"). So here I am writing a "technology catchup" post, hoping that it may be useful for some people, who want to know what has happened in the technologies in the last decade or so.

Disclaimer 1: The opinions expressed here are totally biased; they are just my opinions. You should work with the individual technologies to know their true merits.

Disclaimer 2: Instead of learning everything, I personally recommend people to pick whatever they feel they are connected to. I, for example, could not feel connected to node-js even after toying with it for a while, but fell in love with Go. Tastes differ and nothing is inferior. So give everything a good try and pick your choice. Also remember what Donald Knuth said: "There is a difference between knowing the name of something and knowing something". So learn deeply.

Disclaimer 3: From whatever I have observed, getting hired in a startup is more about being in the right circles of connection than being a technology expert. A surprisingly large number of startups start with a familiar technology rather than the right technology, and then change their technology once the company is established.

Disclaimer 4: This is actually not a complete list of things one should know. These are just things that I have come across and experimented with at least a little. There are a lot more interesting things that I will have missed. If you think something must have been in the list, please comment :-)

With those disclaimers away, let us cut to the chase.


Version Control Systems

The most prominent change in the open source arena, in the last decade or so, is the invention of Git. It is a version control system initially designed for managing the kernel sources, and it has since become the de facto VCS for most modern companies and projects.

GitHub is a website that allows people to host their open source projects. Often startups recruit people based on their GitHub profile. Even big companies like Microsoft, Google, Facebook, Twitter, Dropbox, etc. have their own GitHub accounts. I personally have received more job queries through my GitHub projects than via my LinkedIn profile in the last year.

Bitbucket is another site that allows people to host code, and it even offers private repos. A lot of the startups that I know of use this, along with the JIRA project management software. This is your equivalent of MS Project in some sense.

I have observed that most of the startups founded by people who come from banking or finance companies use Subversion. Git is the choice for people from tech companies though. Mercurial is another open source, distributed VCS, which has lost a lot of its limelight in recent times due to Git. Fossil is another VCS, from the author of SQLite, Dr. Richard Hipp. If you can learn only one VCS for now, start with Git.
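If you have never touched Git, here is a two-minute taste (assuming git is installed; the project name is made up):

$ git init myproject
$ cd myproject
$ echo 'hello' > README
$ git add README
$ git commit -m "First commit"
$ git log --oneline

Everything here happens locally; pushing to GitHub or Bitbucket comes later, once you add a remote.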

Programming Languages & Frameworks

JavaScript has evolved into a leading programming language of the last decade; it is even referred to as the x86 of the web. From its humble beginnings as a client-side scripting language for validating whether the user typed a number or text, it has grown into a behemoth and entered even server-side programming through Node.js. For the Model-View-Controller pattern, JavaScript has gained the AngularJS framework. JS is a dynamically typed language, and there are languages like CoffeeScript that compile down to it to offer a different flavour of syntax.

Python is another dynamically typed, interpreted programming language. Personally, I find it a lot more tasteful than JavaScript; it is easy on the eyes too. It helps with rapid application development and is available by default on almost all Linux distros and Mac machines. Django is a web framework built on Python to make it easy to develop web applications. In addition to being used in a lot of startups, it is used in big companies like Google and Dropbox. There are variants of the Python runtime that let you run it on the JVM (Jython) or on the .NET CLR (IronPython). I have personally found the language lacking in performance, though, which I elaborate on in a subsequent section.

Ruby is an older programming language that shot to fame in recent years through the popular web application framework Ruby on Rails, often called just Rails. I learnt a lot of engineering philosophies, such as DRY and CoC (convention over configuration), while learning RoR.

All of the above languages and frameworks use a package manager, such as npm, Bower, pip, or gems, to install libraries easily.

Go is my personal favorite among the new languages to learn. I see Go becoming as vital and prominent a programming language as C, C++, or Java in the next decade. It was developed at Google for creating large-scale systems. It is a statically typed, garbage-collected language that generates native machine code and makes writing concurrent code easy.

Go is the default language I have used for any programming task in the last year or so. It is amazingly fast even though (or perhaps because?) it is still in the 1.x series. At my day job we did a prototype in both Go and Python, and for a highly concurrent workflow on the same hardware, Go crushed Python in performance (20 seconds vs 5 minutes). I won't be surprised if a lot of Python and Ruby code gets converted to Go in the next round of rewrites. Personally, I have found the quality of Go libraries to be much higher than Ruby's or Node.js's as well, probably because not everyone has adopted the language yet. However, this could just be my own biased opinion.
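
To give a flavour of what "writing concurrent code easily" means, here is a minimal, self-contained sketch (not code from the prototype I mentioned, just an illustration) that fans work out to goroutines and collects the results over a channel:

package main

import (
    "fmt"
    "sync"
)

// square stands in for any CPU-bound unit of work.
func square(n int) int { return n * n }

func main() {
    nums := []int{1, 2, 3, 4, 5}
    results := make(chan int, len(nums)) // buffered, so sends never block

    var wg sync.WaitGroup
    for _, n := range nums {
        wg.Add(1)
        go func(n int) { // one goroutine per work item
            defer wg.Done()
            results <- square(n)
        }(n)
    }
    wg.Wait()
    close(results)

    for r := range results {
        fmt.Println(r)
    }
}

The goroutine-per-item pattern is cheap because the runtime multiplexes goroutines onto OS threads; the equivalent in Python needs a thread or process pool and, for CPU-bound work, runs into the GIL.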

If you would like to get fancy with functional programming, you can learn Scala (on the JVM), F# (on .NET), Haskell, Erlang, etc. The last two are very old, by the way, but still in use today; most recently, WhatsApp was known to use Erlang. D is also seen in the news, mostly thanks to Facebook. Dart is another language from Google, but it is yet to receive any wide deployment afaik, even with Google's massive marketing machinery behind it. It has been compared to VBScript, has drawn criticism from Mozilla, WebKit (the rendering engine that powers Safari, and earlier Chrome), and Microsoft IE, and is as of now Chrome-only. Dart is the work of Lars Bak et al., the people who gave us V8, Chrome's JavaScript engine.

Rust is another programming language, aimed at high-performance concurrent systems, but I have not played around with it, as it does not maintain a stable API and is not 1.0 yet. Julia is another programming language, aimed at distributed systems, about which I have heard a lot of praise, but it still remains an exotic language afaik. R is a language I have seen in a lot of corporate demos where the presenters wanted to show statistics and charts. Learning it may be useful even if you are not a programmer but work with numbers (like a project manager).


There is also the Swift programming language from Apple for writing iOS apps. I have not tried Swift yet, but from my experience of using Objective-C, it cannot be worse.

Bootstrap is a nice front-end framework from Twitter, which provides various GUI elements that you can incorporate into your application to rapidly prototype beautiful applications that stay fluid even when viewed on mobile.

jQuery is a popular JavaScript library that is ubiquitous. Cascading Style Sheets (CSS for short) is a style sheet language that configures the look of web page UI elements; CSS has matured to the point of supporting animations too. You should ideally spend a few weeks learning HTML5 and CSS.

Text Editors

Sublime Text is what the cool kids use as their editor these days. I have found the tutorial on Tuts+ to be extraordinarily good at explaining Sublime. It is free (as in beer) but not open source.

Atom is a text editor from GitHub built using Node.js and Chromium. I did not find a Linux binary and so did not bother to investigate it, but I have heard that it suits JavaScript programmers better than anyone else, as the editor can be extended in JavaScript itself.

Brackets is another editor that I have heard good things about. Lime is an editor developed in Go, aimed at being an open-source replacement for Sublime Text.

Personally, after trying various text editors, I have always come back to vim. There have been a few good plugins for vim in recent times: Vundle and Pathogen are nice plugin managers that ease the installation of plugins, YouCompleteMe is a nice plugin for auto-completion, and spf13-vim is a nice distribution of vim where various plugins and colorschemes come pre-packaged.

Distributed Computing

In the modern day of computing, most programs are driven by a Service-Oriented Architecture (SOA for short), and web services are the preferred way of communication among servers as well. While we are talking about services, please read this nice piece by Steve Yegge.
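
To make "web services" concrete: in Go (my language of choice, as noted above), a minimal JSON-over-HTTP endpoint, the kind of building block an SOA is assembled from, looks roughly like this sketch (the port and payload are arbitrary):

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

func main() {
    // A tiny service endpoint that other servers can call over HTTP.
    http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}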

memcached is a distributed (across multiple machines) caching system which can be used in front of your database. It was initially developed by Brad Fitzpatrick while he was running LiveJournal; he is now (2014) a member of the Go team at Google. While at Google, he started groupcache, which, as the project page says, is a replacement for memcached in many cases.
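
As a taste of how simple the client side of memcached is, here is a minimal sketch using Brad Fitzpatrick's own Go client, gomemcache (the server address is a placeholder for wherever your memcached runs):

package main

import (
    "fmt"
    "log"

    "github.com/bradfitz/gomemcache/memcache"
)

func main() {
    // Point the client at one or more memcached servers.
    mc := memcache.New("127.0.0.1:11211")

    // Cache a value...
    err := mc.Set(&memcache.Item{Key: "greeting", Value: []byte("hello")})
    if err != nil {
        log.Fatal(err)
    }

    // ...and read it back, ideally before ever hitting the database.
    it, err := mc.Get("greeting")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(it.Value))
}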

GoogleFileSystem (GFS) is a seminal paper on how Google created a filesystem to suit its large-scale data processing needs. There is a database built on top of this filesystem, named BigTable, which powered Google's infrastructure. Apache Hadoop is an open source implementation of these concepts, originally started at Yahoo and now a top-level Apache project. HDFS is the Hadoop equivalent of GFS. Hive and Pig are technologies for querying and analyzing data in Hadoop.

As with the evolution of any software, GFS has evolved into the Colossus filesystem and BigTable into the Spanner distributed database. I recommend reading these papers even if you are not going to do any distributed computing development.

Cassandra is another distributed database, started at Facebook initially but used in many companies such as Netflix and Twitter. I have used Cassandra more than any other distributed project and actually like it a lot. It uses an SQL-like query language called CQL (Cassandra Query Language) and is modelled after the Dynamo paper from Amazon. I am very tempted to write an alternative to it in Go, just to have the experience of writing a large-scale distributed system instead of just using one as a client, but I have not yet found a good dataset or use case with which to test it.
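
Since I mentioned pairing Go with Cassandra: a minimal CQL round trip with the community gocql driver looks roughly like this sketch (the keyspace and table here are hypothetical and assumed to already exist):

package main

import (
    "fmt"
    "log"

    "github.com/gocql/gocql"
)

func main() {
    // Connect to a (hypothetical) local cluster and keyspace.
    cluster := gocql.NewCluster("127.0.0.1")
    cluster.Keyspace = "demo"
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // CQL reads a lot like SQL.
    err = session.Query(`INSERT INTO users (id, name) VALUES (?, ?)`,
        gocql.TimeUUID(), "foo").Exec()
    if err != nil {
        log.Fatal(err)
    }

    var name string
    iter := session.Query(`SELECT name FROM users`).Iter()
    for iter.Scan(&name) {
        fmt.Println(name)
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }
}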

MongoDB is another document-oriented database, which I tried using for a pet project of mine. I don't remember exactly, but there were some problems with respect to Unicode handling. The project was done before Go reached 1.0, so the problem could have been at either end.

Most of the new-age databases are called NoSQL databases, but what that really means is that the database skips a lot of features (such as datatype validation, stored procedures, etc.) and tries to grow by scaling out instead of scaling up.

Cloud

OpenStack is a suite of open source projects that help you create a private cloud. Deltacloud is a project initially started by Red Hat, and now an Apache top-level project, to provide a single API layer that works across any cloud on the backend. The project is written in Ruby. I was initially interested in participating in its development, until I got introduced to Go and went off on a different tangent.

Starting a software company is a very easy task in today's world. The public clouds are becoming cheaper every day, and their capacity can be provisioned instantly.

Amazon Web Services provides an umbrella of various public cloud offerings. I have used Amazon EC2, which is a way to create a Linux (or Windows) VM that runs in Amazon's datacenters; the machines come in various sizes. Amazon S3 is an offering that provides a way to store data in buckets; Dropbox uses it heavily for storing all your data. There are various other services too. In some of our prototyping, we found the performance of Amazon EC2 to be mostly consistent, even in the free tier.

Google is not lagging behind with its cloud offerings either. When Google Reader was shut down, I used Google's App Engine to deploy an alternative FOSS product, and I was blown away by the simplicity of creating applications on top of it. Google Compute Engine is the way to get VMs running on the Google Cloud. As with Amazon, there are plenty of other services too.

There are plenty of other players, like Microsoft Azure, Heroku, etc., but I do not have any experience with their offerings. While we are talking about the cloud, you should probably read about orchestration and know about at least ZooKeeper.

In-Process Databases

These are databases which you can embed into your application without needing a dedicated server; they run in your process space.

SQLite is one of the world's most widely deployed pieces of software, and it competes with fopen to be the default way to store data for your desktop applications (if you are still writing them ;) ). A new branch is also in the works, built around the latest rage in storage data structures, the log-structured merge tree.
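
"Competes with fopen" is not an exaggeration: the whole database is a single file next to your application. Here is a minimal sketch in Go, using the mattn/go-sqlite3 driver behind the standard database/sql interface (the file name and table are arbitrary):

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-sqlite3" // registers itself as driver "sqlite3"
)

func main() {
    // No server process: opening the database just opens a file.
    db, err := sql.Open("sqlite3", "./app.db")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS notes (body TEXT)`); err != nil {
        log.Fatal(err)
    }
    if _, err := db.Exec(`INSERT INTO notes (body) VALUES (?)`, "hello"); err != nil {
        log.Fatal(err)
    }

    var body string
    if err := db.QueryRow(`SELECT body FROM notes`).Scan(&body); err != nil {
        log.Fatal(err)
    }
    fmt.Println(body)
}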

LevelDB is a database written by the eminent Googlers (and trendsetters of technology in the last decade or so) Jeff Dean and Sanjay Ghemawat, who gave us MapReduce, GFS, etc. Facebook has forked it into RocksDB as well.

Kyoto Cabinet and LMDB are other projects in this space.

Linux Filesystems

We covered GFS, HDFS, etc. earlier; here we will look at other popular filesystems.

Btrfs is a copy-on-write filesystem for Linux. It is intended to become the de facto Linux filesystem in the future, possibly obsoleting the ext series in the longer run.

XFS is a filesystem that originally came to Linux from SGI. It is my personal favorite, and I have been using it on all my Linux machines. In addition to good performance, it offers robustness and comes with a load of features that are useful to me, like defragmentation.

We also have the big daddy of filesystems, ZFS, on Linux too.

Ceph is another interesting distributed filesystem; it works in kernel space and has been merged in the Linux kernel sources for a long time now. GlusterFS is another distributed filesystem, which works in userspace. Both of these filesystems focus on scaling out instead of scaling up.

Conclusion

Pick any of these technologies that you like and start writing a toy application with it, maybe something as simple as a ToDo application, and learn through all the stages. This approach has helped me; it may help you too.

I have written this post from a ThinkPad T430 running openSUSE Factory and GNOME Shell with a bunch of KDE tools. I like this machine. However, in the past few months I have realized that, in today's world, if you are a developer it is best to run Linux on your server and Mac on your laptop.

October 12, 2014

GNOME Summit update

The Boston summit is well underway, time for some update on what is happening.

Day 1

Breakfast

Around 20 people got together Saturday morning and, after bagels and coffee, gathered in a room and collected topics that everybody was interested in working on. With Christian in attendance, gnome-builder was high on the list, but OSTree and gnome-continuous also showed up several times.

Our first session was a discussion of the issues application writers have with GTK+. Examples that were mentioned are the headerbar child order changing between 3.10 and 3.12, and icon sizes changing for some icons between 3.12 and 3.14. We haven’t come to any great conclusions as to what we can do here, other than being careful about changes. GTK+ has a README file that gets updated with release notes for each major release, but it is not very visible. I will make an effort to draw attention to such potentially problematic changes earlier in the cycle.

Another kind of problem that was pointed out is that the old way of doing action-based menus with GtkAction and GtkUIManager has been deprecated, but the ‘new way’ with GAction, GMenuModel and GtkApplication has not really been fully developed yet; at least, there is no great documentation for migrating from the old to the new, or for just starting from scratch with the new way. The HowDoIs for this are a start, but not sufficient.

RMS

After the morning session, Richard Stallman came by and spent an hour with us, discussing the history of free software, current challenges, and how GNOME fits into this larger picture from his perspective.

In the afternoon, we had a long session discussing ‘future apps’, which is hard to summarize. It revolved around runtimes, SDKs, bundling, the pros and cons of btrfs vs OSTree, portals, how to get apps ported, and what to do about Qt apps. This thread covers a lot of the same ground.

I managed to land GType statistics support in GObject and GtkInspector.

Day 2

Over breakfast, we talked about the best way to communicate with a WebKit web view process, which somehow led us to kdbus. Takeaways from this discussion:

  • We (the GNOME release team) should push for getting all users of WebKit ported to WebKit2 this cycle
  • We should create a document that explains what application and service authors and distributors need to do when porting to kdbus

After breakfast, we started out with a live demo of gnome-builder, which looks really promising. Christian is reusing a lot of existing things, from GtkSourceView through Devhelp to gitg and Glade. But having a polished, unified application bringing it all together makes a real difference!

As this session was winding down, I started testing a GNOME 3.14.0 Wayland session, with the goal of recording as many of the smaller (and larger) issues as possible in bugs. The result so far can be found here. While I reported a long list of Wayland issues today, all is not bad: it was no problem at all to write this blog post in a Wayland session!

Thanks to Red Hat and Codethink, who are sponsoring this event.

The GNOME Infrastructure’s FreeIPA move behind the scenes

A few days ago I wrote about the GNOME Infrastructure moving to FreeIPA. That post was mainly an announcement to the relevant involved parties, with many informative details for contributors on how to properly migrate their account details from the old authentication system to the new one. Today's post is a follow-up to that announcement, covering the reasons behind our choice of FreeIPA, what we found interesting and compelling about the software, and why we think more projects (big or small) should migrate to it. Additionally, I'll provide some details about how I performed the migration from our previous OpenLDAP setup, with a step-by-step guide that will hopefully help more people migrate the infrastructure they manage.

The GNOME case

It’s very clear to everyone that an infrastructure should reflect the needs of its user base; in the case of GNOME, that is a multitude of developers, translators and documenters, and among them a very good number of Foundation members: contributors who have proven their non-trivial contributions and have received the status of members of the GNOME Foundation, with all the relevant benefits attached to it.

The situation we had before was very tricky: LDAP accounts were managed through our LDAP instance, while Foundation members were stored in a MySQL database, with many of the tables related to the yearly Board of Directors elections and one specifically meant to store the information of each member. One of the available fields on that table was defined as ‘userid’ and was supposed to store the LDAP ‘uid’ field, which the Membership Committee member processing a certain application had to update when accepting the application. This procedure had two issues:

  1. Membership Committee members had no access to LDAP information
  2. No checks were run on the web UI to verify that the ‘userid’ field was populated correctly, resulting in multiple inconsistencies between LDAP and the MySQL database

In addition to the above, Mango (the software that helped the GNOME administrative teams manage user data for many years) had no maintainer, no commits to its core since 2008, and several limitations.

What were we looking for as a replacement for the current setup?

It was very obvious to me that we would have to look around for possible replacements for Mango. What we were aiming for was software with the following characteristics:

  1. It had to come with a pre-built web UI providing a wide view of several LDAP fields
  2. The web UI had to be extensible in some form, as we had some custom LDAP schemas we wanted users to see and modify
  3. The software had to be actively developed and responsive to security reports (given the high impact a breach of LDAP could have)

FreeIPA clearly matched all our expectations on all the above points.

The Migration process – RFC2307 vs RFC2307bis

Our previous OpenLDAP setup followed RFC 2307, which means that, among the other available LDAP attributes (listed in the RFC under point 2.2), group membership was represented through the ‘memberUid’ attribute. An example:

cn=foundation,cn=groups,cn=compat,dc=gnome,dc=org
objectClass: posixGroup
objectClass: top
gidNumber: 524
memberUid: foo
memberUid: bar
memberUid: foobar

As you can see, each of the members of the group ‘foundation’ is represented using the ‘memberUid’ attribute followed by the ‘uid’ of the user itself. FreeIPA does not directly make use of RFC 2307 for its trees, but of RFC 2307bis instead. (RFC 2307bis was never published as an RFC by the IETF, as neither the author nor the companies that then adopted it (HP, Sun) decided to pursue it.)

RFC 2307bis uses a different attribute to represent group membership: ‘member’. Another example:

cn=foundation,cn=groups,cn=accounts,dc=gnome,dc=org
objectClass: posixGroup
objectClass: top
gidNumber: 524
member: uid=foo,cn=users,cn=accounts,dc=gnome,dc=org
member: uid=bar,cn=users,cn=accounts,dc=gnome,dc=org
member: uid=foobar,cn=users,cn=accounts,dc=gnome,dc=org

As you can see, the DN representing the group ‘foundation’ differs between the two examples. That is why FreeIPA comes with a Compatibility plugin (cn=compat), which automatically creates RFC 2307-compliant trees and entries whenever an add / modify / delete operation happens on any of the hosted RFC 2307bis-compliant trees. What’s the point of doing this when we could just stick with RFC 2307bis trees? As its name points out, the Compatibility plugin is there to prevent breakage between the directory server and any clients or software out there still retrieving information by using the ‘memberUid’ attribute as specified in RFC 2307.

FreeIPA's migration tool (ipa migrate-ds) comes with a ‘--schema’ flag you can use to specify which schema the instance you are migrating from was following (values are RFC2307 and RFC2307bis, as you may have guessed already). In the case of GNOME, the complete command we ran (after installing all the relevant tools through ‘ipa-server-install’ and copying the custom schemas under /etc/dirsrv/slapd-$INSTANCE-NAME/schema) was:

ipa migrate-ds --bind-dn=cn=Manager,dc=gnome,dc=org --user-container=ou=people,dc=gnome,dc=org --group-container=ou=groups,dc=gnome,dc=org --group-objectclass=posixGroup ldap://internal-IP:389 --schema=RFC2307

Please note that before running the command, you should make sure the custom schemas you had on the instance you are migrating from are available to the directory server you are migrating your tree to.

More information on the migration process from an existing OpenLDAP instance can be found HERE.

The Migration process – Extending the directory server with custom schemas

One of the other challenges we had to face was extending the available LDAP schemas to include Foundation membership attributes. This operation requires the following changes:

  1. Build the LDIF (which will include two new custom fields: FirstAdded and LastRenewedOn)
  2. Add the LDIF in place under /etc/dirsrv/slapd-$INSTANCE-NAME/schema
  3. Extend the web UI to include the new attributes

I won’t explain how to build an LDIF in this post, but I’m pasting the schema I made to help you get an idea:

attributeTypes: ( 1.3.6.1.4.1.3319.8.2 NAME 'LastRenewedOn' SINGLE-VALUE EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch DESC 'Last renewed on date' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
attributeTypes: ( 1.3.6.1.4.1.3319.8.3 NAME 'FirstAdded' SINGLE-VALUE EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch DESC 'First added date' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
objectClasses: ( 1.3.6.1.4.1.3319.8.1 NAME 'FoundationFields' AUXILIARY MAY ( LastRenewedOn $ FirstAdded ) )

After copying the schema in place and restarting the directory server, extend the web UI:

In /usr/lib/python2.7/site-packages/ipalib/plugins/foundation.py:

from ipalib.plugins import user
from ipalib.parameters import Str
from ipalib import _
import re

def validate_date(ugettext, value):
    # Reject anything that does not match the YYYY-MM-DD syntax.
    if not re.match("^[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[1-2][0-9]|3[0-1])$", value):
        return _("The entered date is wrong, please make sure it matches the YYYY-MM-DD syntax")

# Extend the stock user plugin with the two custom Foundation attributes.
user.user.takes_params = user.user.takes_params + (
    Str('firstadded?', validate_date,
        cli_name='firstadded',
        label=_('First Added date'),
    ),
    Str('lastrenewedon?', validate_date,
        cli_name='lastrenewedon',
        label=_('Last Renewed on date'),
    ),
)

In /usr/share/ipa/ui/js/plugins/foundation/foundation.js:

define([
    'freeipa/phases',
    'freeipa/user'],
    function(phases, user_mod) {

    // helper function: return the element of 'array' whose 'attr' equals 'value'
    function get_item(array, attr, value) {
        for (var i = 0, l = array.length; i < l; i++) {
            if (array[i][attr] === value) return array[i];
        }
        return null;
    }

    var foundation_plugin = {};

    foundation_plugin.add_foundation_fields = function() {

        // Add the custom fields to the 'identity' section of the user details facet.
        var facet = get_item(user_mod.entity_spec.facets, '$type', 'details');
        var section = get_item(facet.sections, 'name', 'identity');

        section.fields.push({
            name: 'firstadded',
            label: 'Foundation Member since'
        });

        section.fields.push({
            name: 'lastrenewedon',
            label: 'Last Renewed on date'
        });

        section.fields.push({
            name: 'description',
            label: 'Previous account changes'
        });

        return true;
    };

    phases.on('customization', foundation_plugin.add_foundation_fields);
    return foundation_plugin;
});

Once done, restart the web server. The next step is migrating all the FirstAdded and LastRenewedOn attributes from MySQL into LDAP, now that our custom schema has been injected.

The relevant MySQL fields were following the YYYY-MM-DD syntax to store the dates, so a little Python script was made to read from MySQL and populate the LDAP attributes. If you are interested or are in a similar situation, you can find it HERE.

The Migration process – Own SSL certificates for HTTPD

As you may be aware, FreeIPA comes with its own certificate tools (powered by Certmonger); that means a CA is created during the ipa-server-install run, and certificates for the various services you provide are then created and signed with it. This is definitely great and removes the burden of maintaining an underlying self-hosted PKI infrastructure. At the same time, this can be a problem for publicly-facing web services, as browsers will start complaining that they don't trust the CA that signed the certificate of the website you are trying to reach.

The problem is not really a problem, as you can specify which certificate HTTPD should use for serving FreeIPA's web UI. The procedure is simple and involves the NSS database at /etc/httpd/alias:

certutil -d /etc/httpd/alias/ -A -n "StartSSL CA" -t CT,C, -a -i sub.class2.server.ca.pem
certutil -d /etc/pki/nssdb -A -n "StartSSL CA" -t CT,C, -a -i sub.class2.server.ca.pem
openssl pkcs12 -inkey freeipa.example.org.key -in freeipa.example.org.crt -export -out freeipa.example.org.p12 -nodes -name 'HTTPD-Server-Certificate'
pk12util -i freeipa.example.org.p12 -d /etc/httpd/alias/

Once done, update /etc/httpd/conf.d/nss.conf with the correct NSSNickname value (which should match the one you entered after ‘-name’ in the third of the above commands).

The Migration process – Equivalent of authorized_keys’ “command”

At GNOME we run several services that require users to log in to specific machines and run a command. At the same time, and for security purposes, we don't want all of these users to reach a shell. Originally we made use of SSH's authorized_keys file to specify the "command" these users should be restricted to. FreeIPA handles public key authentication differently (through the sss_ssh_authorizedkeys binary), which means we had to find an alternative way to restrict groups to a specific command. SSH's ForceCommand came to the rescue; an example, given a group called 'foo':

Match Group foo,!bar
X11Forwarding no
PermitTunnel no
ForceCommand /home/admin/bin/reset-my-password.py

The above Match Group will be applied to all the users of the 'foo' group except the ones that are also part of the 'bar' group. If you are interested in the reset-my-password.py script, which resets a certain user's password and sends a temporary one to the user's registered email address (by checking the mail LDAP attribute), click HERE.

The Migration process – Kerberos host Keytabs

Here at GNOME we still have one or two RHEL 5 hosts hanging around, and SSSD reported a failure when trying to authenticate with the given keytab (generated with RHEL 7 default values) against the KDC running (as you may have guessed) RHEL 7. The issue is simple: RHEL 5 does not support many of the encryption types with which the keytab was encrypted. Apparently the only keytab encryption type currently supported on a RHEL 5 machine is rc4-hmac. Creating a keytab on the KDC accordingly can be done this way:

ipa-getkeytab -s server.example.org -p host/client.example.org -e rc4-hmac -k /root/keytabs/client.example.org.keytab

That should be all for today; I'll make sure to update this post with further details or answers to possible comments.

Testing a NetworkManager VPN plugin password dialog

Testing the password dialog of a NetworkManager VPN plugin is as simple as:

echo -e 'DATA_KEY=foo\nDATA_VAL=bar\nDONE\nQUIT\n' | ./auth-dialog/nm-iodine-auth-dialog -n test -u $(uuid) -i

The above is for the iodine plugin when run from the built source tree. This allows one to test these dialogs, even though one hasn't seen them in ages, since GNOME Shell uses the external UI mode to query for the password.

This blog is flattr enabled.

summing up 63

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • is there love in the telematic embrace? by roy ascott. it is the computer that is at the heart of this circulation system, and, like the heart, it works best when least noticed - that is to say, when it becomes invisible. at present, the computer as a physical, material presence is too much with us; it dominates our inventory of tools, instruments, appliances, and apparatus as the ultimate machine. in our artistic and educational environments it is all too solidly there, a computational block to poetry and imagination. it is not transparent, nor is it yet fully understood as pure system, a universal transformative matrix. the computer is not primarily a thing, an object, but a set of behaviors, a system, actually a system of systems. data constitute its lingua franca. it is the agent of the datafield, the constructor of dataspace. where it is seen simply as a screen presenting the pages of an illuminated book, or as an internally lit painting, it is of no artistic value. where its considerable speed of processing is used simply to simulate filmic or photographic representations, it becomes the agent of passive voyeurism. where access to its transformative power is constrained by a typewriter keyboard, the user is forced into the posture of a clerk. the electronic palette, the light pen, and even the mouse bind us to past practices. the power of the computer's presence, particularly the power of the interface to shape language and thought, cannot be overestimated. it may not be an exaggeration to say that the "content" of a telematic art will depend in large measure on the nature of the interface; that is, the kind of configurations and assemblies of image, sound, and text, the kind of restructuring and articulation of environment that telematic interactivity might yield, will be determined by the freedoms and fluidity available at the interface. highly recommended
  • on the reliability of programs, by e.w. dijkstra. automatic computers are with us for twenty years and in that period of time they have proved to be extremely flexible and powerful tools, the usage of which seems to be changing the face of the earth (and the moon, for that matter!) in spite of their tremendous influence on nearly every activity whenever they are called to assist, it is my considered opinion that we underestimate the computer's significance for our culture as long as we only view them in their capacity of tools that can be used. they have taught us much more: they have taught us that programming any non-trivial performance is really very difficult and i expect a much more profound influence from the advent of the automatic computer in its capacity of a formidable intellectual challenge which is unequalled in the history of mankind. this opinion is meant as a very practical remark, for it means that unless the scope of this challenge is realized, unless we admit that the tasks ahead are so difficult that even the best of tools and methods will be hardly sufficient, the software failure will remain with us. we may continue to think that programming is not essentially difficult, that it can be done by accurate morons, provided you have enough of them, but then we continue to fool ourselves and no one can do so for a long time unpunished
  • "institutions will try to preserve the problem to which they are the solution", the shirky principle
  • if no one reads the manual, that's okay, if you think about it, the technical writer is in an unusual role. users hate the presence of manuals as much as they hate missing manuals. they despise lack of detail yet curse length. if no one reads the help, your position lacks value. if everyone reads the help, you're on a sinking ship. ideally, you want the user interface to be simple enough not to need help. but the more you contribute to this user interface simplicity, the less you're needed
  • 44 engineering management lessons
  • the grain of the material in design, because of the particular characteristics of a specific piece's grain, a design can't simply be imposed on the material. you can "go with the grain" or "go against the grain," but either way you have to understand the grain of the material to successfully design and produce a work. design for technology shouldn't be done separately from the material - it must be done as an intimate and tactile collaboration with the material of technology.

Freedom and Human Rights in the UK

Apparently, Britain has been overcome with a racist frenzy as UKIP secure their first by-election seat in Southend on Sea. Well, Southend on Sea is hardly Britain and I cannot say I am totally surprised. This is Southend we're talking about. Besides that, the UKIP guy who got in was already a well known Tory guy before he changed sides. They've obviously got a lot of media coverage and this seems a bit like a fluke. Maybe I should be more worried about this. It's hard to tell.

Hate and Fear

The obvious concern is that lately the media has been telling British people that their problem is immigration, the sick, the elderly, the disabled and people who are on benefits. Basically, all the most vulnerable groups...

Daily Mail Front Page Headline: TERROR AS GIANT MUSLIM SPIDERS BRING DEADLY EBOLA TO THE UK

OK, so that's not a real headline, but it may as well be. Once again, Russell Brand has summed up my feelings nicely in lots of ways, but Scroobius Pip's comments really clarify why UKIP are so wrong on just about every level.


Video disabled.

What UKIP Might Mean for The General Election

Under a First Past the Post system, the rise of UKIP could mean the fall of the Tories, because they could essentially split the right-wing vote in enough seats to put the opposition in a stronger position to win.

However, it is worth noting that UKIP seem to be stealing only lower-class Tory hearts rather than upper-middle-class ones (who are less anti-immigration), which might mean seats do not get split, since poor people do not tend to live near rich people so much, outside of London anyway. The Tories might just tactically concentrate very hard on gaining enough of the posh seats to win the election, letting UKIP take a few areas like Southend, if they take any at all.

Human Rights

As I said though, I am still not convinced UKIP are a threat. Thus I am far more inclined to get very upset about the thought of the Tories, especially as they are now talking about doing away with the Human Rights Act of 1998 and replacing it with ominous shit.

For those who do not know about the Human Rights Act of 1998, there is a summary available from the Equality and Human Rights Commission, UK.

Some of the above are what's known as "qualified rights", not "absolute rights". For example, the "private and family life" one is not absolute (e.g. because the rights of children not to be abused come before the rights of adults who may not be taking proper care of them). Overall, this Human Rights Act of 1998 thing has been thought through and is actually very important to have.

The Tories also want to make the UN an "advisory" body, removing its powers in the UK. Don't get me wrong, I am not a great believer in the UN's ability to protect human rights in Europe, but they do seem to be better than whatever Cameron is after, for "us".

Europe

I think the Tories want the UK to leave Europe, too. I am not sure I agree with the EU's privacy laws, which forced Google to start removing stuff from the internet; privacy seems to be one of those double-edged swords when it comes to freedom and human rights. That notwithstanding, Europe has influenced some important legislation in the UK, specifically the Equality Act 2010, and again, it is probably a lot better than what Cameron is angling for.

Being able to travel freely is something I am keen not to give up to live in some kind of fascist state that UKIP and the Tories are cooking up in their twisted minds, either.

The Internet

Let nobody forget that the Tories have already blocked parts of the British internet citing porn as an excuse...

Whilst I can empathise with the reasons why a lot of parents were in favour of the blocks, I highly doubt that Thatcher the milk snatcher's reincarnation, David Cameron, gives a crap about protecting kids from adult content.

That said, I think the opponents of the internet block used pretty poor arguments to bring people round. It seems pretty unfair to blame parents for not being able to keep up with the technology of the newer generations, when that really seems like an inevitable result of technical progress. Society absolutely has some role to play in protecting its children from adult content.

Ultimately, it does not matter what I think: the point is that parents were never going to be persuaded by being told that they should get great at computers and watch their kids 24/7, because these are not reasonable demands for society to place on them.

I think the solution might have been quite easily found elsewhere, if anyone had looked, so the opponents of the blocks would have been better advised to try that approach. We didn't. But, to be fair, we were not given much time... My memory is sketchy, but it seemed like nobody knew it was going ahead until it was already set up to happen. Maybe I just was not paying attention. Maybe nobody was. Maybe there is a lesson in there somewhere...

Freedom

Perhaps there is a lot the free software movement could be doing to prevent this sort of thing from eroding freedoms in the UK:

Objectively, it seems pretty obvious that a person is not going to be interested in protecting rights that they have no idea how to use. Why would they be?

Engagement seems like an appropriate word.

October 11, 2014

Recent Reading Responses

Data & Society (which I persist in thinking of as "that New York City think tank that danah boyd is in", in case you want a glimpse of the social graph inside my head) has just published a few papers. I picked up "Understanding Fair Labor Practices in a Networked Age", which summarizes many things well. A point that struck me, in its discussion of Uber and of relational labor:

The importance of selling oneself is a key aspect of this kind of piecemeal or contract work, particular because of the large power differential between management and workers and because of the perceived disposability of workers. In order to be considered for future jobs, workers must maintain their high ratings and receive generally positive reviews or they may be booted from the system.

In this description I recognize dynamics that play out, though less compactly, among knowledge workers in my corner of tech.

This pressure to perform relational labor, plus the sexist expectation that women always be "friendly" and never "abrasive" (including online), further silences women's ability to publicly organize around grievances. Those expectations additionally put us in an authenticity bind, since these circumstances demand a public persona that never speaks critically -- inherently inauthentic. Since genuine warmth, and therefore influence, largely derive from authenticity, this impairs our growth as leaders. And here's another pathway that gets blocked off: since criticizing other people/institutions raises the status of the speaker, these expectations also remove a means for us to gain status.

Speaking of softening abrasive messages, I kept nodding as I read Jocelyn Goldfein's guide to asking for a raise if you're a knowledge worker (especially an engineer) at a company big enough to have compensation bands and levels. I especially liked how she articulated the dilemma of seeking more money -- and perhaps more power -- in a place where ambition is a dirty word (personally I do not consider ambition a dirty word; thank you Dr. Anna Fels), and the sample scripts she offers for softening your manager's emotional reaction to bargaining.

I also kept nodding as I read "Rules for Radicals and Developer Marketing" by Rachel Chalmers. Of course she says a number of things that sound like really good advice and that I should take, and she made me want to go read Alinsky and spend more time with Beautiful Trouble, but she also mentions an attitude I share (mutatis mutandis, namely, I've only been working in tech since ~1998):

I've been in the industry 20 years. Companies come and go, relationships endure. The people who are in the Valley, a lot of us are lifers and the configurations of the groups that we're allied to shift over time. This is a big part of why I'm really into not lying and being generous: because I want to continue working with awesome, smart people, and I don't want to burn them just because they happen to be working for a competitor right now. In 10 years' time, who knows?

Relationships, both within the Valley and with your customer, are impossible to fake, and is really the only social capital you have left when you die.

No segue here! Feel the disruption! (Your incumbent Big Media types are all about smooth experience but with the infernokrusher approach I EXPLODE those old tropes so you can Make Your Own Meaning!)

Mark Guzdial, who thinks constantly about computer science education, mentions, in discussing legitimate peripheral participation:

Newcomers have to be able to participate in a way that's meaningful while working at the edge of the community of practice. Asking the noobs in an open-source project to write the docs or to do user testing is not a form of legitimate peripheral participation because most open source projects don’t care about either of those. The activity is not valued.

This point hit me right between the eyes. I have absolutely been that optimist cheerfully encouraging a newbie to write documentation or write up a user testing report. After reading Guzdial's legitimate critique, I wonder: maybe there are pre-qualifying steps we can take to check whether particular open source projects do genuinely value user testing and/or docs, to see whether we should suggest them to newbies.

Speaking of open source: I frequently recommend Dreaming in Code by Scott Rosenberg. It tells the story of the Chandler open source project as a case study, and uses examples from Chandler's process to explain the software engineering process to readers.

When I read Dreaming in Code several years ago, as the story of Chandler progressed, I noticed how many women popped up as engineers, designers, and managers. Rosenberg addressed my surprise late in the book:

Something very unusual had happened to the Chandler team over time. Not by design but maybe not entirely coincidentally, it had become an open source project largely managed by women. [Mitch] Kapor [a man] was still the 'benevolent dictator for life'... But with Katie Parlante and Lisa Dusseault running the engineering groups, Sheila Mooney in charge of product management, and Mimi Yin as the lead designer, Chandler had what was, in the world of software development, an impressive depth of female leadership.....

...No one at OSAF [Open Source Applications Foundation] whom I asked had ever before worked on a software team with so many women in charge, and nearly everyone felt that this rare situation might have something to do with the overwhelming civility around the office -- the relative rarity of nasty turf wars and rude insult and aggressive ego display. There was conflict, yes, but it was carefully muted. Had Kapor set a different tone for the project that removed common barriers to women advancing? Or had the talented women risen to the top and then created a congenial environment?

Such chicken-egg questions are probably unanswerable....


-Scott Rosenberg, Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest For Transcendent Software, 2007, Crown. pp. 322-323.

I have a bunch of anecdotal evidence that projects whose discussions stay civil attract and retain women better, but I'd love real statistics on that. And in the seven years since Dreaming in Code, I think we haven't amassed enough data points in open source specifically to see whether women-led projects generally feel more civil, which of course means here's where I exhort the women reading this to found and lead projects!

(Parenthetically: Women have been noticing sexism in free and open source software for as long as FOSS has existed, and fighting it in organized groups for 15 or more years. Valerie Aurora first published "HOWTO Encourage Women in Linux" in 2002. And we need everyone's help, and you, whatever your gender, have the power to genuinely help. A man cofounded GNOME's Outreach Program for Women, for instance. And I'm grateful to everyone of every gender who gave to the Ada Initiative this year! With your help, we can -- among other things -- amass data to answer Scott Rosenberg's rhetorical questions. ;-) )

October 10, 2014

Life update

Like many others on planet.gnome, it seems I also don't feel like posting much on my blog any more, since I post almost all major events of my life on social media (or SoMe, as it's for some reason now known in Finland). To be honest, the thought usually doesn't even occur to me anymore. :( Well, anyway! Here is a brief summary of what's been up for the last many months:
  • Got divorced. Yeah, not nice at all, but life goes on! At least I got to keep my lovely cat.

  • It's been almost a year (14 days less) since I moved to London. In a way it was good that I was in a new city at the time of the divorce, as it's an opportunity to start a new life. I have made some cool new friends, mostly the GNOME gang here.

    London has its quirks, but overall I'm pretty happy to be living here. One big issue is that most of my friends are in Finland, so I miss them very much. Hopefully, in time I'll make a lot more friends in London, and my friends from Finland will visit me too.

    The best thing about London is the weather! No, I'm not joking at all. Not only is it a big improvement compared to Helsinki, the rumours that "it's always raining in London" are greatly (I can't stress this word enough) exaggerated.
  • I got my eyes Z-LASIK'ed so no more glasses!

  • Started taking:

    • Driving lessons. Failed the first driving test today. Having learnt what I did wrong, I'm sure I won't repeat the same mistakes next time and will pass.
    • Helicopter flying lessons. Yes! I'm not joking. I grew up watching Airwolf, and ever since then I've been fascinated by helicopters and wanted to fly them, but never got around to doing it. It's very expensive, as you'd imagine, so I'm only taking two lessons a month. At this pace, I should have my PPL(H) by the end of 2015.

      Turns out that I'm very good at one thing that most people find very challenging to master: hovering. The rest isn't hard either, in practice; theory is the biggest challenge for me. Here is the video recording of the 15-minute trial lesson I started with.

Always Follow the Money

Selena Larson wrote an article describing the Male Allies Plenary Panel at the Anita Borg Institute's Grace Hopper Celebration on Wednesday night. There is a video of the panel available (that's the YouTube link; the links on the Anita Borg Institute's website don't work with Free Software).

Selena's article pretty much covers it. The only point that I thought useful to add was that one can “follow the money” here. Interestingly enough, Facebook, Google, GoDaddy, and Intuit were all listed as top-tier sponsors of the event. I find it a strange correlation that not one man on this panel is from a company that didn't sponsor the event. Are there no male allies to the cause of women in tech worth hearing from who work for companies that, say, don't have enough money to sponsor the event? Perhaps that's true, but it's somewhat surprising.

Honest US Congresspeople often say that the main problem with corruption of campaign funds is that those who donate simply have more access and time to make their case to the congressional representatives. They aren't buying votes; they're buying access for conversations. (This was covered well in This American Life, Episode 461).

I often see a similar problem in the “Open Source” world. The loudest microphones can be bought by the highest bidder (in various ways), so we hear more from the wealthiest companies. The amazing thing about this story, frankly, is that buying the microphone didn't work this time. I'm very glad the audience refused to let it happen! I'd love to see a similar reaction at the corporate-controlled “Open Source and Linux” conferences!

Update later in the day: The conference I'm commenting on above is the same conference where Satya Nadella, CEO of Microsoft, said that women shouldn't ask for raises, and Microsoft is also a top-tier sponsor of the conference. I'm left wondering if anyone who spoke at this conference didn't pay for the privilege of making these gaffes.

The End For Pylyglot

Background

It was around 2005 when I started doing translations for Free and Open-Source Software. Back then I was warmly welcomed to the Ubuntu family and quickly learned all there was to know about using their Rosetta online tool to translate and/or review existing translations for the Brazilian Portuguese language. I spent so much time doing it, even during working hours, that eventually I sort of “made a name for myself” and made my way up to the upper layers of the Ubuntu Community echelon.

Then I “graduated” and started doing translations for the upstream projects, such as GNOME, Xfce, LXDE, and Openbox. I took on more responsibilities, learned to use Git and make commits for myself as well as for other contributors, and strived to unify all Brazilian Portuguese translations across as many different projects as possible. Many discussions were had, (literally) hundreds of hours were spent going through hundreds of thousands of translations for hundreds of different applications, none of it bringing me any monetary or financial advantage, but all done for the simple pleasure of knowing that I was helping make FOSS applications “speak” Brazilian Portuguese.

I certainly learned a lot through the experience of working on these many projects… sometimes I made mistakes, other times I “fought” alone to make sure that standards and procedures were complied with. All in all, looking back I have only one regret: not being nominated to become the leader of the Brazilian GNOME translation team.

Having handled 50% of the translations for one of the GNOME releases (the other 50% was handled by a good friend, Vladimir Melo, while the leader did nothing to help) and spent much time making sure that the release would go out the door 100% translated, I really thought I’d be nominated to become the next leader. Not that I felt I needed a ‘title’ to show off to other people, but in a way I wanted to feel that my peers acknowledged my hard work and commitment to the project.

Seeing other people, even people with no previous experience, being nominated by the current leader to replace him was a slap in the face. It really hurt me… but I made sure to be supportive and continued to work just as hard. I guess you could say that I lived and breathed translations, my passion not knowing any limits or when to stop…

But stop I eventually did, several years ago, when I realized how hard it was to land a job that would allow me to support my family (back then I had 2 small kids) and continue to do the thing I cared the most about. I confess that I even went through a series of job interviews for the translation role that Jono Bacon, Canonical’s former community manager, was trying to fill, but in the end things didn’t work out the way I wanted. I also flirted with a similar role at MeeGo, but since they wanted me to move to the West Coast I decided not to pursue it (I had also fallen in love with my then-current job).

Pylyglot

As a way to keep myself somewhat involved with the translation communities and at the same time learn a bit more about the Django framework, I then created Pylyglot, “a web based glossary compendium for Free and Open Source Software translators heavily inspired on the Open-tran.eu web site… with the objective to ‘provide a concise, yet comprehensive compilation of a body of knowledge’ for translators derived from existing Free and Open Source Software translations.”

Pylyglot

I have been running this service on my own, paying the domain registration and database costs out of my own pocket for a while now, and I now find myself facing the dilemma of renewing the domain registration to keep Pylyglot alive for another year… or retiring it and ending once and for all my relationship with FOSS translations.

Having spent the last couple of months thinking about it, I have arrived at the conclusion that it is time to let this chapter of my life rest. Though the US$140/year that I won’t be spending won’t make me any richer, I don’t foresee myself maintaining or spending any time improving the project. So this July 21st, 2014, Pylyglot will close its doors and cease to exist in its current form.

To those who knew about Pylyglot and used it and, hopefully, found it to be useful, my sincere thanks. To those who supported my idea and the project itself, whether by submitting code patches, building the web site or just giving me moral support, thank you!

FauxFactory 0.3.0

Took some time from my vacation and released FauxFactory 0.3.0, making it Python 3 compatible and adding a new generate_utf8 method (plus some nice tweaks and code clean-up).

As always, the package is available on Pypi and can be installed via pip install fauxfactory.

If you have any constructive feedback or suggestions, or want to file a bug report or feature request, please use the GitHub page.

October 09, 2014

CUDA Programming

I was invited to attend a CUDA workshop, an event promoted by DIA PUCP. Thanks to the professor, Dr. Manuel Ujaldon, who trained us for about 12 hours using C. We used NVIDIA's cloud to practice, and we did exercises to optimise vector functions. We covered the concepts of registers, blocks and kernels; compute-bound versus memory-bound algorithms; shared memory and tiling; GPU/CPU technology; the CUDA software releases (v6 and v6.5); and the CUDA hardware generations they are compatible with: Tesla (2008, compute capability 1.x, 8 cores per multiprocessor), Fermi (2010, 2.x, 32 cores), Kepler (2012, 3.0 and 3.5, 192 cores) and Maxwell (2014, 5.x, 128 cores), with the Pascal architecture still to come.

Screen Shot 2014-10-09 at 3.23.00 PM

We started with this device:

CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: “GRID K520″
  CUDA Driver Version / Runtime Version          6.0 / 6.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 4096 MBytes (4294770688 bytes)
  ( 8) Multiprocessors, (192) CUDA Cores/MP:     1536 CUDA Cores
  GPU Clock rate:                                797 MHz (0.80 GHz)
  Memory Clock rate:                             2500 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           0 / 3
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.0, CUDA Runtime Version = 6.0, NumDevs = 1, Device0 = GRID K520
Result = PASS

We must analyse whether to use a fine-grain or a coarse-grain strategy. In our first example fine-grain was convenient because we did not need much memory. But if we use coarse-grain, we sacrifice parallelism: not as many blocks are available, so we do not have enough spare blocks to keep the SMXs busy. E.g. 128×128 is 2^14 = 16384 elements; with 256 threads per block that gives 64 blocks to distribute over the SMXs, where 16 blocks are equivalent to 1 block for each SMX.
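
As a concrete illustration of that thread/block arithmetic (my own sketch, not code from the workshop), a vector-add kernel and its launch configuration look roughly like this:

#include <cuda_runtime.h>

// Each thread handles one element; the grid covers all n elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // guard the last, possibly partial, block
        c[i] = a[i] + b[i];
}

// Host side: 16384 elements / 256 threads per block = 64 blocks.
// const int threads = 256;
// const int blocks  = (n + threads - 1) / threads;
// vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);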

Thanks to Genghis Rios for organising this workshop. More pictures here >>>


Filed under: Education, GNOME, GNU/Linux/Open Source, τεχνολογια :: Technology, Programming

October 08, 2014

Wed 2014/Oct/08

  • Growstuff's Crowdfunding Campaign for an API for Open Food Data

    During GUADEC 2012, Alex Skud Bailey gave a keynote titled What's Next? From Open Source to Open Everything. It was about how principles like de-centralization, piecemeal growth, and shared knowledge are being applied in many areas, not just software development. I was delighted to listen to such a keynote, which validated my own talk from that year, GNOME and the Systems of Free Infrastructure.

    During the hallway track I had the chance to talk to Skud. She is an avid knitter and was telling me about Ravelry, a web site for people who knit/crochet. They have an excellent database of knitting patterns, a yarn database, and all sorts of deep knowledge on the craft gathered over the years.

    At that time I was starting my vegetable garden at home. It turned out that Skud is also an avid gardener. We ended up talking about how it would be nice to have a site like Ravelry, but for small-scale food gardeners. You would be able to track your own crops, but also consult about the best times to plant and harvest certain species. You would be able to say how well a certain variety did in your location and climate. Over time, by aggregating people's data, we would be able to compile a free database of crop data, local varieties, and climate information.

    Growstuff begins

    Skud started coding Growstuff from scratch. I had never seen a project start from zero lines of code and be run in an agile fashion for absolutely everything, and I must say: I am very impressed!

    Every single feature runs through the same process: definition of a story, pair programming, integration. Newbies are encouraged to participate. They pair up with a more experienced developer, and they get mentored.

    They did that even for the very basic skeleton of the web site: in the beginning there were stories for "the web site should display a footer with links to About and the FAQ", and "the web site should have a login form". I used to think that in order to have a collaboratively-developed project, one had to start with at least a basic skeleton, or a working prototype — Growstuff proved me wrong. By having a friendly, mentoring environment with a well-defined process, you can start from zero-lines-of-code and get excellent results quickly. The site has been fully operational for a couple of years now, and it is a great place to be.

    Growstuff is about the friendliest project I have seen.

    Local crop data

    Tomato heirloom varieties

    I learned the basics of gardening from a couple of "classic" books: the 1970s books by John Seymour which my mom had kept around, and How to Grow More Vegetables, by John Jeavons. These are nominally excellent — they teach you how to double-dig to loosen the soil and keep the topsoil, how to transplant fragile seedlings so you don't damage them, how to do crop rotation.

    However, their recommendations on garden layouts or crop rotations are biased towards the author's location. John Seymour's books are beautifully illustrated, but are about the United Kingdom, where apples and rhubarb may do well, but would be scorched where I live in Mexico. Jeavons's book is biased towards California, which is somewhat closer climate-wise to where I live, but some of the species/varieties he mentions are practically impossible to get here — and, of course, species which are everyday fare here are completely missing in his book. Pity the people outside the tropics, for whom mangoes are a legend from faraway lands.

    The problem is that the books lack knowledge of good crops for wherever you may live. This is the kind of thing that is easily crowdsourced, where "easily" means a Simple Matter Of Programming.

    An API for Open Food Data

    Growstuff has been gathering crop data from people's use of the site. Someone plants spinach. Someone harvests tomatoes. Someone puts out seeds for trade. The next steps are to populate the site with fine-grained varieties of major crops (e.g. the zillions of varieties of peppers or tomatoes), and to provide an API to access planting information in a convenient way for analysis.

    Right now, Growstuff is running a fundraising campaign to implement this API — allowing developers to work on it full-time, instead of scraping the hours out of their “free time”.

    I encourage you to give money to Growstuff's campaign. These are good people.

    To give you a taste of the non-trivialness of implementing this, I invite you to read Skud's post on interop and unique IDs for food data. This campaign is not just about adding some features to Growstuff; it is about making it possible for open food projects to interoperate. Right now there are various free-culture projects around food production, but little communication between them. This fundraising campaign attempts to solve part of that problem.

    I hope you can contribute to Growstuff's campaign. If you are into local food production, local economies, crowdsourced databases, and that sort of thing — these are your people; help them out.

    Resources for more in-depth awesomeness

October 07, 2014

Probing with Gradle

Up until now, Probe relied on dynamic view proxies generated at runtime to intercept View calls. Although very convenient, this approach greatly affects the time to inflate your layouts—which limits the number of use cases for the library, especially in more complex apps.

This is all changing now with Probe’s brand new Gradle plugin which seamlessly generates build-time proxies for your app. This means virtually no overhead at runtime!

Using Probe’s Gradle plugin is very simple. First, add the Gradle plugin as a dependency in your build script.

buildscript {
    ...
    dependencies {
        ...
        classpath 'org.lucasr.probe:gradle-plugin:0.1.3'
    }
}

Then apply the plugin to your app’s build.gradle.

apply plugin: 'org.lucasr.probe'

Probe’s proxy generation is disabled by default and needs to be explicitly enabled on specific build variants (build type + product flavour). For example, this is how you enable Probe proxies in debug builds.

probe {
    buildVariants {
        debug {
            enabled = true
        }
    }
}

And that’s all! You should now be able to deploy interceptors on any part of your UI. Here’s how you could deploy an OvermeasureInterceptor in an activity.

public final class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // Deploy the interceptor before the layout is inflated,
        // so the generated proxies are in place.
        Probe.deploy(this, new OvermeasureInterceptor());
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main_activity);
    }
}

While working on this feature, I changed DexMaker to be an optional dependency, i.e. you now have to explicitly add DexMaker as a build dependency in your app in order to keep using the runtime-generated proxies.
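
If you do still want the runtime proxies, declaring DexMaker would look something like this; the artifact coordinates and version below are my assumption, so check Maven Central for the ones Probe expects:

dependencies {
    // DexMaker is optional now; add it back explicitly to keep
    // using runtime-generated proxies. (Coordinates assumed.)
    compile 'com.google.dexmaker:dexmaker:1.1'
}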

This is my first Gradle plugin. There’s definitely a lot of room for improvement here. These features are available in the 0.1.3 release in Maven Central.

As usual, feedback, bug reports, and fixes are very welcome. Enjoy!

The GNOME Infrastructure is now powered by FreeIPA!

As previously announced here, the GNOME Infrastructure has switched to a new Account Management System, which is reachable at https://account.gnome.org. All the details follow.

Introduction

It’s been a while since someone actually touched the underlying authentication infrastructure that powers the GNOME machines. The very first setup was originally configured by Jonathan Blandford (jrb), who set up an OpenLDAP instance with several customized schemas (pServer fields in the old CVS days, pubAuthorizedKeys and GNOME-modules-related fields in recent times).

While the OpenLDAP server was living on the GNOME machine called clipboard (aka ldap.gnome.org), the clients were configured to synchronize users, groups and passwords through the nslcd daemon. After several years Jeff Schroeder joined the Sysadmin Team, and during one cold evening (the date being Tue, February 1st 2011) he spent some time configuring SSSD to replace nslcd. What surely convinced Jeff to adopt SSSD (a very new but promising software at the time, as the first release happened right before 2010’s Christmas), and as the commit log also states (“New sssd module for ldap information caching”), was the one important feature nslcd was missing: caching.

It was enough for a user to log in once and the ‘/var/lib/sss/db’ directory was populated with their login information, so the daemon in charge of picking up login details did not have to query the LDAP server every single time a request was made. This feature has definitely helped on many occasions, especially when the LDAP server was down for some reason and sysadmins needed to access a specific machine or service: without SSSD that was never going to work, and sysadmins would probably have been locked out of the machines they were supposed to manage (unless you still had ‘/etc/passwd’, ‘/etc/group’ and ‘/etc/shadow’ entries as a fallback).
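
To give an idea of what that looks like in practice, here is a minimal sssd.conf sketch (not our actual configuration; the domain name and LDAP URI below are placeholders):

[sssd]
config_file_version = 2
services = nss, pam
domains = gnome

[domain/gnome]
# Identities and authentication come from LDAP...
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.gnome.org
# ...but credentials and lookups are cached locally, so logins
# keep working even while the LDAP server is unreachable.
cache_credentials = True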

Things were working just fine except for a few downsides that appeared later on:

  1. the web interface (view) of our LDAP user database was managed by Mango, an outdated tool, which many wanted to rewrite in Django, and which slowly became a huge dinosaur nobody ever wanted to look into again
  2. the Foundation membership information was managed through a MySQL database, so we had two databases and two sets of users unrelated to each other
  3. users were not able to modify their own account information; even a single e-mail change required them to mail the GNOME Accounts Team, which then had to authenticate the request and finally update the account.

Today’s infrastructure changes finally fix the issues outlined in points 1, 2 and 3.

What has changed?

The GNOME Infrastructure is now powered by Red Hat’s FreeIPA, which bundles several FOSS components into one big “bundle”, all surrounded by an easy and intuitive web UI that helps users update their account information on their own, without needing the Accounts Team or any other administrative entity. Users will also find two custom fields on their “Overview” page: “Foundation Member since” and “Last Renewed on date”. As you may have gathered already, we finally managed to migrate the Foundation membership database into LDAP itself, storing this information in one place once and for all. As a side note, it is possible that some users who were Foundation members in the past won’t find any details in the Foundation fields outlined above. That is expected, as we were only able to migrate the current and old Foundation members that had an LDAP account registered at the time of the migration. If that’s your case and you would still like the information to be stored in the new setup, please get in contact with the Membership Committee and state so.

Where can I get my first login credentials?

Let’s make a little distinction between users that previously had access to Mango (usually maintainers) and users that didn’t. If you used to access Mango, you should be able to log in to the new Account Management System by entering your GNOME username and the password you used for logging in to Mango. (After logging in for the very first time you will be prompted to update your password; please choose a strong password, as this account will be unique across the whole GNOME Infrastructure.)

If you never had access to Mango, you lost your password, or the first time you read the word Mango in this post you thought “why is he talking about a fruit now?”, you can reset your password with the following command:

ssh -l yourgnomeuserid account.gnome.org

The command starts an SSH connection between you and account.gnome.org; once authenticated (with the SSH key you previously registered on our Infrastructure), it triggers a command that sends a brand new password to the e-mail address registered for your account. From my tests it seems GMail flags the e-mail as a phishing attempt, probably because the body contains the word “password” twice. So if the e-mail doesn’t appear in your inbox, please double-check your Spam folder.

Now that Mango is gone how can I request a new account?

With Mango we used to have a form that automatically e-mailed the maintainer of the selected GNOME module, who would then approve or reject the request. In the case of a positive vote from the maintainer, the Accounts Team would create the account itself.

With the recent introduction of a commit robot directly on l10n.gnome.org, the number of account requests has dropped. In addition, users will now be able to perform pretty much all the needed maintenance on their accounts themselves. That said, and while we will probably build a form in the future, we feel that requesting an account can simply be done by mailing the Accounts Team itself, which will mail the maintainer of the respective module and create the account. As just said, the number of account creations has become very low and the queue is currently clear. The documentation has been updated to reflect these changes at:

https://wiki.gnome.org/AccountsTeam
https://wiki.gnome.org/AccountsTeam/NewAccounts

I used to have access to a specific service but I don’t anymore, what should I do?

The migration of all the user data and ACLs has been massive, and I’ve been spending a lot of time reviewing the existing HBAC rules, trying to spot possible errors or misconfigurations. If you find yourself unable to access a certain service you could use in the past, please get in contact with the Sysadmin Team. All the ways to contact us are listed at https://wiki.gnome.org/Sysadmin/Contact.

What is still missing?

Now that the Foundation membership information has been moved to LDAP, I’ll be looking at porting some of the existing membership scripts to it. What I have already ported are the welcome e-mails for new members and for existing members who renew.

The next step will be generating the membership page from LDAP (to populate http://www.gnome.org/foundation/membership) and porting the your-membership-is-going-to-lapse e-mails that have been sent until today.

Other news – /home/users mount on master.gnome.org

You will notice that logging in to master.gnome.org results in your home directory being empty. Don’t worry, you did not lose any of your files; master.gnome.org is now hosting your home directories itself. As you may be aware, adding files to the public_html directory on master used to result in them appearing in your people.gnome.org/~userid space. That was, unfortunately, expected, as both master and webapps2 (the machine serving people.gnome.org’s webspaces) were mounting the same GlusterFS share.

We wanted to prevent that behaviour, as we want to know who has access to which resource and where. From today, master’s home directories are there just as a temporary spot for your tarballs: just scp them and run ftpadmin against them, that should be all you need from master. If you are interested in receiving or keeping your people.gnome.org webspace, please mail <accounts AT gnome DOT org> stating so.

Other news – a shiny and new error 500 page has been deployed

Thanks to Magdalen Berns (magpie), a new error 500 web page has been deployed on all the Apache instances we host. The page contains an iframe of status.gnome.org and will appear every time the web server behind the service you are trying to reach is unavailable for maintenance or other reasons. While I hope you won’t see the page that often, you can still enjoy it at https://static.gnome.org/error-500/500.html. Make sure to whitelist status.gnome.org in your browser, as the iframe currently loads without HTTPS (the service is currently hosted on OpenShift, which provides us with a *.rhcloud.com wildcard certificate, differing from the CN the browser would expect).

Updates

UPDATE on status.gnome.org’s SSL certificate: the certificate has been provisioned, and the 500 page should now be displayed correctly with no warnings from your browser.

UPDATE from Adam Young on Kerberos ports being closed on many DC’s firewalls:

The next version of upstream MIT Kerberos will have support for fetching a ticket via port 443 and marshalling the request over HTTPS. We’ll need to run a proxy on the server side, but we should be able to make it work:

Read up here
http://adam.younglogic.com/2014/06/kerberos-firewalls

October 06, 2014

Building GStreamer for Mac OS X and iOS

As part of the 1.4.3 release of GStreamer, I helped the team by making the OS X and iOS builds. The process is easy but involves a long sequence of steps, so it is worth sharing here in case you want to build your own GStreamer for either of these platforms.

1. First, you need to download CMake
http://www.cmake.org/files/v3.0/cmake-3.0.2-Darwin-universal.dmg

2. Add CMake to your PATH
$ export PATH=$PATH:/Applications/CMake.app/Contents/bin

3. Prepare the destination (as root)
$ mkdir /Library/Frameworks/GStreamer.framework
$ chown user:user /Library/Frameworks/GStreamer.framework


4. Check out the GStreamer release code
$ git clone git://anongit.freedesktop.org/gstreamer/sdk/cerbero
$ cd cerbero
$ git checkout -b 1.4 origin/1.4


5. Pin the commits to build
Edit config/osx-universal.cbc so that it contains the following:

prefix='/Library/Frameworks/GStreamer.framework/Versions/1.0'

recipes_commits = {
'gstreamer-1.0' : '1.4.3',
'gstreamer-1.0-static' : '1.4.3',
'gst-plugins-base-1.0' : '1.4.3',
'gst-plugins-base-1.0-static' : '1.4.3',
'gst-plugins-good-1.0' : '1.4.3',
'gst-plugins-good-1.0-static' : '1.4.3',
'gst-plugins-bad-1.0' : '1.4.3',
'gst-plugins-bad-1.0-static' : '1.4.3',
'gst-plugins-ugly-1.0' : '1.4.3',
'gst-plugins-ugly-1.0-static' : '1.4.3',
'gst-libav-1.0' : '1.4.3',
'gst-libav-1.0-static' : '1.4.3',
'gnonlin-1.0' : '1.2.1',
'gnonlin-1.0-static' : '1.2.1',
'gst-editing-services-1.0' : '1.2.1',
'gst-rtsp-server-1.0' : '1.4.3',
}


6. Run the bootstrap
$ ./cerbero-uninstalled bootstrap
$ echo "allow_parallel_build = True" > ~/.cerbero/cerbero.cbc


7. Run the build for OS X. Patience, it needs to build ~80 modules.
$ ./cerbero-uninstalled -c config/osx-universal.cbc package gstreamer-1.0

8. Run the build for iOS. Some extra steps are necessary for this build.
$ ./cerbero-uninstalled -c config/cross-ios-universal.cbc buildone gettext libiconv
$ ./cerbero-uninstalled -c config/cross-ios-universal.cbc package gstreamer-1.0
$ ./cerbero-uninstalled -c config/cross-ios-universal.cbc buildone gstreamer-ios-templates

After GUADEC 2014

I was busy immediately after GUADEC, so I haven’t blogged for a long time since then.

I went to France two months ago to attend GUADEC 2014. It was my first time out of my country, and there were some small problems along the way (like getting lost a few times), but everything ended well.

First, I stayed in Paris for a day before going to Strasbourg. I only went to see the Eiffel Tower because I didn’t have enough time that day.

Then the next day, a day before GUADEC started, I took the morning train to Strasbourg. I stayed at the FEC during GUADEC, and attended the pre-registration event that night.

During the core days, I spent most of my time attending talks, mostly about GStreamer and GTK+, except during the afternoon of the first day, when I volunteered as a runner.

I delivered a lightning talk about my GSoC internship at the end of the second day. The GNOME Music developers in attendance (me, Vadim and Sai) had a short BoF near the end of the third day.

Since we weren’t able to have a BoF after the core days, I spent some of my time walking around Strasbourg. On the last day, Sai and I coded in one of the BoF rooms, and we talked with Allan Day about how we could improve the user interface of GNOME Music. Some of the ideas are improvements to the Search view, but I wasn’t able to implement the changes before the UI freeze, so they will probably show up in the next version. I helped clean up the venue afterwards, then left to catch the train to the airport.

I wasn’t able to talk much during GUADEC because I’m not used to talking to foreigners (I sometimes can’t understand what other people are saying because of differences in accent) and I’m not confident in my English-speaking skills. But even with that problem, I’m very happy that I was able to attend GUADEC and meet many developers in person! Thank you to the GNOME Foundation for the sponsorship! :D

As for the reason I wasn’t able to post about GUADEC: enrollment at our university started a day after I arrived. I wasn’t around during the pre-enlistment period, so I had to spend the first two weeks of classes looking for subjects and asking professors to accept me even though their classes were overloaded.

Then, a few hours before I locked my subjects to finish the registration/enrollment, I learned from one of my professors that I had broken an internal (department) rule because I failed an important subject last semester, so I have to focus more on my studies from now on. However, I just moved to the dormitories last week, freeing up the 5 hours I used to spend on transportation every day, so I’ll probably be back to contributing soon.

GNOME 3.14

It's been ten days since 3.14 was published, and what have I been up to? Parties, mostly. It started even before the actual release date with our biannual Brussels gathering, organized by Guillaume of Empathy fame; a few beers at the Chaff, a nice Lebanese restaurant, then more beers in the hidden bar of a youth hostel. A quiet evening, then back to work to make the release happen (e.g. testing stuff and pestering a few people for tarballs, as the technical release process was handled by Matthias).

Then on Thursday morning I went back to Strasbourg, as Alexandre was also having a release party; around ten people showed up for a dinner with Alsatian dishes I couldn't spell or pronounce. It was also the unannounced start of a series of podcasts Alexandre is preparing, so we discussed the GNOME project with microphones on, between courses and dessert.

tables at release party

Strasbourg, September 25th, 2014

My plan was then to go to Lyon, but they had already had their party, on Tuesday evening, and that's how my tour of GNOME parties was cut short.

Fortunately that idea of a tour was just an excuse to get on the road^Wtracks, as friends were having a big party in Ariège, in the south of France, on Saturday night. And while some computer-related conversations popped up, it was mostly a GNOME-free night, which later continued into a GNOME-free week as I joined an old friend and lent a hand with his market gardening, in beautiful Périgord.

market gardening in Périgord

Savignac-les-Églises, October 1st, 2014

I am now back and enjoying 3.14, a quality release, with a very strong GTK+, applications converging on the GNOME 3 features (headerbars, popovers...), and the long-missing piece: updated human interface guidelines (thanks Allan!).

Boston Summit 2014

I'll be attending the Boston Summit next weekend. Those that attend will get to see Builder demos. Hope to see you there!

October 05, 2014

Shellshock will happen again

As usual, I’m a month late: the big Bash bug known as Shellshock has come and gone, and the world was confused as to why this ever happened in the first place. It’s been fixed for a few weeks now. The questions have started: Why did nobody spot this earlier? Can we prevent it? Are the maintainers overworked and underfunded? Should we donate to the FSF? Should we switch to another shell by default? Can we ever trust bash again?

During the whole thing, there’s a big piece of evidence that I didn’t see anybody point out. And I think it helps answer all of these questions. So here it is.

I present to you the upstream git log for bash: http://git.savannah.gnu.org/cgit/bash.git/log/

Every programmer who has just clicked that link is now filled with disgust and disappointment.

It’s all crystal clear now: Nobody would have spotted this earlier. No, we can’t really prevent it. No, the maintainers aren’t overworked and underfunded. No, we shouldn’t donate to the FSF. Perhaps we should switch to another shell. No, we cannot trust bash. Not until a serious change in its management comes along.

For those of you who aren’t programmers, you might be staring at that page, not quite understanding what it all means. And that’s OK. Let me help explain it to you.

There’s a saying in the open-source development community: “With enough eyeballs, all bugs are shallow”. I don’t believe in it as strongly as I used to, but I think there’s some truth to it. It can be found in other disciplines as well: in science, it’s known as “peer-review”, where all papers and discoveries should be rigorously double-checked by peers to make sure you didn’t make any mistakes. In other sorts of writing, the person reviewing it is the editor. Basically, “have somebody else double-check your work”.

The issue with this, though, is that you need enough eyeballs to double-check your work. And while it was assumed before that all open-source software had enough eyeballs, that doesn’t seem to be the case. That said, you can certainly design and maintain a software project in certain ways to attract enough eyeballs. And unfortunately, bash isn’t doing these things.

First of all, you can see that there are zero eyeballs on the code: one person, Chet Ramey, is writing the code, and nobody double-checks it. With only one developer, we might assume there is no big motivation to do code cleanups or make the codebase accessible to anybody other than Chet, since nobody else is working on it. And that’s true. This makes the eyeballs wander elsewhere.

But, in fact, it isn’t even true that only one person writes the code: Florian Weimer of the Red Hat Security Team developed multiple fixes for the Shellshock bug, but his work was included in bash uncredited. Developers really need to be credited for their work. This makes the eyeballs wander elsewhere.

The code isn’t all that actively developed, either. At the bottom of that page, we see dates and times from 2012. It seems like nobody really cares about this code anymore, and nobody is trying to fix bugs and make it modern. This makes the eyeballs wander elsewhere.

There are no detailed descriptions of what changed between versions. Which commits in that log are serious and fixed CVEs, and which just fix minor documentation bugs? It’s impossible to tell. This makes the eyeballs wander elsewhere.

And even with the corresponding code change in front of you, it can be difficult to tell whether a specific commit is an important security fix, a new feature, or a minor bug fix. There’s no explanation in the commit message for why a change was made, nor any sort of changelog, which makes it hard for people redistributing and patching bash to know which fixes are important and which aren’t. The eyeballs will wander elsewhere.

In comparison, look at the commit log for the Linux kernel. There’s a large number of different people contributing, and all of them explain what changes they make and why they’re making them. To use a recent example (at the time of this writing), this NFS change describes in perfect detail why the change was made (for compatibility with Solaris hosts), and includes a link to a bug report with further information from debugging. As a result, even though bash is more commonly used and included in more things than the Linux kernel itself, the Linux kernel has more eyeballs and more developers.

So, what should we do? How should we fix this situation? I don’t really know. Moving to a new shell isn’t really a solution, and neither is a fork of bash. The best case scenario would be for bash to indeed change its development practices to be more like the Linux kernel, and adopt a thriving community of its own. I don’t have enough power or motivation to enact such a change. I can only hope that I can convince enough people of the right way to maintain a project.

Perhaps, Chet, if you’re out there, do you want to talk about it? This is a discussion we really should be having about the future of your project, and your responsibilities.

Cross-platform keyboard issues

We often receive bug reports from Windows or OS X users with keyboard problems. For example, on Windows the keys ” and ‘ come out as ¨ and ´. On OS X, the square brackets [] cannot be typed on a German keyboard layout (see https://bugzilla.gnome.org/show_bug.cgi?id=737547), and curly braces cannot be typed on an AZERTY layout (see https://bugzilla.gnome.org/show_bug.cgi?id=692721).

From what I understand, this is caused by GTK+ handling the keyboard events itself, instead of waiting for the key value that the operating system provides.

Is there a way this can be fixed, either in Bluefish or in GTK+?


[pgo] mrmcd darmstadt DOM-based XSS

After last year’s fabulous event, I was really looking forward to this year’s mrmcd in Darmstadt, Germany. It outgrew last year’s edition and had probably around 250 to 300 people attending. Maybe even more. In fact, 450 clients generated 423 GB traffic during the conference which lasted 60 hours or so. That’s around 2MB/s. That’s […]

October 04, 2014

Sponsorships for GUADEC 2014

We had a grand total of 59 sponsorship applications for GUADEC (up 5 on 2013) and a budget of €30,000 (approximately $43,000). The requests amounted to $63,799, which we calculated would be more realistically $61,899 after verifying travel and accommodation costs. We were able to offer sponsorships amounting to $39,295 to 58 applicants, and we helped find an alternative source of sponsorship for the one applicant who was not offered sponsorship.

The applicants consisted of 32 Foundation members, 11 speakers, 24 GSoC students, 2 OPW interns and 1 journalist.

The first application was received on the 12th of May (thank you for being organised) and seven applications were late (one of which was over a month late).

Some people who couldn’t make it to GUADEC did not tell us so beforehand. Unfortunately, the lack of communication cost the Foundation a little bit of money, as we did not have a chance to cancel the rooms which had been booked for those people. At a time when the sponsorship budget is very tight, this could have been handled with more consideration.

9 people sent in their receipts past the deadline.

A fair few people used the plain text application form when sending in their sponsorship requests, which I find easier to process than the LibreOffice form as I don’t need to open it in another application.

In somewhat related news, the Travel Committee is now verifying considerably more stringently that the sponsored attendees for all events fulfil their duties.

(André is always happy when the Foundation can help people come to GUADEC.)

An update from the Pitivi 2014 summer battlefront

Hello gentle readers! You may have been wondering what has been going on since the 0.93 release and the Pitivi fundraising campaign. There are a few reasons why we’ve been quiet on the blogging side this summer:

  • Mathieu and Thibault have been working hard to bring us towards “1.0 quality”, improving and stabilizing various parts of GStreamer to make the backend of Pitivi more reliable (more details on this further below). They preferred to write code rather than spend their time doing marketing/fundraising. This is understandable; it is a better use of our scarce specialized resources.
  • Personally, I have been juggling many obligations (my daily business, preparing for the conference season, serving on the board of the GNOME Foundation, and Life in General), which left me with pretty much no time or energy for development or marketing-related activities on Pitivi, just enough to participate in some discussions and help a bit with administration and co-mentorship. I did not have time to research blogging material about what others were doing, hence the lack of status updates recently.

Now that I finally have a little bit of time on my hands, I will provide you with the overdue high-level status update from the trenches.

Summer Wars. That’s totally how coding happens in the real world.

GUADEC, status of the 2014 fundraiser

For the curious among you, my account of GUADEC 2014 is here. Among the multiple presentations I gave there, my main talk was about Pitivi. I touched upon the status of the fundraiser in my talk; however, the recordings are not yet available, so I’ll share some select notes on this topic here:

  • Personally, I’ve always thought that, to be worth it, we should raise 200 thousand dollars per year, minimum (you’ll be able to hear the explanation for this belief of mine in the economic context I presented in my talk).
  • For this fundraiser, we aimed for a “modest” minimum target of 35k and an “optimistic” target of 100k. So, much less than 200k.
  • Early on after the campaign launch, we had to scale back on hopes of hitting the “optimistic” target and set 35k as the new “maximum” we could expect, as it became clear from the trend that we would not reach 100k.
  • Eventually, the fundraiser reached its plateau, at approximately 19K €, a little bit over half of our base target.

We had a flexible funding scheme, a great website and fundraising approach, we have the reputation and the skills to deliver, we had one of the best run campaigns out there (we actually got praised on that)… and yet, it seems that was not enough. After four months spent preparing and curating our fundraiser, at one point we had to reassess our priorities and focus on “more urgent” things than full-time fundraising: improving code, earning a living, etc. Pushing further would have meant many more months of energy-draining marketing work which, as mentioned in the introduction of this post, was not feasible or productive for us at that point in time. Our friends at MediaGoblin certainly succeeded, in big part through their amazing focus and persistence (Chris Webber spent three months writing substantial motivational blog posts twice a week and applying for grants to achieve his goal. Think about it: fourteen blog articles!).

Okay so now you’re thinking, “But you still got a bit of money, so what have you guys done with that?”. We’ve accomplished some great QA/bugfixing work, just not as fast or as extensively as we’d like to. Pitivi 1.0 will happen but, short of seeing a large amount of donations, it will take more time to reach that goal (unless people step up with patches :).

What Mathieu & Thibault have been up to

For starters, they set up a continuous integration and multi-platform build system for quality assurance.

Then they worked on the GStreamer video mixer, basically doing a complete rework of our mixing stack, and made the beast thread-safe… this is supposed to fix a ton of deadlocks related to videomixing that were killing our user experience by causing frequent freezes. They are preparing a blog post specifically on this topic, but in the meantime you can see some gory details by looking at these commits they landed in GStreamer so far (more are on the way, pending review):

Then they pretty much rewrote all of GNonLin with a different, simpler design, and integrated it directly into GES under a new name: “NLE” (Non Linear Engine):

The only part that survived from GNonLin, as far as I know, is the tree data structure generation. So, with “NLE” in GES, deadlocks from GNonLin should be a thing of the past; seeking around should be much more reliable and should not cause freezes like it used to. This is still a major chunk of code: it represents around six thousand lines of new code in GES. Work is ongoing in this branch and is expected to be completed and merged sometime in October, so I’m waiting to see what comes out of it in practice.

This is in addition to crazy bugs like bug 736896 and our regular bug fixing operations. See also the Pitivi tracker bug for GTK3, introspection and GST1 bugs and the Pitivi tracker bug for Python3 port issues.

The way forward

Now that a big chunk of the hardcore backend work has been done, Thibault and Mathieu will be able to focus on Pitivi (the UI) again. Here is the rough plan for coming months:

  1. “Scenarios” in Pitivi: each action from the user would be serialized (saved) as GstValidateScenario actions, allowing us to easily reproduce issues and bugs that you encounter (see the sketch after this list).
  2. Go over the list of all reported Pitivi bugs and fix a crapton of them!
  3. At some point, we will call upon you to help us with extensive testing (including reporting the bugs and discussing with us to investigate them). We will then continue fixing bugs, release often and make sure we reach the quality you can expect of a 1.0 release.
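
To illustrate point 1: a scenario is just a text file of GstValidate action lines. A hand-written sketch (the specific actions Pitivi would actually record are my guess) could look like this:

description, duration=30
seek, playback-time=0.0, start=5.0, flags=accurate+flush
stop, playback-time=10.0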

More news to come

That’s it for today, but you can look forward to more status updates in the near future. The following topics shall be covered in separate blog posts:

  • Details on our thread-safe mixing elements reimplementation
  • A general status update on the outcome of our 2014 GSoC projects
  • The 0.94 release
  • “GNonLin is dead, long live NLE”
  • Closure of the fundraiser
