GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

January 05, 2016

Fedora Workstation and the quest for stability and robustness

One of the things that makes me really happy in terms of the public reception to the Fedora Workstation is all the people calling out how stable and solid it is, as this was and is one of our big goals from the start of the Fedora Workstation effort.

From the start we wanted to bury the old idea of Fedora being only for people who didn’t mind risking a lot of instability in return for being on the so-called bleeding edge. We also wanted to bury the related idea that by using Fedora you were basically alpha testing highly unstable and unfinished software for Red Hat Enterprise Linux. Yet at the same time we wanted to preserve and build upon the idea that Fedora is a great operating system if you want to experience many of the latest developments as they happen. At first glance those two goals might seem contradictory, but we decided we could do both by adjusting our policies a bit and by relying more on the Fedora retrace server as our bug-fixing prioritization tool.

So in terms of policies, the division of Fedora into distinct Server and Workstation products, along with the clearer separation of the spins, allowed us to start making decisions without worrying so much about how they affected use cases other than our own. Sometimes what looks from a user perspective like a bug or breakage was really a non-workstation policy decision getting in the way of the desktop behaving as expected, for instance firewall rules hindering basic desktop functions.

Secondly, we took a more careful approach to what new stuff we brought in and when. We still try to keep on top of major upstream developments and be a leading-edge system, but we do a little mental exercise for each decision to make sure it keeps us ‘leading edge’ rather than ‘bleeding edge’. And if we really want something in, but it isn’t 100% ready for prime time yet, we do what we did with Wayland and the GTK3 port of LibreOffice: we make it available as an option for early adopters, but default to the safer choice while we work out the last wrinkles. (Btw, if you are interested in progress on Wayland, Kevin Martin sent out an email with a link to a good Wayland development status just before the holidays.)

The final piece of the puzzle is regularly checking the Fedora retrace server and identifying important bugs from it. Like almost all developers, we get far more bug reports than we can realistically address, so the data from the retrace server lets us easily identify the crashes that affect the most users, and, just as importantly, filter out the reports that are likely caused by users installing weird stuff on their systems. When we started using retrace, various desktop modules tended to dominate the top three pages when sorting bugs by count. Thanks to a continuous effort over the last few years, desktop modules in the top-crashers list are now few and far between, and when they do appear we make sure to get fixes out quickly. So if you ever wonder whether the data collected by these kinds of systems actually helps the developers of the software you use, I can say that for Fedora it certainly does.

That said, I thought it could be interesting to explain some of the challenges we have in tracking our progress in this area. So let’s start by looking at a graph I pulled from the retrace server.
[Graph: crash statistics from the Fedora retrace server]
Looking at that graph one could say it is clear we have made great strides in improving system stability, and I do believe that is the case. However, the graph doesn’t conclusively prove it; it is just an indication. The reason it is not hard evidence is that there is a lot to take into consideration when reading it. First of all, the numbers are not adjusted for total user population, which means that if you gain or lose a lot of users between releases, it can create the appearance of increased or decreased instability that is actually due to the change in user population, not in how well the system runs on an individual user’s machine. From what we see through other metrics, our user population has been increasing since we launched the Fedora Workstation, so we shouldn’t be getting any ‘help’ in these graphs from a declining user population.
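To illustrate the missing adjustment, here is a tiny sketch, with entirely made-up numbers, of normalizing raw crash counts by user population:

```python
# Hypothetical numbers: raw crash reports and estimated users per release.
crash_reports = {"F21": 5000, "F22": 5500, "F23": 6000}
user_estimate = {"F21": 100000, "F22": 150000, "F23": 200000}

def crashes_per_10k_users(crashes, users):
    """Normalize raw crash counts to a per-10000-users rate."""
    return {release: 10000 * crashes[release] / users[release]
            for release in crashes}

rates = crashes_per_10k_users(crash_reports, user_estimate)
# Raw counts rise release over release, but the per-user rate falls:
# F21: 500.0, F22: ~366.7, F23: 300.0
```

With a growing user base, raw counts can go up even while each individual system is getting more stable, which is exactly the caveat the graph carries.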

A second reason is that there are a lot of false positives being reported here. For instance, we had an issue for a long while where the Intel graphics drivers generated a ton of these crash reports without them actually being crashes as such. While they did represent bugs that should ideally be fixed, they were not issues you would actually have noticed as a user of the system. So we spent some effort between Fedora Workstation 21 and Fedora Workstation 22 to reduce the amount of noise this caused, which was a useful effort in terms of reducing noise on our retrace server, but from a user perspective it didn’t really make a tangible difference. And even with our efforts, there are still a lot of kernel issues showing up here which do not impact users in a way they are likely to perceive as the system being unstable.

A third item that might skew the statistics in a given release is that we currently don’t differentiate between Fedora Workstation and the spins. An issue in one of the spins might generate a lot of bug reports against a module, even though the bug or API-usage issue is never triggered by the Workstation edition. Those items appearing or disappearing affect the statistics, but as a user of Fedora Workstation you would never experience them.

So keeping this in mind, the retrace server is an important tool for us, and one that at least gives us a decent indication of how we are doing on quality. But we can always do better, so we will keep reviewing the reports we get through the ABRT and retrace systems, and I also strongly recommend that any application or library maintainers out there look into what major issues are reported against their own modules.

Quick guide to port an app for Gtk+ 3.20

Recently, there has been a lot of action in the Gtk+ git repository: the so-called CSS nodes work, which supposedly improves Gtk’s CSS features, among other good things. If you want to know more about that, you can check these two great posts by Matthias Clasen: “A GTK+ update” and “CSS boxed in GTK+”. The changes, however, come with a downside: they require some fixups from theme authors and application developers.

Since I just ported GNOME To Do and GNOME Calendar to this new model, and I had quite a hard time tracking down some issues, I decided to write this quick, small and incomplete guide for developers to port their applications to Gtk+ 3.19+.

Note: this is entirely based on my personal experience porting the applications, so please, please, if you find any errors, report them and I’ll fix them immediately.

The 3-step solution

In order to port your application codebase to Gtk+ 3.19+, there are 3 basic steps to follow:

  1. Replace gtk_widget_get_state_flags with gtk_style_context_get_state
  2. Double check gtk_style_context_get* family of functions
  3. Add a CSS name (optional)

Let’s check each step closely. I’m using Calendar’s port to a CSS name as the case study here.

1. Replace gtk_widget_{get/set}_state_flags

Pretty self-explanatory title. An example:

GtkStyleContext *context;
GtkStateFlags state_flags;

context = gtk_widget_get_style_context (widget);
state_flags = gtk_widget_get_state_flags (widget);

turns into

GtkStyleContext *context;
GtkStateFlags state_flags;

context = gtk_widget_get_style_context (widget);
state_flags = gtk_style_context_get_state (context);

Since we’re now using GtkStyleContext API, the same is valid for gtk_widget_set_state_flags. You should use gtk_style_context_set_state instead.

2. Double check gtk_style_context_get*

Now, the part that took most of my time to figure out. Obviously, if I had read Matthias’ blog post carefully, I would’ve fixed it much faster. While tracking down Calendar’s warning messages, I found they were coming from calls like this one (among others):

gtk_style_context_get (context,
                       GTK_STATE_FLAG_SELECTED,
                       "font", &font_desc,
                       NULL);

What’s wrong here? The issue is that we’re passing a state that can be different from the GtkStyleContext’s one.

To fix that, we simply have to set the GtkStyleContext’s state before calling the function, as demonstrated below:

gtk_style_context_save (context);
gtk_style_context_set_state (context, GTK_STATE_FLAG_SELECTED);
gtk_style_context_get (context,
                       gtk_style_context_get_state (context),
                       "font", &font_desc,
                       NULL);
gtk_style_context_restore (context);

That’s all there is to it. Now, the last piece of the cake.

3. Add a CSS name (optional)

This one is quite easy. Instead of being matched in CSS by its class type name, the widget now sets a CSS name in its class_init function. Let’s check:

static void
gcal_event_widget_class_init (GcalEventWidgetClass *klass)
{
  GtkWidgetClass *widget_class = GTK_WIDGET_CLASS (klass);

  [...]

  gtk_widget_class_set_css_name (widget_class, "event-widget");
}

Now we have to update the theme file. In GNOME Calendar, the theme is stored in gtk-style.css:

GcalEventWidget {
  /* CSS properties */
}

turns into

event-widget {
  /* CSS properties */
}

Tada! It’s done. You have now ported your widget to use a CSS name.

Conclusion

I reiterate that this is based solely on my personal experience adjusting the widgets, and is prone to error. This is ongoing work, so there may be changes after this guide is published (but I’ll try to keep this document updated).

Feel free to share your comments, ideas and rants :)

 

January 04, 2016

Eye tracking in usability tests

A few weeks ago, I read an opensource.com article about a Python-based open source eye tracking tool.

Eye tracking can be an important tool in usability testing. When we conduct a usability test, we usually ask participants to speak aloud whatever they are thinking during the test. For example, if the tester is looking for a Print button, we encourage the tester to say "I'm looking for a Print button." Using the "speak aloud" method allows the moderator or observer to take notes on what happened while the tester was trying to complete each scenario task.

This works well as long as testers are willing to talk out loud and give a "stream of consciousness" narration. Some testers do this better than others; some prefer not to do it at all. But without that input, we don't know why a tester had problems completing a task. Was the tester looking for a menu instead of an icon on the tool bar? Where was the tester looking on the screen for the solution? If we know the answers to these questions, we can better understand how users approach the software. In turn, the designers and developers can modify the interface to make the software easier to use.

That's where I wish for easily available eye tracking. And with PyGaze, it looks like this may finally be within reach of open source usability testing! PyGaze is a set of software that, among other things, provides eye tracking. You can learn more about PyGaze, including samples of heat maps, fixation maps, and scan paths, at the page that describes PyGaze Analyzer.

Here's a sample image from the PyGaze website, showing an eye tracking session for a website. "Figure 7 shows that our volunteer first looked at the pictures on the documentation site of OpenSesame (an open-source graphical experiment builder that’s becoming popular in the social sciences), and then started to read the text."



As we do more usability testing with GNOME via Outreachy, I hope our future interns can get PyGaze working so we can examine eye tracking along with our other usability data.

2016-01-04 Monday.

  • Mail chew; thankfully not so much, poked at patches, re-worked GL / bitmap context use, cleaned up various GL warnings. A number of 1:1's, partner call; team meeting(s).
  • Dinner; fixed a random 'base' bug - someone has to; pushed a number of fixes variously.

Unboxing a Siswoo C55

I have now been the owner of a Siswoo Longbow C55 for a couple of days. It’s a 5.5″ Chinese smartphone with an interesting set of specs for the 130 EUR it costs. For one, it has a removable 3300mAh battery. That powers the phone for two days, which I consider quite good. A removable battery is harder and harder to get these days :-/ But I absolutely want to be able to replace the battery in case it wears out, hard-reboot the phone when it locks up, or simply make sure that it’s off. It also has 802.11a WiFi, which seems rare for phones in that price range. Another very rare thing these days is an IR interface; the Android 5.1 based firmware comes with a remote-control app for various TVs, aircons, DVRs, etc. The new Android version is refreshing and fun to use. I don’t count on getting updates though, although the maker seems to be open about it.

The phone does not have NFC, but something called HotKnot. The feature is described as being similar to NFC, but it works via induction on the screen, so when you want to connect two devices, you need to make the screens touch. I haven’t tried it out yet, simply because I haven’t seen anyone else with that technology. The phone also does not have illuminated lower buttons, so if you depend on those, it won’t work for you. A minor annoyance for me is the missing notification LED. I do wonder why such a cheap part is not built into these cheap Chinese phones. I think it’s a very handy indicator, and it annoys me to have to power on the screen only to see whether I have received a message.

I was curious whether the firmware on the phone matches the official firmware offered on the web site. So I got hold of a GNU/Linux version of the flash tool, which is a Qt-based blob. Still better than running Windows… The tool started but couldn’t make contact with the phone. I was pulling my hair out trying to find out why. Eventually, I took care of ModemManager, i.e. systemctl disable ModemManager, or do something like sudo mv /usr/share/dbus-1/system-services/org.freedesktop.ModemManager1.service{,.bak} and kill modem-manager. Apparently it got in the way when the flash tool was trying to establish a connection. I have yet to find out whether this

/etc/udev/rules.d/21-android-ignore-modemmanager.rules

works for me:

ACTION!="add|change|move", GOTO="mm_custom_blacklist_end"
SUBSYSTEM!="usb", GOTO="mm_custom_blacklist_end"
ENV{DEVTYPE}!="usb_device", GOTO="mm_custom_blacklist_end"
ATTR{idVendor}=="0e8d", ATTR{idProduct}=="2000", ENV{ID_MM_DEVICE_IGNORE}="1"
LABEL="mm_custom_blacklist_end"

I “downloaded” the firmware off the phone and compared it with the official firmware. At first I was concerned because they didn’t hash to the same value, but it turns out that the flash tool can only download full blocks and the official images do not seem to be aligned to full blocks. Once I took as many bytes of the phone’s firmware as the original firmware images had, the hash sums matched. I haven’t found a way yet to get full privileges on that Android 5.1, but given that flashing firmware works (sic!) it should only be a matter of messing with the system partition. If you have any experience doing that, let me know.
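The comparison trick described above, taking only as many bytes of the dump as the official image has before hashing, can be sketched in a few lines of Python (the function and file names are mine):

```python
import hashlib

def firmware_matches(dumped_path, official_path):
    """Compare a flash dump against an official image, ignoring the
    trailing padding the flash tool adds to fill the last block."""
    with open(official_path, "rb") as f:
        official = f.read()
    with open(dumped_path, "rb") as f:
        # Only read as many bytes from the dump as the official image has.
        dumped = f.read(len(official))
    return hashlib.sha256(dumped).digest() == hashlib.sha256(official).digest()
```

Hashing the truncated dump instead of the whole block-aligned download is what makes the sums line up.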

The device performs sufficiently well. Battery life is good, and the 2GB of RAM make it unlikely for the OOM killer to stop applications. What is annoying, though, is the sheer size of the device. I found 5.0″ too big already, so 5.5″ is simply too much for my hands; using the phone single-handedly barely works. I wonder why there are so many huge devices out there now. Another minor annoyance is that some applications simply crash. I guess they don’t handle the 64-bit architecture well or have problems with the Android 5.1 APIs.

FWIW: I bought from one of those Chinese shops with a European warehouse and their support seems to be comparatively good. My interaction with them was limited, but their English was perfect and, so far, they have kept what they promised. I pre-ordered the phone and it was sent a day earlier than they said it would be. The promise was that they take care of the customs and all and they did. So there was absolutely no hassle on my side, except that shipping took seven days, instead of, say, two. At least for my order, they used SFBest as shipping company.

Do you have any experience with (cheap) Chinese smartphones or those shops?

Using NetworkManager to export your WiFi settings as a barcode

With my new phone, I needed to migrate all my WiFi settings. For some reason, it seems hard to export WiFi configuration from one Android phone and import it into another. The same holds true for GNOME, I guess.

The only way of getting WiFi configuration into your Android phone (when you cannot write the wpa_supplicant file) seems to be barcodes! Using a barcode reader application, you can scan a code in a certain format, and the application will then create the WiFi configuration for you.

I quickly cooked up something that allows me to “export” my laptop’s NetworkManager WiFis via a QR code. You can run create_barcode_from_wifi.py and it creates a barcode of your currently active configuration, if any. You will also see a list of known configurations which you can then select via the index. The excellent examples in the NetworkManager’s git repository helped me to get my things done quickly. There’s really good stuff in there.
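The payload such a QR code carries follows the de-facto WIFI: scheme understood by Android barcode scanners. A minimal sketch of building the string (the helper name and the escaping implementation are mine):

```python
import re

def wifi_qr_payload(ssid, password, security="WPA", hidden=False):
    """Build a 'WIFI:' payload as understood by Android barcode scanners."""
    def esc(value):
        # Backslash-escape the characters that are special in the format.
        return re.sub(r'([\\;,:"])', r'\\\1', value)
    hidden_part = "H:true;" if hidden else ""
    return 'WIFI:T:{};S:{};P:{};{};'.format(
        security, esc(ssid), esc(password), hidden_part)

# wifi_qr_payload("home", "secret") -> 'WIFI:T:WPA;S:home;P:secret;;'
```

Feeding the resulting string to PyQRCode (e.g. pyqrcode.create(payload)) gives the scannable code.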

I found out that I needed to explicitly render the QR code black on white, otherwise the scanning app wouldn’t work nicely. Also, I needed to make the terminal’s font smaller, or go fullscreen with F11, in order for the barcode to be printed fully on my screen. If you have a smaller screen than, say, 1360×768, I guess you will have a problem using that. In that case, you can simply let PyQRCode render a PNG, EPS, or SVG. Funnily enough, I found it extremely hard to print either of those formats on an A4 sheet. The generated EPS looks empty:

Printing that anyway through Evince makes either CUPS or my printer die. Converting with ImageMagick, using convert /tmp/barcode.eps -resize 1240x1753 -extent 1240x1753 -gravity center -units PixelsPerInch -density 150x150 /tmp/barcode.eps.pdf
makes everything very blurry.

Using the PNG version with Eye of GNOME does not allow scaling the image up to my desired size, although I do want to print the code as big as possible on my A4 sheet:

Now you could argue: well, just render your PNG bigger. But I can’t; it seems to be a limitation of the PyQRCode library. But there is the SVG, right? Turns out that eog still doesn’t allow me to print the image any bigger. Needless to say, I didn’t have Inkscape installed to make it work… So I went ahead and used LaTeX instead.

Anyway, you can get the code on github and gitlab. I guess it might make sense to push it down to NetworkManager, but as I am more productive in writing Python, I went ahead with it without thinking much about proper integration.

After being able to produce Android-compatible WiFi QR codes, I also wanted to be able to scan those with my GNOME laptop, to avoid having to enter passwords manually. The ingredients for a solution to this problem are parsing the string encoded in the barcode and creating a connection via the excellent NetworkManager API. Creating the connection is comparatively easy, given that an example already exists. Parsing the string, however, is a bit more complex than I initially thought. The grammar of that WiFi encoding language is a bit insane, in the sense that it allows multiple encodings for the same thing and that it is not clear how to encode (or decode) certain networks. For example, imagine your password is 12345678. The encoding format wants to know whether that is ASCII characters or a hex-encoded passphrase (i.e. the hex-encoded bytes 0x12, 0x34, 0x56, 0x78). In the former case, the encoded passphrase must be quoted with double quotes, e.g. P:"12345678";. Fair enough. Now, let’s imagine the password is "12345678" (yes, with the quotes). Then you need to hex encode that ASCII string to P:22313233343536373822. But, as it turns out, that’s not what people have done, so I have seen quite a few weird QR codes for WiFis out there :(
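Following the rules just described (quoted means a literal ASCII passphrase, an unquoted even-length hex string means hex-encoded bytes), a decoder for the P: field could look like the sketch below; the fallback branch is my own guess, given how inconsistent real-world codes are:

```python
import re

def decode_passphrase(field):
    """Decode the P: field of a 'WIFI:' barcode payload."""
    # Quoted: the passphrase is the literal ASCII between the quotes.
    if len(field) >= 2 and field.startswith('"') and field.endswith('"'):
        return field[1:-1]
    # Unquoted, even-length hex: the passphrase is the decoded bytes.
    if re.fullmatch(r"(?:[0-9A-Fa-f]{2})+", field):
        return bytes.fromhex(field).decode("latin-1")
    # Anything else: many encoders just drop the raw passphrase in.
    return field

# Note the ambiguity: an unquoted '12345678' decodes as hex bytes,
# not as the eight ASCII digits a careless encoder probably meant.
```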

Long story short, the scan_wifi_code.py program should also scan your barcode and create a new WiFi connection for you.

Do you have any other ideas how to migrate wifi settings from one device to another?

A Quick Update

Happy 2016 everyone!

While I did mention a while back (almost two years ago, wow) that I was taking a break, I realised recently that I hadn’t posted an update from when I started again.

For the last year and a half, I’ve been providing freelance consulting around PulseAudio, GStreamer, and various other directly and tangentially related projects. There’s a brief list of the kind of work I’ve been involved in.

If you’re looking for help with PulseAudio, GStreamer, multimedia middleware or anything else you might’ve come across on this blog, do get in touch!

Maps and Outreachy

Outreachy is the successor of the Outreach Program for Women (OPW). OPW was inspired by Google Summer of Code and by how few women applied for it.

The program was renamed to Outreachy with the goal of expanding to engage people from various underrepresented groups and was moved to Software Freedom Conservancy as its organizational home.

For this period (December 2015 - March 2016) Maps has two Outreachy interns!

Amisha Singla will be working on adding the possibility of printing routes in Maps, working towards the preliminary mockups done by Andreas Nilsson.

Her blog has been connected to Planet GNOME, but to see her earlier blog posts, please see the direct link here.


Hashem Nasarat will be working on bringing support for KML/KMZ to Maps, as well as formalizing the UX and UI for dealing with custom layers.

He is currently working on realizing these mockups:
His blog is also on Planet GNOME and the direct link is here.

Both interns are doing awesome work, and we are very, very lucky to have them! Maps needs them, and it needs you too: for design, code, and evaluation of infrastructure and workflow!





Made a Blog

[Screenshot of the blog]

(for some definition of the word ‘made’)

I didn’t program a blogging engine from scratch, but I did spend a bit too much time provisioning a VPS with Linode, setting up Debian Jessie, learning about DNS Zones, Whois privacy, vanity TLDs, squatted domains, registrar differences — I eventually settled on dynadot… I wanted to buy from hover because they seem vaguely feminist, but they couldn’t offer privacy for mx domains — working around buggy public key ssh-agent logins, fighting with out-of-date and incorrectly-documented Debian-made WordPress packages, throwing everything out to go with the upstream release, and fighting with Pretty Permalinks, RSS, and apache2 configurations.

What’s next on my todo list?

  • Get blog 

Work starts to continue

[Image showcasing the new UI and GeoJSON overlays in GNOME Maps.]

It’s almost the new year, so I’m pleased to have a series of patches ready for review that implement an important part of the UI needed for my Outreachy project. As soon as the patches get reviewed (and touched up), you’ll be able to manage the display of GeoJSON overlay files straight from the UI! Next up is the meat of my project: adding support for KML overlay files.

Outreachy has been a fair bit of work as of now, but it’s all been very rewarding. It’s amusing that I’ve spent every winter for the past 3 years working on and learning GNOME stuff. It’s nice finally getting compensated a bit this time around! :) For a few days this past week I’ve been helping get another newcomer on IRC set up GNOME Maps so they can contribute too. It’s hard getting a feeling of progress over so many years of discontinuous work, but little things stick out to me every once in a while: knowing who to talk to, understanding some of the history of why things are the way they are, being able to wrestle with jhbuild, finally slightly understanding a bit of autotools… things like that.

On one hand it’s good that amisha, jonas, and I are all spaced 6 or 12 hours apart, because someone is always available on IRC to help newbies! On the other hand, it’s made finding times for collaboration a little challenging, but I don’t know how much of that is due to people being busy during the new year and other holidays. As always, my biggest challenge is staying productive! I’ve found that libraries work well for me. Apart from that, my goal for the new year is to be as disciplined as this OPW intern was a year ago!

Happy new year!

January 03, 2016

2016-01-03 Sunday.

  • Up lateish; breakfast, to St. Peters & St. Pauls with the babes. Back for lunch, drove home in the rain. Babes watched E.T. Continued 1000 years of Annoying the French - a rather curious book.

January 02, 2016

Babel 2.2 Released

Good evening everybody!

Today I have the pleasure of announcing the release of Babel 2.2 – everybody’s favourite Python internationalization library.

This release features new data from CLDR 28, official support for Python 3.5 (that was about time…), bugfixes, performance improvements and a lot of other features. You can read up on all changes at https://github.com/python-babel/babel/releases/tag/2.2.0.
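In case you have never used it: Babel puts that CLDR data behind a small API, so locale-aware formatting is a one-liner. A quick illustration (mine, not taken from the release notes):

```python
from datetime import date
from babel.dates import format_date
from babel.numbers import format_decimal

d = date(2016, 1, 2)
# The same date, rendered per locale from CLDR data.
print(format_date(d, format="long", locale="en"))  # e.g. "January 2, 2016"
print(format_date(d, format="long", locale="de"))  # e.g. "2. Januar 2016"
# Number formatting picks up the locale's separators, too.
print(format_decimal(1234.5, locale="de"))         # e.g. "1.234,5"
```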

I am especially happy to report that Aarni Koskela has joined the Babel developers. He helped us review PRs and created quite an impressive amount of code himself. This helped tremendously in giving life back to Babel and bringing this release to you. (It is no secret that the rest of us have trouble finding enough time to maintain the Babel project the way we’d love to.)

Our community is small – but it gets stronger every day: the lost child gets a new family. Many thanks to everyone who contributed to Babel!

Cheers!

January 01, 2016

Climbing up...

Hello Pals,

With a warm welcome to the new year and a joyous ending to the old one, I am back with project updates. I left the previous post at rendering the Map View. In the last two weeks, I have worked on rendering the Map View along with some text.

For rendering the text, I got familiar with Pango. Pango is a library for laying out and rendering text. It can be used anywhere text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+.



After this I tried my hand at getting a Map View with some text. To render the text I used a Pango layout with Cairo as the backend. Basically, Cairo sets the source surface, and then PangoCairo glues them together.



Now, the actual requirement for having this text is to print route directions along with the Map View. I have fetched the route directions using
"Application.routeService.route.turnPoints[index].instruction"
which gives the instruction for each turn point appearing in the route, by index.



My mentor Jonas suggested one more method of doing it: using an OffscreenWindow, as it will give the same look (the icon, instruction and distance) as in the instruction box. Following is a screenshot of the same:


Currently I am working on getting a comprehensive understanding of OffscreenWindow. After a meetup with the design team, we can make the final decision on how to proceed.

Till then, stay tuned pals. Wish you all a happy and prosperous new year. :)

Cheers,
Amisha


The current state of boot security

I gave a presentation at 32C3 this week. One of the things I said was "If any of you are doing seriously confidential work on Apple laptops, stop. For the love of god, please stop." I didn't really have time to go into the details of that at the time, but right now I'm sitting on a plane with a ridiculous sinus headache and the pseudoephedrine hasn't kicked in yet so here we go.

The basic premise of my presentation was that it's very difficult to determine whether your system is in a trustworthy state before you start typing your secrets (such as your disk decryption passphrase) into it. If it's easy for an attacker to modify your system such that it's not trustworthy at the point where you type in a password, it's easy for an attacker to obtain your password. So, if you actually care about your disk encryption being resistant to anybody who can get temporary physical possession of your laptop, you care about it being difficult for someone to compromise your early boot process without you noticing.

There are two approaches to this. The first is UEFI Secure Boot. If you cryptographically verify each component of the boot process, it's not possible for a user to compromise the boot process. The second is a measured boot. If you measure each component of the boot process into the TPM, and if you use these measurements to control access to a secret that allows the laptop to prove that it's trustworthy (such as Joanna Rutkowska's Anti Evil Maid or my variant on the theme), an attacker can compromise the boot process but you'll know that they've done so before you start typing.

So, how do current operating systems stack up here?

Windows: Supports UEFI Secure Boot in a meaningful way. Supports measured boot, but provides no mechanism for the system to attest that it hasn't been compromised. Good, but not perfect.

Linux: Supports UEFI Secure Boot[1], but doesn't verify signatures on the initrd[2]. This means that attacks such as Evil Abigail are still possible. Measured boot isn't in a good state, but it's possible to incorporate with a bunch of manual work. Vulnerable out of the box, but can be configured to be better than Windows.

Apple: Ha. Snare talked about attacking the Apple boot process in 2012 - basically everything he described then is still possible. Apple recently hired the people behind Legbacore, so there's hope - but right now all shipping Apple hardware has no firmware support for UEFI Secure Boot and no TPM. This makes it impossible to provide any kind of boot attestation, and there's no real way you can verify that your system hasn't been compromised.

Now, to be fair, there's attacks that even Windows and properly configured Linux will still be vulnerable to. Firmware defects that permit modification of System Management Mode code can still be used to circumvent these protections, and the Management Engine is in a position to just do whatever it wants and fuck all of you. But that's really not an excuse to just ignore everything else. Improving the current state of boot security makes it more difficult for adversaries to compromise a system, and if we ever do get to the point of systems which aren't running any hidden proprietary code we'll still need this functionality. It's worth doing, and it's worth doing now.

[1] Well, except Ubuntu's signed bootloader will happily boot unsigned kernels which kind of defeats the entire point of the exercise
[2] Initrds are built on the local machine, so we can't just ship signed images


December 31, 2015

2015 Learning Retrospective


  • The year began well. Started working on keeri with an aim to implement a distributed database, thereby learning the distributed systems concepts and leveraging my storage / filesystem experience. 
  • Took the coursera's cloud computing concepts course to understand the fundamentals that will help in implementing keeri
  • As part of the database implementation, needed to implement a SQL parser which would convert given SQL statements into a syntax tree. Took a Coursera course on compilers. Implemented a decent recursive-descent (note the wordplay) parser that processes SQL queries with parentheses, logical operators and relational operators.
  • Having already grown tired of the non-core aspects of the "distributed" database, abandoned the project temporarily.
  • Need to learn more about NewSQL technologies, especially around how they help with better tooling (for IDEs and the like) and with parallelism.
  • Studied a bit of database literature around ARIES, VoltDB etc.
  • Attempted to read The Part-Time Parliament but lost interest midway after reading the Raft paper, which serves a similar purpose but is a lot simpler to read and follow. Did a paper-reading session together with Sureshkumar Thangavel for this.
  • Played around with continuous integration systems (Travis, Jenkins, etc.) out of interest, which later helped in projects at two different dayjobs.
  • Wasted a lot of time, pretending to be preparing for interviews but did not do anything more than chatting with job change aspirants. But no complaints as time enjoyed is not time wasted.
  • Learnt in a bit more detail about queueing systems (Amazon SQS and RabbitMQ, to be specific).
  • Wrote some test / tutorial programs for the Amazon Go SDK
  • Did a few prototypes using Go for the API backend, with Angular and React as the web frontends, for a few project ideas. Was bothered by the fatigue induced by the constant reinvention in frontend JS technologies. The future looks potentially even more heavily fragmented, with no sanity on the horizon.
  • Learnt to create Docker images. Did some non-trivial dockerization for a legacy product with my then employer. Wanted to check out Kubernetes and rocket and potentially provide patches, but lost interest.
  • Wrote a bunch of long blog posts which triggered some nice private discussions. 1    2    3
  • Worked a little bit on ithavi - the book on operating systems in tamil, but shamefully minuscule progress. Should do more next year at least.
  • The year began well but lost steam midway, probably due to the decision to change the dayjob after 10+ years with SUSE/Novell. It led to distraction, lack of interest and some sentimental times leading to lesser productivity towards hobby projects. Hopefully the next year will be better, but with a job in a startup that works fast, I am not sure how much bandwidth I may have.
  • Still not convinced if I should work on any of this system software anymore or if I should focus on some other paradigm that is in its infancy. Ken Thompson and Dennis Ritchie worked on Unix when OSes were not mature. Leslie Lamport worked on distributed systems papers which became valuable only after more than two decades. Go is now using a paper on garbage collection by Dijkstra and Lamport written in the 70s. So, I am wondering if I should focus on some problems / technologies whose time has not come yet, to feel that excitement of walking in uncharted territory. There are a few options like quantum cryptography etc. which have good theorists who need programmers. If I could collaborate with such intelligent people and synergistically add some value, it would be satisfying. I briefly discussed with some researchers in India (IIT Madras, TIFR etc.) about doing a PhD or helping as an assistant. But there has been no progress, and nothing sounds too promising if the work has to be done from India, thanks to our country's brain drain and the Government of India's focus on doing research on loony vedic technologies instead of on useful things. That is enough rant for the year :)
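The recursive-descent parser mentioned in the bullets above can be illustrated with a minimal Python sketch that handles parentheses, logical operators and relational operators. This is a toy with hypothetical names, not the keeri code; it builds a nested-tuple tree instead of a real AST:

```python
import re

# One capturing group; findall returns the token text. Multi-char
# operators must appear before their single-char prefixes.
TOKEN = re.compile(r"\s*(\(|\)|AND|OR|<=|>=|<>|=|<|>|\w+)")

def tokenize(expr):
    return TOKEN.findall(expr)

class Parser:
    """Recursive-descent parser for WHERE-clause-style expressions.

    Precedence (loosest first): OR, AND, relational comparison.
    Each grammar rule gets its own parse_* method, hence the name.
    """
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def parse_or(self):
        node = self.parse_and()
        while self.peek() == "OR":
            self.eat()
            node = ("OR", node, self.parse_and())
        return node

    def parse_and(self):
        node = self.parse_rel()
        while self.peek() == "AND":
            self.eat()
            node = ("AND", node, self.parse_rel())
        return node

    def parse_rel(self):
        if self.peek() == "(":          # parenthesized sub-expression
            self.eat()
            node = self.parse_or()
            assert self.eat() == ")", "unbalanced parentheses"
            return node
        left, op, right = self.eat(), self.eat(), self.eat()
        return (op, left, right)
```

For example, `Parser(tokenize("a = 1 AND (b < 2 OR c > 3)")).parse_or()` yields a tree whose root is the `AND`, with the parenthesized `OR` as its right child.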
This is a series of blog posts that I write every year to document and reflect on the learnings I have had outside the dayjob. Previous editions: 2014, 2013

Happy New Year!

2016happynewyear

All of the ZeMarmot team wishes you a fun and happy new year. Thanks (for all the fish!), 2015. Here we come, 2016!

End of Year - 2015

Review of 2015

Another year has gone by and I guess it is time to review the things I set out to do and grade myself on how well (or poorly) I fared. Here are some of my goals for 2015:

Read 70 Books

Grade: PASS

Even though I had a very, very busy year at work, with many releases of Red Hat Satellite 5 and Red Hat Satellite 6 shipped to our customers, I managed to surpass my goal of reading 70 books, finishing the year with a whopping 79 books read! You can see the books I read here: Year in Books

This year I also spent a good chunk of my time looking at old, used books, and my personal book collection increased considerably. At one point I had so many piles of books lying around the house that I had to buy 4 new book cases to store them. At first I wanted to have them custom made, but the estimates I got from 3-4 different people were way out of my budget. In the end I went with 4 Billy Bookcases from Ikea, which cost me about 10 times less!

If you want to see what I'm reading or want to recommend a book which you think I might enjoy reading, please feel free to add me on GoodReads.

coala platypus

Hi everybody!

I’m pleased to announce the third alpha release (platypus!) of the coala project – because free developer tools matter. It has been a long time since the last release but it paid off: while until now, coala was merely a nice little framework with no actual use during software development, it has matured a lot and gained analysis functionality for a lot of languages.

Although coala’s primary purpose is to make the creation of analysis routines easy, we have made an effort to include the functionality of other open source linters in it. coala can automatically fix the indentation of your Octave files, sort and correct Python imports or add a missing dereferencing operator to your C++ code (greetings from Clang!) – the list is growing every week. Try running coala with the -A argument to see what we’ve got!

We also made an effort to round out our general features: you can now easily ignore certain files or regions within them – with actually readable code:

# Start ignoring PyImportSortBear, because those imports rely on sys.path
sys.path.insert(0, ".")
from somewhere import something
from somewhere_else import something
# Stop ignoring

We have also added the option to automatically apply patches delivered by specific bears – if you want to let coala clean your whole codebase without having to acknowledge each patch on its own.

Want to know more? Visit http://coala-analyzer.org/ and most importantly drop us a note on our Gitter channel at https://gitter.im/coala-analyzer/coala.

Want to help? Visit us on our Gitter channel as well! It’s dead easy to integrate a new linter with coala, and if you want to write your own analysis routines (e.g. for a thesis) without having to care about the other stuff, you’ve come to the right place.

Many thanks go especially to Mischa Krüger, Abdeali Kothari and Fabian Neuschmidt for helping me drive this free-time project forward!

December 30, 2015

A Requiem for Ian Murdock

[ This post was crossposted on Conservancy's website. ]

I first met Ian Murdock gathered around a table at some bar, somewhere, after some conference in the late 1990s. Progeny Linux Systems' founding was soon to be announced, and Ian had invited a group from the Debian BoF along to hear about “something interesting”; the post-BoF meetup was actually a briefing on his plans for Progeny.

Many of the details (such as which conference and where on the planet it was), I've forgotten, but I've never forgotten Ian gathering us around, bending my ear to hear in the loud bar, and getting one of my first insider scoops on something big that was about to happen in Free Software. Ian was truly famous in my world; I felt like I'd won the jackpot of meeting a rock star.

More recently, I gave a keynote at DebConf this year and talked about how long I've used Debian and how much it has meant to me. I've since then talked with many people about how the Debian community is rapidly becoming a unicorn among Free Software projects — one of the last true community-driven, non-commercial projects.

A culture like that needs a huge group to rise to fruition, and there are no specific actions that can ensure creation of a multi-generational project like Debian. But, there are lots of ways to make the wrong decisions early. As near as I can tell, Ian artfully avoided the project-ending mistakes; he made the early decisions right.

Ian cared about Free Software and wanted to make something useful for the community. He teamed up with (for a time in Debian's earliest history) the FSF to help Debian in its non-profit connections and roots. And, when the time came, he did what all great leaders do: he stepped aside and let a democratic structure form. He paved the way for the creation of Debian's strong Constitutional and democratic governance. Debian has had many great leaders in its long history, but Ian was (effectively) the first DPL, and he chose not to be a BDFL.

The Free Software community remains relatively young. Thus, the loss of our community members jars us in the manner that uniquely unsettles the young. In other words, anyone we lose now, as we've lost Ian this week, has died too young. It's a cliché to say, but I say anyway that we should remind ourselves to engage with those around us every day, and to welcome new people gladly. When Ian invited me around that table, I was truly nobody: he'd never met me before — indeed no one in the Free Software community knew who I was then. Yet, the mere fact that I stayed late at a conference to attend the Debian BoF was enough for him — enough for him to even invite me to hear the secret plans of his new company. Ian's trust — his welcoming nature — remains for me unforgettable. I hope to watch that nature flourish in our community for the remainder of all our lives.

From a lawyer who hates litigation

Before I started working in free and open source software, before I found out I had a heart condition and became passionate about software freedom, I was a corporate lawyer at a law firm. I worked on various financial transactions. There were ups and downs to this kind of work but throughout I was always extremely vocal about how happy I was that I didn’t do any litigation.

Litigation is expensive and it is exhausting. As a lawyer you’re dealing with unhappy people who can’t resolve their problems in a professional manner, whose relationships, however rosy they may have been, have completely broken down. When I started working in free and open source software, I started out primarily as a nonprofits lawyer. As I did more in copyright and trademark, I continued to avoid GPL litigation. I wasn’t really convinced that it was needed and I was sure I wanted no part of the actual work. I was also pretty license agnostic. X.Org, the Apache Foundation and other permissively licensed projects were my clients and their passion for free software was very inspiring. I did think that the legal mechanisms in copyleft were fascinating.

Like Keith Packard, my view has changed considerably over the years. I became frustrated seeing companies wrest control of permissively licensed projects, or more often, engineer that from the outset. I’ve seen developers convinced that the only way a new project will gain adoption is through a lax permissive license only to find down the road that so much of their code had been proprietarized. I think there are times that a permissive license may be the right choice, but I’m now thoroughly convinced about the benefits of copyleft. Seeing the exceptional collaboration in the Linux kernel, for example, has sold me.

But as Bradley put it in our oggcast, “The GPL is not magic pixie dust.” Just choosing a license is not enough. As you surely have too, I’ve seen companies abuse rights granted to them under the GPL over and over again. As the years pass, it seems that more and more of them want to walk as close to the edge of infringement as they can, and some flagrantly adopt a catch-me-if-you-can attitude.

As a confrontation-averse person who has always hated litigation, I was certain that I would be able to help with the situation and convince companies to do the right thing. I really thought that some plucky upbeat bridge building would make the difference and that I was just the woman to do it. But what I found is that these attempts are futile if there are no consequences to violating the license. You can talk about compliance until you are blue in the face, run webinars, publish educational materials, form working groups and discussion lists but you cannot take the first step of asking for compliance if at some point someone isn’t willing to take that last step of a lawsuit. We at Conservancy are committed to doing this in the ways that are best for long-term free software adoption. This is hard work. And because it’s adversarial, no matter how nicely we try to do it, no matter how much time we give to companies to come into compliance and no matter how much help we try to give, we can’t count on corporate donors to support it (though many of the individuals working at those companies privately tell me they support it and that it helps them be able to establish budgets around compliance internally).

Conservancy is a public charity, not a for profit company or trade association. We serve the public’s interest. I am deeply convinced that GPL enforcement is necessary and good for the free software ecosystem. Bradley is too. So are the members of our Copyleft Compliance Projects. But that’s simply not enough. It’s not enough from a financial perspective and it’s not enough from an ideological one either. What matters is what the public thinks. What matters is what you think. This fundraiser is not a ploy to raise more money with an empty threat. If we can’t establish support for enforcement then we just shouldn’t be doing it.

Despite the fact that I am an employee of the organization, I am myself signing up as a Conservancy Supporter (in addition to my FSF associate membership). I hope you will join me now too. GPL enforcement is too important to hibernate.

Frogr 1.0 released

I’ve just released frogr 1.0. I can’t believe it took me 6 years to move from the 0.x series to the 1.0 release, but here it is finally. For good or bad.

Screenshot of frogr 1.0

This release is again a small increment on top of the previous one that fixes a few bugs, should make the UI look a bit more consistent and “modern”, and includes some cleanups at the code level that I’ve been wanting to do for some time, like using G_DECLARE_FINAL_TYPE, which helped me get rid of ~1.7K LoC.

Lastly, I’ve created a few packages for Ubuntu in my PPA that you can use right away if you’re on Vivid or later, until it gets packaged by the distro itself, although I’d expect it to eventually be available via the usual means in different distros, hopefully soon. For extra information, just take a look at frogr’s website at live.gnome.org.

Now remember to take lots of pictures so that you can upload them with frogr :)

Happy new year!

December 27, 2015

libsmartcols-bindings 0.0.2

I’ve just released version 0.0.2 of libsmartcols-bindings where I added Ruby support, improved Perl support and fixed some things.

December 25, 2015

Skizze - A probabilistic data-structures service and storage (Alpha)


At my day job we deal with a lot of incoming data for our product, which requires us to be able to calculate histograms and other statistics on the data-stream as fast as possible.

One of the best tools for this is Redis, which will give you 100% accuracy in O(1) (except for its HyperLogLog implementation, which is a probabilistic data-structure). All in all, Redis does a great job.
The problem with Redis for me personally is that, when using it for hundreds of millions of counters, I could end up with gigabytes of memory.

I also tend to use Top-K, which is not implemented in Redis but can be built on top of the ZSet data-structure via Lua scripting. The Top-K data-structure is used to keep track of the top "k" heavy hitters in a stream without having to keep track of all "n" flows (k < n), with O(1) complexity.
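The core idea behind such a Top-K structure can be sketched with a Space-Saving-style counter table in Python. This is an illustrative toy under the assumption of a simple dict of at most k counters, not the Redis/Lua or Skizze implementation, and `top_k` is a hypothetical helper name:

```python
def top_k(stream, k):
    """Space-Saving-style heavy hitters: keep at most k counters.

    When a new item arrives and the table is full, the minimum
    counter is evicted and the newcomer inherits its count plus one,
    so counts can overestimate but true heavy hitters survive.
    """
    counts = {}
    for item in stream:
        if item in counts:
            counts[item] += 1
        elif len(counts) < k:
            counts[item] = 1
        else:
            # Evict the smallest counter; the new item inherits it.
            victim = min(counts, key=counts.get)
            counts[item] = counts.pop(victim) + 1
    # Return (item, count) pairs, heaviest first.
    return sorted(counts.items(), key=lambda kv: -kv[1])
```

On a stream dominated by a few flows, the dominant items reliably end up at the top even though rare items may carry inflated counts.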

Anyhow, when dealing with a massive amount of data, the interest is most of the time in the heavy hitters, which can be estimated using less memory, with O(1) complexity for reading and writing (that is, if you don't care about a count being 124352435 instead of 124352011, because on the UI of an app you will be showing "over 124 million").

There are a lot of algorithms floating around and used to solve counting, frequency, membership and top-k problems, which in practice are implemented and used as part of a data-stream pipeline where stuff is counted, merged then stored.

I couldn't find a one-stop-shop service to fire & forget my data at.

Basically, the need for a solution where I can set up sketches to answer cardinality, frequency, membership and ranking queries about my data-stream (without having to reimplement the algorithms in a pipeline embedded in Storm, Spark, etc.) led to the development of Skizze (which is in an alpha state).

What is Skizze?

Skizze ([ˈskɪt͡sə]: German for sketch) is a probabilistic data-structures (sketch) service & store to deal with all problems around counting and sketching using probabilistic data-structures. (https://github.com/seiflotfy/skizze)

Unlike a Key-Value store, Skizze does not store values, but rather appends values to sketches, to solve frequency and cardinality queries in near O(1) time, with minimal memory footprint.
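To illustrate the "append values to a sketch, then query" model, here is a toy Count-Min sketch in Python. This is a conceptual sketch only (Skizze itself is written in Go and uses Count-Min-Log); the class and parameter names here are made up for illustration:

```python
import hashlib

class CountMinSketch:
    """Minimal Count-Min sketch: `depth` rows of `width` counters.

    increment() bumps one counter per row; estimate() takes the
    minimum across rows, which may overestimate but never undercounts.
    """
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # Derive one bucket per row from a salted hash of the item.
        for row in range(self.depth):
            digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def increment(self, item, n=1):
        for row, col in self._buckets(item):
            self.rows[row][col] += n

    def estimate(self, item):
        return min(self.rows[row][col] for row, col in self._buckets(item))
```

The memory cost is fixed (width × depth counters) no matter how many distinct values are appended, which is exactly the trade-off described above.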

Which data structures are supported?

Currently the following data structures are supported:

  • HyperLogLog++ to query cardinality of values in the sketch.
  • Count-Min-Log Sketch to query frequency of values in the sketch.
  • Top-K to list the top k values in the sketch.
  • Bloom Filter to query membership of a value in the sketch.
  • Dictionary to 100% accurately query membership and frequency of values in the sketch.
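The membership case from the list above can likewise be illustrated with a minimal Bloom filter in Python. Again, this is a toy under assumed sizes and hash choices, not Skizze's implementation:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: `hashes` salted hash functions over a bit array.

    May return false positives, but never false negatives: anything
    added is always reported as present.
    """
    def __init__(self, size=8192, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))
```

As with the other sketches, memory is fixed up front and queries run in O(1), with the false-positive rate governed by the bit-array size and number of hashes.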

What are the upcoming data structures?

Soon we intend to implement/integrate the following sketches:

How to use?

Skizze runs as a single service for now, and exposes a RESTful API.

Who helped out?

I'd like to thank the following contributors who helped develop this project:

What else?

The project is in an alpha state, and we intend to improve it in every possible way, e.g.:

  • Benchmarks to test data structures against each other.
  • Referencing algorithms, instead of copying into the local source (once Go 1.6's vendoring lands)
  • Storage: currently Skizze writes to disk after n seconds or m operations. Soon I'd like to be able to write only dirty segments of the sketch to disk (in case a sketch is large, e.g. 1 GB).

Feel free to open issues or help out with further spec development of the project on GitHub. All input is appreciated.

December 23, 2015

libsmartcols-bindings 0.0.1

First “Birthday” release of libsmartcols-bindings is here!

Some time ago Karel Zak introduced libsmartcols (a library used for smart adaptive formatting of tabular data) and I really wanted to use it from Python, so I asked for Python bindings to be added. A few days ago I started to work on this on my own because I had some free time. I implemented something working (only for Python), sent a pull request, and then fought with autocraptools and Travis CI. I didn’t finish it because I was getting some weird errors in Travis which I couldn’t reproduce on my laptop, so I just left it as is. Then yesterday I realized that it would be great to get bindings for languages like Perl, Lua, Ruby and others, so I started to work on this in my separate repository (which is not related to util-linux, just bindings for libsmartcols), and now we have a first release with some documentation and three languages more or less supported (look at the examples).

“Birthday” is because today I have birthday ;-)

Hardware accelerated video playing with Totem

Introduction

Several months ago I had hardware problems. To debug this, and because I wanted a small server, I bought an Intel NUC5PPYH. It’s a really small low power PC. Using this I discovered that my hardware troubles weren’t related to my SSD. Since that time I’ve been using the NUC as my main machine. My previous machine had upgraded parts, but the GPU, motherboard and memory all were from around 2007. A slow low power 2015 NUC is somewhat in the same performance range as that 2007 machine (50% slower in some things, faster in others) while using way less power. My previous machine had 2 cores, the new one has 4.

My previous machine already had difficulty with some of the super high quality videos. Depending on the settings, some videos can use a very high amount of CPU. The CPU of the NUC is slightly slower to play videos.

Initially the various GNOME video playing bits crashed when trying to play video. Those crasher bugs got fixed, but Totem never properly played anything. I quickly discovered that mpv didn’t have any problem with hardware accelerated video playing, so I used that and stopped filing bugs.

It helped me to improve the Mageia mpv package: ensuring it had an OSD, was compiled with VAAPI support, etc. Then recently gstreamer-vaapi 0.7.0 was released, which gave me the idea to try Totem again. I’m guessing that change made things work, though I'm not exactly sure.

Hardware accelerated video playing

Two frames from Tears of Steel showing Koen Martens, GUADEC 2010 organizer (together with Vincent and Reinout). First screenshot shows Totem with 6-7% CPU usage. The second shows MPV with 3-4% CPU usage. System Monitor itself uses around 20% CPU. Now that it works nicely, I’m working on having Mageia 6 automatically install VAAPI for you if your system has a similar GPU as mine.

Tears of Steel using Totem – 6-7% CPU
Tears of Steel using mpv – 3-4% CPU

Secret bonus: Polari

If you look at the CPU usage, you’ll not spot something: I’m also running Polari on a different workspace. It previously used 10-20% CPU even when idle. I filed a bug but the developer couldn’t reproduce it. Now I cannot reproduce it either — 0% CPU!! :-D

Alive And Well In Largo

Sadly, this blog has been quiet; the last year has been incredibly busy with changes in technology and high demand for IT services.  So many things have evolved, and I probably should have blogged about them as they were being developed.  This post will try to catch up on the biggest advancements.

Thinner And Fatter Workstation Delivery With NX

User requirements have been changing over the last few years, and it's been an R&D project to find the right balance between centralized computing and mobility.   Our older GNOME desktop servers were wonderful in the sense that you could log in anywhere in the City and obtain all of your software and files.  But using remote Xwindows as the delivery layer required that you log off in one place to log into another.  Increasingly, users wanted to be able to resume sessions.  Remote Xwindows also is not able to handle certain changing technology needs.  Playing a Flash video over Xwindows will very easily grab 600Mb for just one user on your 1Gb network card -- certainly something that will not scale to many hundreds of users.  So the last year-plus has been spent changing the way we deliver software to the workstations.

NX/Nomachine technology is a software layer that is installed on a GNOME desktop server that gives you a highly compressed connection layer that can replace remote Xwindows.  We had used NX technology for many years at our remote sites because it solved the issue of bandwidth very nicely, but did not use it for sites with fiber optic lines.  The changes made in NX4 were very suitable for our requirements, and after a lengthy beta period we are moving users to this technology in ever increasing numbers.   This is how we have become thinner.

NX can use a codec to compress the software layer or it can use what is called "Lightweight Mode".  In our testing, we found using a codec to be very expensive in CPU cycles on a multi-user server.  With a requirement to support 300 or more people, the server would have struggled to keep up.   In all but one use case, Lightweight Mode was able to solve this issue.  CPU loads on both server and workstation are extremely low and response time is crisp and fast.  The one caveat of this mode is that it cannot play many Flash videos inside of Firefox.  Flash can detect latency and will reduce frame rates over remote X and on a stand-alone workstation.  But because NX is running on powerful servers, Flash "sees" lots of bandwidth and CPU cycles and plays with no throttling.  The problem happens when the frames are sent down to the workstation -- even in Lightweight Mode, it just can't keep up.  So everything except videos was working great; how best to solve this?

The concept of running a browser on the local hardware was discussed, and experimentally I installed Firefox into the flash device of our various HP thin clients.  It works, which was expected.  And with a local video card, it plays videos very well.  But with the ever changing versions of Firefox and Flash (it seems like one of them is patched every 1-2 weeks), the update cycles to 600 workstations would not have been pleasant.  So experimentally the concept of launching Firefox over an NFS mount was tested.  When you click on the Firefox icon on the workstation, it NFS-mounts our backend server and starts up a 32-bit version of Firefox/Flash, and it worked as well and as fast as having it stored locally.  When an update comes out, I install it on the server and all of the workstations immediately pick up the change the next time they run the software.  Video playback is fantastic and we're now able to allow users to play HD videos -- something not possible in high numbers over Xwindows.  This is how we became fatter.

New Workstation Model

Our aging HP 5725 and HP 5745 workstations were nearing the end of their duty cycle, and they were not really powerful enough to handle the requirements of running Firefox locally.  A few of the new HP workstations were tested and based on the pricing and performance, we selected the HP t620PLUS.  They are blazingly fast to run the NX client piece, and also can run Firefox very quickly and well.  Money was available to buy about 180 of them, which would replace about 1/3 of our total number of deployed workstations.  So the last few months were spent receiving, unpacking, and deploying them to the users that needed them the most.  Feedback has been positive so far on these workstations and they are working very well.

Old Workstation Retrofit

400 or so older workstations will not be upgraded for another 12 months because of funding, so time was spent optimizing them to run with these latest advancements.  A workstation build was created that is identical to the one on the t620PLUS model in appearance.  The 5725 model cannot run a local browser, it's just too slow.  But it connects nicely to the NX GNOME server and performance is better than it was with Xwindows.  The 5745 model can run Firefox locally -- not as well as the new workstations, but well enough that it works and videos do work and play.  When users move around through the City on these three models, they look and work almost the same in all regards.  In the coming 30-60 days, these 400 will be moved off of Xwindows and over to NX.

ChromeBook Testing

NX supports Linux, Mac, MS Windows and tablets via a client piece.  All have been tested and are in use in various parts of the City.  Another login method is available.   NX supports logging in with just a browser.  I have been testing this with a Chromebook with success -- it's very fast and all of our software runs well.  The prospect of being able to have a mobile solution while using $250 devices with a laptop footprint is very attractive and potentially will offer a great amount of dollar savings.  I'll blog more about this in the coming weeks.

LibreOffice

Yup, we're still using LibreOffice!  Many thousands of documents a day are touched with this software and it does the job nicely.   The QA guys have been wonderful, and helped teach me how to bibisect bugs.  When a bug is found here that impacts us greatly, I can now do the leg work to find the regression and the developers have been wonderful in creating patches quickly.  About 200 users are now using version 5.0 with no known issues.  The rest of the users will be migrated in the coming weeks as part of receiving upgraded workstation builds and being migrated to the new GNOME server.

Firefox Delivery

In the past, we had a server running GNOME and when a user clicked on Firefox, it handed that process off to another server and Firefox then remote displayed back to the workstation.  This met our needs for many years.  When using NX as the transport however, having Firefox running on its own server meant that there was an Xwindow hop in the middle.  Because of the network hungry nature of Firefox, this application was moved and now runs directly on the same server as GNOME/NX.  This gives Firefox direct access to the NX/Xserver with no hop in the middle.  Firefox therefore is very much faster, scrolling and typing is far superior.  This also meant that our scaling and loads have changed and required tuning and in the coming weeks some load balancing.  The server version of Firefox is used for all aspects of user requirements, except for video playback which is now handled by launching the Firefox version found on the local workstation. 



In Progress Projects

A lot of the ongoing projects have been mentioned in these prior paragraphs.  My top action items in the coming weeks:

* Continue moving more users to NX technology
* Tune and monitor the servers as the user loads increase
* Upgrade the NX4 technology to NX5
* Install and deploy the NX Cloud server piece, so users can log in with web browsers
* Add a second NX node, so that we have load balancing and can increase user counts
* Work on a project to allow embedding of YouTube videos into LibreOffice for our employees, and return the source to the community
* Continue working on our in-house support software and adding various features that have been scheduled.

Very kind regards to all of the people that ask me about our deployment even after all of these years.  It's all still working, and continues to provide significant cost savings.

GNOME Software and xdg-app

Here’s a little Christmas present. This is GNOME Software working with xdg-app to allow live updates of apps and runtimes.

Screenshot from 2015-12-22 15-06-44

This is very much a prototype and needs a lot more work, but it seems to work for me with xdg-app from git master (compile with --enable-libxdgapp). If any upstream projects needed any more encouragement: not including an AppData file means the application gets marked as nonfree, as we don’t have any project licensing information. Inkscape, I’m looking at you.

December 22, 2015

Conservancy's Year In Review 2015

If you've noticed my blog a little silent the past few weeks, I've been spending my blogging time in December writing blogs on Conservancy's site for Conservancy's 2015: Year in Review series.

So far, these are the ones that were posted:

Generally speaking, if you want to keep up with my work, you probably should subscribe not only to my blog but also to Conservancy's. I tend to crosspost the more personal pieces, but if something is purely a Conservancy matter and doesn't relate to usual things I write about here, I don't crosspost.

Mono's Cooperative Mode for SGen GC

Mono's master tree now contains support for a new mode of operation for our garbage collector, we call this the cooperative mode. This is in contrast with the default mode of operation, the preemptive mode.

This mode is currently enabled by setting the MONO_ENABLE_COOP environment variable.

We implemented this new mode of operation to make it simpler to debug our GC, to give us access to more runtime data during collections, and to support certain platforms that do not provide the APIs that our preemptive system needed.

Behind Preemptive Mode

When we started building Mono back in 2001, we wanted to get something up and running very quickly. The idea was to have enough of a system running on Linux that we could have a fully self-hosting C# environment in a short period of time, and we managed to do this within eight months.

We were very lucky when it came to garbage collection that the fabulous Boehm GC existed. We were able to quickly add garbage collection to Mono, without having to think much about the problem.

Boehm is fabulous because it does not really require the cooperation of the runtime to work. It is a garbage collector that was originally designed to add garbage collection capabilities to programs written in C or C++. It performs garbage collection without much developer intervention. And it achieves this for existing code: multi-threaded, assembly-loving, low-level code.

Boehm GC is a thing of beauty.

Boehm achieves its magic by pulling some very sophisticated low-level tricks. For example, when it needs to perform a garbage collection it relies on various operating system facilities to stop all running threads, examine the stacks of all these threads to gather roots, perform the actual GC job, and then resume the operation of the program.

While Boehm is fantastic, in Mono we had needs that would be better served with a custom garbage collector: one that was generational and reduced collection times, and one that fit more closely with .NET. It was then that we built the current GC for Mono: SGen.

SGen has grown by leaps and bounds and has been key in supporting many advanced scenarios on Android and iOS as well as being a higher performance and lower latency GC for Mono.

When we implemented SGen, we had to make some substantial changes to Mono's code generator. This was the first time that Mono's code generator had to coordinate with the GC.

SGen kept a key feature of Boehm: most running code was blissfully unaware that it could be stopped and resumed at any point.

This meant that we did not have to do too much work to integrate SGen into Mono [1]. There are two main downsides with this.

The first downside is that we still required the host platform to support some mechanism to stop, resume and inspect threads. This alone is pretty obnoxious and caused much grief to developers porting Mono to strange platforms.

The second downside is that code that runs during the collection is not really allowed to use many of the runtime APIs or primitives, because the collector might be running in parallel to the regular code. You can only use reentrant code.

This is a major handicap for development and debugging of the collector. One that is just too obnoxious to deal with and one that has wasted too much of our time.

Cooperative Mode

In the new cooperative mode, the generated code is instrumented to support voluntarily stopping execution.

Conceptually, you can think of the generated code as checking, on every back-branch and at every call site, whether the collector has requested the thread to stop.

The supporting Mono runtime has been instrumented as well to deal with this scenario. This means that every API that is implemented in the C runtime has been audited to determine whether it can run in a finite amount of time, or if it is a blocking operation and adjusted to participate accordingly.

For methods that run in a finite amount of time, we just wait for them to return back to managed code, where we will stop.

For methods that might potentially block, we need to add some annotations that inform our GC that it is safe to assume that the thread is not running any mutating code. Consider the internal call that implements the CreateDirectory method. It now has been decorated with MONO_PREPARE_BLOCKING and MONO_FINISH_BLOCKING to delimit blocking code.

This means that threads do not stop right away as they used to, but they stop soon enough. And it turns out that soon enough is good enough.
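To make the idea concrete, here is a toy model in Python (purely illustrative; Mono's real safepoints live in generated native code): a worker polls a flag at loop back-edges and parks itself, letting a "collector" thread inspect its state safely.

```python
import threading
import time

stop_requested = threading.Event()  # set by the "collector"
parked = threading.Event()          # set by the worker once it reaches a safepoint
resume = threading.Event()
done = threading.Event()
counter = 0

def safepoint():
    # Called on every back-branch: park here if a stop was requested.
    if stop_requested.is_set():
        parked.set()
        resume.wait()

def worker():
    global counter
    while not done.is_set():
        counter += 1
        if counter % 1000 == 0:  # poll cheaply, not on every single iteration
            safepoint()

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)
stop_requested.set()   # ask the world to stop...
parked.wait()          # ...and wait until the worker reaches a safepoint
snapshot = counter     # the worker is parked, so this read is race-free
stop_requested.clear()
done.set()
resume.set()
t.join()
assert snapshot > 0 and snapshot % 1000 == 0  # it only ever parks at a safepoint
```

The worker does not stop the instant the flag is raised, only at its next safepoint, which is exactly the "soon enough" behavior described above.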

This has a number of benefits. First, it allows us to support platforms that do not have enough system primitives to stop, resume and examine arbitrary threads. Those include things like the Windows Store, WatchOS and various gaming consoles.

But selfishly, the most important thing for us is that we will be able to treat the garbage collector code as something that is a first class citizen in the runtime: when the collector works, it will be running in such a state that accessing various runtime structures is fine (or even using any tasty C libraries that we choose to use).

Today

As of today, Mono's Coop engine can either be compiled in by default (by passing --with-cooperative-gc to configure), or enabled at runtime by setting the MONO_ENABLE_COOP environment variable to any value.

We have used a precursor of Coop for about 18 months, and now we have a fully productized version of it on Mono master and we are looking for developers to try it out.

We are hoping to enable this by default next year.

[1] Astute readers will notice that it still took years of development to make SGen the default collector in Mono.

CSS boxes in GTK+

In my last update, I talked about CSS nodes in GTK+, which are used to match theme style information to widgets, and to hold state that is needed to handle e.g. animations.

Today, I’ll focus on CSS boxes in GTK+. This is where we take size information from the theme, such as margins, padding, and minimum sizes, and apply them to the widget layout. The internal name we’ve chosen for the objects that encapsulate this information is gadgets . I’m well aware that we’re not exactly breaking new ground with this name, but the name isn’t really important (as none of this is currently exposed as public API). For the purpose of this post, you can think of gadgets simply as CSS boxes that make up widgets.

Lets start with a simple example widget, to see what this is about: a radio button (all of the screenshots here are showing GtkInspector, which is available in any GTK+ application, so you can easily do these experiments yourself).

radiobutton1
We start by making the border of the widget’s box visible. The CSS snipplet shown below is using the element name radiobutton, which GtkRadioButton sets on its main CSS node, so this selector will match.

Radio button
This is how it looks. If you compare carefully with the earlier screenshot, you can see that GTK+ has made the widget bigger to make room for the 6 pixel border, while giving the same size as before to the content of the widget.

Radiobutton
The CSS box model has more than just a border, it also allows for padding (inside the border) and margins (outside the border). In fact, the box model is much more complicated than this, and we’re only scraping the surface here. The GTK+ implementation of the box model handles most of the important parts, but doesn’t cover some things that don’t fit well in our situation.

So, lets add a margin and padding. To make this visible, we use some features of CSS background rendering: We specify two backgrounds (a solid blue one and a solid yellow one), and we let one of it be clipped to the size of the ‘content box’ (the area given to the widget content) and one to the size of the ‘padding box’ (which is the content area plus the padding around it).

Radiobutton
This is how it looks. The margin is not very visible here, but you can clearly see the padding (yellow) and the content (blue). Note again how the widget has gotten larger to accommodate the padding and margin.

Radiobutton
When I talked about CSS nodes, I mentioned how widgets can have extra nodes for their ‘components’. This extends to gadgets: each of the widgets components gets their own CSS box, with its own margin, padding, background and whatnot.

So, lets continue the coloring exercise by making our CSS snipplet match not just the radiobutton itself, but also the label and the indicator. Their CSS names are label and radio.

Radiobutton
This is how it looks. Here, we can actually see the margin of the label have an effect: it causes the content area (in blue) to be bigger than it would otherwise be.

Radiobutton
I hope by now it is obvious that this is giving a lot of expressive power to theme authors, who can use all of this to affect the layout and rendering of all the widgets and their components. And there are a lot of them:

Boxes
The GtkInspector is a really useful tool for exploring what is possible with all of this. It gives easy access to both the CSS nodes of each widget:

Nodes
…and to the CSS properties of each node, which is basically the outcome of matching the theme CSS information to the node tree, also taking into account inherited properties and other complications:

CSS properties

Whats next ? All of what I’ve shown here  is already available in GTK+ 3.19.5. We’ve made good progress on converting most widgets to use gadgets internally (‘progress’ is a bit of an understatement, it was a herculean effort, mainly by Benjamin Otte, Cosimo Cecchi and myself). But we are not done yet, so we will continue working on completing the conversion, and on documenting GTKs CSS capabilities better.

When that is done,  we can look at adding other interesting bits of the CSS spec, like calc() or border collapsing. CSS is a huge standard, and theme designers always ask for more.

December 21, 2015

GPL enforcement is a social good

The Software Freedom Conservancy is currently running a fundraising program in an attempt to raise enough money to continue funding GPL compliance work. If they don't gain enough supporters, the majority of their compliance work will cease. And, since SFC are one of the only groups currently actively involved in performing GPL compliance work, that basically means that there will be nobody working to ensure that users have the rights that copyright holders chose to give them.

Why does this matter? More people are using GPLed software than at any point in history. Hundreds of millions of Android devices were sold this year, all including GPLed code. An unknowably vast number of IoT devices run Linux. Cameras, Blu Ray players, TVs, light switches, coffee machines. Software running in places that we would never have previously imagined. And much of it abandoned immediately after shipping, gently rotting, exposing an increasingly large number of widely known security vulnerabilities to an increasingly hostile internet. Devices that become useless because of protocol updates. Toys that have a "Guaranteed to work until" date, and then suddenly Barbie goes dead and you're forced to have an unexpected conversation about API mortality with your 5-year old child.

We can't fix all of these things. Many of these devices have important functionality locked inside proprietary components, released under licenses that grant no permission for people to examine or improve them. But there are many that we can. Millions of devices are running modern and secure versions of Android despite being abandoned by their manufacturers, purely because the vendor released appropriate source code and a community grew up to maintain it. But this can only happen when the vendor plays by the rules.

Vendors who don't release their code remove that freedom from their users, and the weapons users have to fight against that are limited. Most users hold no copyright over the software in the device and are unable to take direct action themselves. A vendor's failure to comply dooms them to having to choose between buying a new device in 12 months or no longer receiving security updates. When yet more examples of vendor-supplied malware are discovered, it's more difficult to produce new builds without them. The utility of the devices that the user purchased is curtailed significantly.

The Software Freedom Conservancy is one of the only organisations actively fighting against this, and if they're forced to give up their enforcement work the pressure on vendors to comply with the GPL will be reduced even further. If we want users to control their devices, to be able to obtain security updates even after the vendor has given up, we need to keep that pressure up. Supporting the SFC's work has a real impact on the security of the internet and people's lives. Please consider giving them money.

comment count unavailable comments

Stallman on happiness and perseverance

What does happiness signify to you, I asked him, if it isn’t based on wealth and comfort?

“Happiness for me is a combination of feeling good about myself and having love,” he said. “And to feel good about myself, I have to do things that convince me I deserve it.”

(…)

“The point is, even though it’s sad to see people being foolish, there’s no use giving up. Nothing good can come of giving up. That just means you lose completely, right away.”

—Richard Stallman on the 30th anniversary of the GNU Manifesto.

It’s amazing to think that a broken printer lead to the creation of the Free Software movement which, many years later, would give me a professional career, an education, and incredible friends around the world.

Which reminds me I still keep the sticker Richard personally handed at the end of his first talk in Perú, back in August 2003:

DSCF9847-web

GNU & Linux: the dynamic duo.

Bonus: Enjoy The Free Software Song by The GNU/Stallmans.

December 19, 2015

Yay! It begins here... :)


Hello Pals!

Here I am, with my very first blog post to share my journey towards contribution in FOSS Projects. I felt elated at knowing that I am selected for Outreachy internship. Knowing that, the next three months will involve heck lot of coding, IRCing and learning cool stuff, I became freaking excited. The best part about it is that you can survive the project following any time zone, any place. Yes night owls, this one is for you. :p

Outreachy is an amazing opportunity provided by GNOME along with various other FOSS organizations for passionate women who want to engage themselves in doing exciting, varied and valuable work in FOSS projects. It involves working on a project for 3 months and this time the internship duration has began from December, 7 . I have given project work updates of the last two weeks, in the latter section.

I am thankful to my mentor, Jonas Danielsson for supporting me throughout the process. Be it making the initial contribution or bearing with my silly doubts, he has always been there. :p I am also grateful to my organization, GNOME for making me a part of it.

Getting Involved

The flourishing culture of open source programming in my university, with lots of people getting selected for GSOC every next year, inspired me to get involved myself in the same arena. I looked for the process on the web and also the various organizations to which one can contribute. I found applications of organization GNOME interesting and also, I could match up with the skills required (in terms of programming language for code understanding) to start contributing to it. This link is quite useful for newbies.

Next exciting thing I got acquaint with, is IRC client (Internet Relay Chat) where one can join various channels associated with multiple applications (For example - #gnome-games, #photos, #gnome-maps and many more) and can seek the help for a head start. Believe me, there exists a cool set of people hanging around in those channels who are willing to help you at every tiny step. I started lurking in IRC and was given a suggestion to make a little application of my own for a start. At that time I was working on minesweeper algorithm, therefore made the same game using GTK and C. GTK is the primary library used to construct user interfaces in GNOME applications. It provides user interface controls and signal callbacks to control user interfaces. Here's the link to the github repository and following is the screenshot of the game made by me.


What's next?

I went through the list of various applications of GNOME and the bugs attached to them in bugzilla. For dealing with the bugs, one needs to build the application first. That indeed requires setting up jhbuild, a command-line program which automates downloading, building and running the latest source code for GNOME programs. As per my experience with jhbuild or rather I would call it demon, it requires a lot of patience and time to free yourself off those web of errors, you get in between. And StackOverFlow and IRC fellas act as best companions at that time. I remember, I struggled for a complete night and half a day with those dependencies errors. But at the end of it, was able to build the first application i.e. photos. :)

Following are the links which helped me a lot in dealing with the demon.
https://wiki.gnome.org/Newcomers/JhbuildIntroduction
https://wiki.gnome.org/HowDoI/Jhbuild

And one more part, I was guided by IRC fellas to switch my distro from Ubuntu to Fedora when i was caught with dependencies errors. The reason being that Fedora ships the very latest development version of GNOME as compared to Ubuntu which is often one version behind with GNOME. The switch actually made my life little easier  :)

Later on I started following gnome games. Playing those games, bugging-debugging them became my past-time.The first bug-fix made me freaking happy. It feels so awesome when little tweaks (though meaningful) made by you in the code are pushed to the master branch. Michael Catanzaro ,my games mentor guided me a lot in gearing up with more critical bugs.

Project Updates

I applied for Outreachy program through project of GNOMEMaps application. GNOME Maps is a simple maps application which fetches data from OpenStreetMap. It is written in javascript using Gjs bindings.
 
Following is the link to my Outreachy application, which contains the log of bugs I have solved and also the project details.
https://drive.google.com/file/d/0B6JC08767mluVWRsQmctYlJ2Njg/view?usp=sharing

Print Support for routes in Gnome-Maps



 Requirement of the issue is that one should be able to print the route, alongside the map. Following is the mockup by the design team to achieve the same.

The project will be done as per the following division:
  • Learning the printing API doing simple non-related printing operations and also understanding how printing works in GTK+.
  • Printing basic Map View.
  • Understanding Cairo.
  • Layout the route in a printable way as a GTK UI file.
  • Layout the route in Cairo.
  • Add print UI. 
  • Testing

In the last week, I read about GTK Printing API and Cairo library provided by GNOME. It helped me in getting a basic print of the Map View. Here is link of the video : https://youtu.be/p92e5rcyM3I

So pals, Stay tuned for more updates. Hope you liked it :)
        Cheers,
          Amisha




December 18, 2015

On digital engagement

I recently found an article I'd saved from last year's EDUCAUSE Magazine, on Setting the Stage for Digital Engagement: A Five-Step Approach. The Fall 2014 article advocates that now is the time for higher ed institutions to "build an institutional structure for digital engagement" and to do so "smartly and creatively." It's an interesting read, moreso because it introduces concepts from Usability testing for higher ed to create more engaging and interactive online systems.

I wanted to highlight the five steps listed in the article:

1. Understand the environment
It's all about understanding the users. If you have worked in Usability testing, you will recognize the first step as crafting sample Personas for your products. From the article: "Developing personas creates an opportunity to build with the end in mind by creating a shared understanding of who the product or service is for. " And the article includes a sample Persona for "Elena," to demonstrate the method.
2. Position the institution for digital success
This step highlights the execution of a digital strategy, from finding "the right people to advance core digital initiatives" to embracing "matrixed reporting and project teams." Digital engagement must become a method, so incorporate it into project planning.
3. Develop a product management mindset and approach
From the article: "Product management in higher education has often been associated with stewarding a large vendor's product through the enterprise, perhaps an HR performance management tool or a student information system. Today that product management concept needs to extend to embrace new types of properties, such as high-frequency publishing websites built on open-source software." Organizations must instill a cultural value in favor of digital engagement.
4. Champion user experience
I view usability and user experience as related, but different. Usability is about getting something done; user experience is about the user's emotional impression. Lots of things can affect the emotional experience, including colors, fonts, and images or icons.
5. Prepare for the next wave of digital and social engagement
Engagement doesn't stop with the application; in a modern enterprise that must respond to user demand, digital engagement also means user interaction in digital media. The article advises: "Find the local social media leaders and empower them. All colleges and universities today have people who are passionate about their work and who are blogging, tweeting, and/or posting. … Find such people, support them, and connect them to others within the institution."
photo: Ged Carroll

BBC Radio’s adaptation of Isaac Asimov’s Foundation trilogy

Other than for self-improvement, I’m not a big fan of books (nor podcasts) in general, because of the big time investment required. THIS, however, is such an amazing masterpiece of a radio adaptation that I can heartily recommend it to anyone who has a good grasp of spoken British English (it was produced over fourty years ago by the BBC, after all). After the first episode, I was hooked, and ate through the entire série in a week or two. I found it best listened to while relaxing, with eyes closed to immerse yourself in the intergalactic drama at play.

asimov_foundation_trilogy_covers-small

I’ll smugly say I foresaw a couple of the plot twists (including a big part of the chapters concerning the Mule), but Asimov kept surprising me otherwise.

Besides having very talented voice actors give life to what might otherwise be a bit of a dry story for non-sci-fi connaisseurs, it turns out that the radio adaptation has a special segment about the life of farmers on Rossem. That segment is absolutely hilarious, contrasting heavily with the doomy & gloomy nature of the whole series. It is also fairly philosophical, touching on the question of life fulfilment. The exchange between Pritcher and the Mule, after talking with those farmers, was a great emotional portrayal: you could actually feel perplexity and doubt in Pritcher’s voice, and shock and urgency in the Mule’s.

December 17, 2015

My first post as an Outreachy intern

Hi!

My name is Jordana Luft, I am an undergraduate student of computer engineering at Federal University in the city of Pelotas, south of Brazil. This is my first post in a series of posts that I intend to write from December/2015 to March/2016, related to my experiences and work during the internship in Outreachy program.

I’m so happy to be writing about this amazing program which helps underrepresented groups to get involved in the free and open source software.

In order to participate in such a program you just need to find a mentor and make a little contribution in a open source project that is participating in Outreachy, such as GNOME, Mozilla and Linux Kernel. During the applications process I got a lot of help and support from the community and especially from Felipe Borges, who will be my mentor during all the internship.

I’m extremely happy and thankful for been accepted in the program! This really is a great opportunity for me and I can’t hardly wait for all the stuff that I’ll be learning and all the people that I will be meeting during the program.

Over these 3 months I’ll be working on a playlist widget for gnome-music. This will improve user experience since we will be able to see the current playlist.

This is a mockup made by the community for this project:
Mockup

You can watch my progress in this blog, and also in bugzilla.

In the next week, I will improve my GTK knowledge making a stand-alone version of this widget.

That’s all for now, see you in the next post!

xdg-app christmas update

Yesterday I released xdg-app 0.4.6 and I wanted to take some time to talk about what is new in this version what is happening around xdg-app.

libxdg-app and gnome-software integration

In the release, but disabled by default, is a new library called “libxdg-app”. It is intended for applications that want to present a user interface for managing xdg-app applications. We’re working on integrating this with gnome-software so that we can have graphical installation and updating of applications. This is work in progress, and the APIs are not yet stable, but it is very important progress that we will continue working on in the near future.

New xdg-app-builder tool

The basics of how to bundle and application with xdg-app is very simple. You initialize an application directory with build-init. For example:

$ xdg-app build-init appdir 
          org.example.ExampleApp 
          org.gnome.Sdk org.gnome.Platform 3.18

This gives you an place where you can both run the build, and store the application being build. Typically you then go to your source directory and run something like:

$ xdg-app build appdir ./configure --prefix=/app
$ xdg-app build appdir make
$ xdg-app build appdir make install

At this point the application is mostly done, but you need to run build-finish in order to export things like desktop files and icons as well as configure some application metadata and permissions, and then export the directory to an ostree repository that your users can install it from:

$ xdg-app build-finish appdir
     --command=run-example --socket=x11
     --share=network --filesystem=host
$ xdg-app build-export appdir /path/to/repo

This is pretty easy, as long as all the tools you need to build your app are in the sdk, and all the dependencies the app needs are in the runtime. However, most apps need a few extra dependencies, which was a large pain point for  people experimenting with xdg-app.

I decided to write a tool that automates this, and thus xdg-app-builder was born. It builds on experience from the Gnome continuous integration system and the nightly xdg-app build work that I did a while ago. Its based on the build-api proposal from Colin Walters, and the idea is to push as much build-knowledge upstream as possible, so that all you need to do is list your dependencies.

Here is an example json manifest that describes the above steps, plus adds a dependency:

{
  "app-id": "org.example.ExampleApp",
  "version": "master",
  "runtime": "org.gnome.Platform",
  "runtime-version": "3.18",
  "sdk": "org.gnome.Sdk",
  "command": "run-example",
  "finish-args": ["--socket=x11", 
                  "--share=network", 
                  "--filesystem=host" ],
  "build-options" : {
    "cflags": "-O2 -g",
    "env": {
        "V": "1"
    }
  },
  "cleanup": ["/include", "*.a"],
  "modules": [
    {
      "name": "some-dependency",
      "config-opts": [ "--disable-something" ],
      "cleanup": [ "/bin" ],
      "sources": [
        {
          "type": "archive",
          "url": "http://someting.org/somethinbg-1.0.tar.xz",
          "sha256": "93cc067b23c4ef7421380d3e8bd7c940b2027668446750787d7c1cb42720248e"
         }
       ]
    },
    {
      "name": "example-app",
      "sources": [
        {
          "type": "git",
          "url": "git://git.gnome.org/gimp"
        }
      ]
    }
  ]
}

In addition to just building things this will also automatically download tarballs and pull git/bzr repos and clean up and strip things after install. It even has a caching system so that any module that did not change (in the manifest, or in the git repos) will have the results taken from the cache on consecutive builds, rather than rebuilding.

Some people have started using this, including the pitivi and glom developers, and I’ve converted the existing nightly builds of gimp and inkscape to use this instead of the custom scripts that was used before. If you’re interested in playing with xdg-app-builder those links should give you some examples to work from. There is also pretty complete docs in the manpages for xdg-app-builder.

Updated nightly builds

As I mentioned above the nightly builds were converted to xdg-app-builder, but I have also extended the set of builds with Darktable, MyPaint and Scribus, in addition to the old Gimp and Inkscape builds. The scribus build have some issues which I don’t understand (help needed), but the others seem to work well.

If you’re interested in using these, take a look at https://wiki.gnome.org/Projects/SandboxedApps/NightlyBuilds which has instructions on how to get builds of xdg-app for your distro and how to use it to test the nightly builds.

Updated runtime and sdk

Since more people have started testing the Gnome runtimes I’ve fixed quite a few issues that were found in them, as well as added some new tools to the sdk. If you installed the old one, make sure to update it.

Upcoming work

The basic functionality of xdg-app is pretty much there, at least for non-sandboxed applications. The main focus of the work right now is to finish the integration with gnome-software. But after that I will return to work on sandboxing, finishing the work on the file chooser portal and the other APIs required to run apps in a sandboxed fashion.

Improving disk I/O performance in QEMU 2.5 with the qcow2 L2 cache

QEMU 2.5 has just been released, with a lot of new features. As with the previous release, we have also created a video changelog.

I plan to write a few blog posts explaining some of the things I have been working on. In this one I’m going to talk about how to control the size of the qcow2 L2 cache. But first, let’s see why that cache is useful.

The qcow2 file format

qcow2 is the main format for disk images used by QEMU. One of the features of this format is that its size grows on demand, and the disk space is only allocated when it is actually needed by the virtual machine.

A qcow2 file is organized in units of constant size called clusters. The virtual disk seen by the guest is also divided into guest clusters of the same size. QEMU defaults to 64KB clusters, but a different value can be specified when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G

In order to map the virtual disk as seen by the guest to the qcow2 image in the host, the qcow2 image contains a set of tables organized in a two-level structure. These are called the L1 and L2 tables.

There is one single L1 table per disk image. This table is small and is always kept in memory.

There can be many L2 tables, depending on how much space has been allocated in the image. Each table is one cluster in size. In order to read or write data to the virtual disk, QEMU needs to read its corresponding L2 table to find out where that data is located. Since reading the table for each I/O operation can be expensive, QEMU keeps a cache of L2 tables in memory to speed up disk access.
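To make the two-level mapping concrete, here is a small sketch in Python (illustrative arithmetic only, not QEMU code), using the fact that each L2 table is one cluster of 8-byte entries:

```python
def qcow2_indices(offset, cluster_size=64 * 1024):
    """Map a guest disk offset to its (L1 index, L2 index).
    Each L2 table is one cluster and holds cluster_size / 8 entries,
    each an 8-byte pointer to one data cluster."""
    entries_per_l2 = cluster_size // 8          # 8192 with 64 KB clusters
    guest_cluster = offset // cluster_size
    return guest_cluster // entries_per_l2, guest_cluster % entries_per_l2

# With 64 KB clusters, one L2 table maps 8192 * 64 KB = 512 MB of disk.
print(qcow2_indices(0))                # (0, 0)
print(qcow2_indices(512 * 1024**2))    # (1, 0) -- first offset past table 0
```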

The L2 cache can have a dramatic impact on performance. As an example, here’s the number of I/O operations per second that I get with random read requests in a fully populated 20GB disk image:

L2 cache size Average IOPS
1 MB 5100
1,5 MB 7300
2 MB 12700
2,5 MB 63600

If you’re using an older version of QEMU you might have trouble getting the most out of the qcow2 cache because of this bug, so either upgrade to at least QEMU 2.3 or apply this patch.

(In addition to the L2 cache, QEMU also keeps a refcount cache. This is used for cluster allocation and internal snapshots, but I’m not covering it in this post; please refer to the qcow2 documentation if you want to know more about refcount tables.)

Understanding how to choose the right cache size

In order to choose the cache size we need to know how it relates to the amount of allocated space.

The amount of virtual disk that can be mapped by the L2 cache (in bytes) is:

disk_size = l2_cache_size * cluster_size / 8

With the default cluster_size of 64KB (65536 bytes) that is

disk_size = l2_cache_size * 8192

So in order to have a cache that can cover n GB of disk space with the default cluster size we need

l2_cache_size = disk_size_GB * 131072

QEMU has a default L2 cache of 1MB (1048576 bytes) so using the formulas we’ve just seen we have 1048576 / 131072 = 8 GB of virtual disk covered by that cache. This means that if the size of your virtual disk is larger than 8 GB you can speed up disk access by increasing the size of the L2 cache. Otherwise you’ll be fine with the defaults.
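As a quick check of that arithmetic, the formula can be evaluated directly in the shell (the 20 GB figure is the example image size used in the benchmark above):

```shell
# Compute the L2 cache size needed to cover a whole image, assuming the
# default 64 KB cluster size (131072 bytes of cache per GB of virtual disk).
disk_size_gb=20
l2_cache_size=$((disk_size_gb * 131072))
echo "${l2_cache_size}"
```

For the 20 GB image from the benchmark this gives 2621440 bytes (2.5 MB), which matches the cache size at which the IOPS numbers in the table above peak.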

How to configure the cache size

Cache sizes can be configured using the -drive option on the command line, or the ‘blockdev-add’ QMP command.

There are three options available, and all of them take a size in bytes:

  • l2-cache-size: maximum size of the L2 table cache
  • refcount-cache-size: maximum size of the refcount block cache
  • cache-size: maximum size of both caches combined

There are two things that need to be taken into account:

  1. Both the L2 and refcount block caches must have a size that is a multiple of the cluster size.
  2. If you only set one of the options above, QEMU will automatically adjust the others so that the L2 cache is 4 times bigger than the refcount cache.

This means that these three options are equivalent:

-drive file=hd.qcow2,l2-cache-size=2097152
-drive file=hd.qcow2,refcount-cache-size=524288
-drive file=hd.qcow2,cache-size=2621440
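The 4:1 ratio behind that equivalence is easy to verify with shell arithmetic, using the numbers from the example above:

```shell
# Given only l2-cache-size, QEMU sizes the refcount cache at a quarter of
# it; cache-size is then the sum of the two caches.
l2=2097152
refcount=$((l2 / 4))
total=$((l2 + refcount))
echo "refcount=${refcount} total=${total}"
```

This prints refcount=524288 total=2621440, matching the three equivalent -drive lines.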

Although I’m not covering the refcount cache here, it’s worth noting that it’s used much less often than the L2 cache, so it’s perfectly reasonable to keep it small:

-drive file=hd.qcow2,l2-cache-size=4194304,refcount-cache-size=262144

Reducing the memory usage

The problem with a large cache size is that it obviously needs more memory. QEMU has a separate L2 cache for each qcow2 file, so if you’re using many big images you might need a considerable amount of memory if you want to have a reasonably sized cache for each one. The problem gets worse if you add backing files and snapshots to the mix.

Consider this scenario, with two images forming a backing chain (hd0 ← hd1):

Here, hd0 is a fully populated disk image, and hd1 a freshly created image as a result of a snapshot operation. Reading data from this virtual disk will fill up the L2 cache of hd0, because that’s where the actual data is read from. However hd0 itself is read-only, and if you write data to the virtual disk it will go to the active image, hd1, filling up its L2 cache as a result. At some point you’ll have in memory cache entries from hd0 that you won’t need anymore because all the data from those clusters is now retrieved from hd1.
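A chain like this can be created with qemu-img. This is a sketch with hypothetical file names, not commands taken from the post:

```shell
# hd0.qcow2 becomes the (eventually read-only) backing file; hd1.qcow2 is
# the active overlay that receives all new writes.
qemu-img create -f qcow2 hd0.qcow2 20G
qemu-img create -f qcow2 -o backing_file=hd0.qcow2,backing_fmt=qcow2 hd1.qcow2
```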

Let’s now create a new live snapshot, putting a third image on top of the chain (hd0 ← hd1 ← hd2):

Now we have the same problem again. If we write data to the virtual disk it will go to hd2 and its L2 cache will start to fill up. At some point a significant amount of the data from the virtual disk will be in hd2, however the L2 caches of hd0 and hd1 will be full as a result of the previous operations, even if they’re no longer needed.

Imagine now a scenario with several virtual disks and a long chain of qcow2 images for each one of them. See the problem?

I wanted to improve this a bit, so I worked on a new setting that allows the user to reduce memory usage by removing cache entries that are not being used.

This new setting is available in QEMU 2.5, and is called ‘cache-clean-interval’. It defines an interval (in seconds) after which all cache entries that haven’t been accessed are removed from memory.

This example removes all unused cache entries every 15 minutes:

-drive file=hd.qcow2,cache-clean-interval=900

The default value of this parameter is 0, which disables the feature.

Further information

In this post I only intended to give a brief summary of the qcow2 L2 cache and how to tune it in order to increase the I/O performance, but it is by no means an exhaustive description of the disk format.

If you want to know more about the qcow2 format, here are a few links:

Acknowledgments

My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the invaluable help of the QEMU development team.

Enjoy QEMU 2.5!

Playing with xdg-app for PrefixSuffix and Glom

xdg-app lets us package applications and their dependencies together for Linux, so a user can just download the application and run it without either the developer or the user worrying about whether the correct versions of the dependencies are on the system. The various “runtimes”, such as the GNOME runtime, get you most of the way there, so you might not need to package many extra dependencies.

I put a lot of work into developing Glom, but I could never get it in front of enough non-technical users. Although it was eventually packaged for the main Linux distros, such as Ubuntu and Fedora, those packages were almost always broken or horribly outdated. I’d eagerly fix bugs reported by users, only to wait 2 years for the fix to get into a Linux distro package.

At this point, I probably couldn’t find the time to work more on Glom even if these problems went away. However, I really want something like xdg-app to succeed so the least I could do is try it out. It was pleasantly straightforward and worked very well for me. Alexander Larsson was patient and clear whenever I needed help.

xdg-app-builder

I first tried creating an xdg-app package for PrefixSuffix, because it’s a very simple app, but one that still needs dependencies that are not in the regular xdg-app GNOME runtime, such as gtkmm, glibmm and libsigc++.

I used xdg-app-builder, which reads a JSON manifest file that lists your application and its dependencies, telling xdg-app-builder where to get the source tarballs and how to build them. Wisely, it assumes that each dependency can be built with the standard configure/make steps, but it also has support for CMake and lets you add in dummy configure and Makefile files. xdg-app-builder’s documentation is here, though I really wish the built HTML was online so I could link to it instead.

Here is the manifest.json file for PrefixSuffix. I can run xdg-app-builder with that manifest, like so:

xdg-app-builder --require-changes ../prefixsuffix-xdgapp manifest.json

xdg-app-builder then builds each dependency in the order of its appearance in the manifest file, installing the files in a prefix in that prefixsuffix-xdgapp folder.

You also need to specify contexts in the “finish-args” though they aren’t explicitly called contexts in the manifest file. For instance, you can give your app access to the network subsystem or the host filesystem subsystem.
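To give an idea of the overall shape, a minimal manifest might look something like this. This is a hypothetical sketch, not the real PrefixSuffix manifest: the app id is the one used in this post, but the runtime version, module list, URL and checksum are illustrative placeholders:

```json
{
  "app-id": "io.github.murraycu.PrefixSuffix",
  "runtime": "org.gnome.Platform",
  "runtime-version": "3.18",
  "sdk": "org.gnome.Sdk",
  "command": "prefixsuffix",
  "finish-args": ["--socket=x11", "--socket=wayland"],
  "modules": [
    {
      "name": "libsigc++",
      "sources": [
        {
          "type": "archive",
          "url": "https://…",
          "sha256": "…"
        }
      ]
    }
  ]
}
```

Each entry in "modules" is built and installed in order, so dependencies such as libsigc++, glibmm and gtkmm would each get their own entry, listed before the application itself.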

Creating the manifest.json file feels a lot like creating a build.gradle file for Android apps, where we would also list the base SDK version needed, along with each version of each dependency, and what permissions the app needs (though permissions are partly requested at runtime now in Android).

Here is the far larger xdg-app-builder manifest file for Glom, which I worked on after I had PrefixSuffix working. I had to provide many more build options for the dependencies and clean up many more installed files that I didn’t need for Glom. For instance, it builds both PostgreSQL and MySQL, as well as avahi, evince, libgda, gtksourceview, goocanvas, and various *mm C++ wrappers. I could have just installed everything, but that would have made the package much larger, and it doesn’t generally seem safe to install lots of unnecessary binaries and files that I wouldn’t be using. I do wish that JSON allowed comments so I could explain why I’ve used various options.

You can test the app out like so:

$ xdg-app build ../prefixsuffix-xdgapp prefixsuffix
... Use the app ...

Or you can start a shell in the xdg-app environment and then run the app, maybe via a debugger:

$ xdg-app build ../prefixsuffix-xdgapp bash
$ prefixsuffix
... Use the app ...
$ exit

Creating or updating an xdg-app repository

xdg-app can install files from online repositories. You can put your built app into a repository like so:

$ xdg-app build-export --gpg-sign="murrayc@murrayc.com" /repos/prefixsuffix ../prefixsuffix-xdgapp
$ xdg-app repo-update /repos/prefixsuffix

You can then copy that directory to a website, so it is available via http(s). You’ll want to make your GPG public key available too, so that xdg-app can check that the packages were really signed by you.

Installing with xdg-app

I uploaded the resulting xdg-app repository for PrefixSuffix to the website, so you should be able to install it like so:

$ wget https://murraycu.github.io/prefixsuffix/keys/prefixsuffix.gpg
$ xdg-app add-remote --user --gpg-import=prefixsuffix.gpg prefixsuffix https://murraycu.github.io/prefixsuffix/repo/
$ xdg-app install-app --user prefixsuffix io.github.murraycu.PrefixSuffix

I imagine that there will be a user interface for this in the future.

Then you can run it like so, though it will also be available via your desktop menus like a regular application.

$ xdg-app run io.github.murraycu.PrefixSuffix

Here are similar instructions for installing my xdg-app Glom package.

I won’t promise to keep these packages updated, but I probably will if there is demand, and I’ll try to keep up to date on developments with xdg-app.

December 15, 2015

libinput and the Lenovo x220 touchpad - after a firmware update to version 8.1

This post only applies to users of the Lenovo x220 laptop experiencing issues when using the touchpad. Specifically, the touchpad is imprecise and "jumpy" after a firmware update, as outlined in Fedora bug 1264453. The cause is buggy touchpad firmware, identifiable by the string "fw: 8.1" in the dmesg output for the touchpad:


[ +0.005261] psmouse serio1: synaptics: Touchpad model: 1, fw: 8.1,
id: 0x1e2b1, caps: 0xd002a3/0x940300/0x123800, board id: 1611, fw id: 1099905

If you are experiencing these touchpad issues and your dmesg shows the 8.1 firmware version, please read on for a solution. By default, the x220 shipped with version 8.0, so unless you updated the firmware as part of a Lenovo update, you are not affected by this bug.

The touchpad issues seem identical to the ones seen on the Lenovo x230 model, which has the same physical hardware and also ships with firmware version 8.1. The root cause, as seen by libinput, is that the touchpad only sends events once the finger moves approximately 50 device units in either direction. The touchpad advertises a resolution of 65 units/mm horizontally and 136 units/mm vertically, but the effective resolution is reduced by roughly 75% and 30%, respectively. Bugzilla attachment 1082925 shows the recording; you can easily see that while the pressure is updated with high granularity, the motion coordinates jump from one position to the next. From what we know, this was introduced by touchpad firmware v8.1, presumably as part of a filter to reduce the jitter some x230 users saw.

libinput automatically detects the x230 and enables a custom acceleration function for just that model. That same acceleration function works for the x220 v8.1, but unfortunately we cannot automatically detect it. As of libinput 1.1.3, libinput recognises a special udev tag, LIBINPUT_MODEL_LENOVO_X220_TOUCHPAD_FW81, to mark such an updated x220 and enable a better pointer behaviour. To apply this tag, please do the following:

  1. Create a new file /etc/udev/hwdb.d/90-libinput-x220-fw8.1.hwdb
  2. Look for X220 in the 90-libinput-model-quirks.hwdb file, copy the match and the property assignment into the file. As of the time of writing, the two lines are as below, but make sure you take the latest from your locally installed libinput version or the link above.

    libinput:name:SynPS/2 Synaptics TouchPad:dmi:*svnLENOVO:*:pvrThinkPadX220*
    LIBINPUT_MODEL_LENOVO_X220_TOUCHPAD_FW81=1
  3. Update the udev hwdb with sudo udevadm hwdb --update
  4. Verify the tag shows up with sudo udevadm test /sys/class/input/event4 (adjust the event node if necessary)
  5. Reboot

The touchpad is now marked as requiring special treatment and libinput will apply a different pointer acceleration for this touchpad.

Note that any udev property starting with LIBINPUT_MODEL_ is private API and subject to change at any time. We will never break the meaning of the LIBINPUT_MODEL_LENOVO_X220_TOUCHPAD_FW81 property, but the exact behaviour of the property is implementation-dependent and may change at any time. Do not use it for any other purpose than marking the touchpad on a Lenovo x220 with an updated touchpad firmware version v8.1.

Feeds