GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

April 30, 2016

2016-04-30 Saturday.

  • Breakfast and out to the hack-fest; had some fun helping people with tasks, code reading, etc. Spent some time unwinding why LibreOffice wouldn't compile in the tr_TR.UTF-8 locale, a rather fun bug. Interestingly toupper('i') != 'I' in a tr_TR locale (a small illustration below), cf. easy hack.
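    For anyone curious, here is a minimal, hypothetical C illustration of the surprise (it assumes the tr_TR.UTF-8 locale is generated on the system; in Turkish the uppercase of 'i' is 'İ', which is not a single byte in UTF-8, so toupper() leaves 'i' unchanged):

    #include <ctype.h>
    #include <locale.h>
    #include <stdio.h>

    int main (void)
    {
        /* Requires the tr_TR.UTF-8 locale to be available on the system. */
        setlocale (LC_ALL, "tr_TR.UTF-8");

        /* Prints "no" in a Turkish locale, "yes" in e.g. C or en_US. */
        printf ("toupper('i') == 'I'? %s\n",
                toupper ('i') == 'I' ? "yes" : "no");
        return 0;
    }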

Yet another GTK+ update

GTK+ 3.20 was released a while ago; we’re up to 3.20.3 now.  As I tried to explain in earlier posts here and here, this was a pretty active development cycle for GTK+. We landed a lot of new stuff, and many things have changed.

I’m using the neutral term changed here for a reason. How you view changes depends a lot on your perspective. We who implemented the changes are of course convinced that they are great improvements. Others who maintain GTK+ themes or applications may have a different take, since changes often imply that they have to do work to adapt.

What changed in GTK+

A big set of changes is related to the inner workings of GTK+ CSS.

The CSS box model is much better supported in widgets. This includes padding, margins, borders, shadows, and the min-width and min-height properties. Since many widgets are complex, they typically have many CSS boxes. Here is how the box tree of a GtkNotebook looks:

(screenshot: the CSS box tree of a GtkNotebook)

In the past (up to and including GTK+ 3.18), we used a mixture of widget class names (like GtkNotebook), style classes (like .button) and widget names (like #cancel_button) for matching styles to widgets. Now, we are using element names for each box (e.g. header, tabs and tab in the screenshot above). Style classes are still used for optional things and variants.

The themes that are included in GTK+ (Adwaita, Adwaita dark, HighContrast, HighContrastInverse and the win32 theme) have of course been updated to follow this new naming scheme. Third-party themes and application-specific CSS need to be updated for this too.

To help with this, we have expanded both the general documentation about CSS support in GTK+ here and here, and we have documented the element names, style classes and the node hierarchy for each widget. Here, for example, is the notebook documentation.

The documentation is also a good place to learn about style properties that have been deprecated in favor of equivalent CSS properties, like the secondary cursor color property. We warn about deprecated style properties that are used in themes or custom CSS, so it is easy to find and replace them:

(gtk3-demo:14116): Gtk-WARNING **: Theme parsing error: gtk-contained.css:18:37: The style property GtkWidget:secondary-cursor-color is deprecated and shouldn't be used anymore. It will be removed in a future version

There are also a number of new features in CSS. We now support the CSS syntax for radial gradients, we let you load and recolor symbolic icons, and image() and calc() are supported, as well as the rem (‘root em’) unit.

Beyond CSS, the drag-and-drop code has been rearchitected to move the drag cancel animation and most input handling into GDK, thereby dropping most of the platform-dependent code out of GTK+.  The main reason for doing this was to enable a complete DND implementation for Wayland. As a side-effect, we gained the ability to use non-toplevel widgets as drag icons, and we dropped the X11-specific feature of using RGBA cursors as drag icons.

The Wayland backend has grown most of the features that it was missing compared to X11: the already mentioned full DND support, kinetic scrolling, startup notification, primary selection, presenting windows, and a bell.

Changes in applications

Here is an unsorted list of issues that may show up in applications with GTK+ 3.20, with some advice on how to handle them.

One of the motivations for the changes is to enable animations and transitions. If you use gtk_style_context_save/restore in your draw() function, that prevents GTK+ from keeping the state that is needed to support animations; so you should avoid it when you can.

There is one place where you need to use gtk_style_context_save(), though: when using “theme colors”.  The function gtk_style_context_get_color() will warn when you pass a state other than the current state of the context. To avoid the warning, save the context and set the state:

gtk_style_context_save (context);
gtk_style_context_set_state (context, state);
gtk_style_context_get_color (context, state, &color);
gtk_style_context_restore (context);

And yes, it has been pointed out repeatedly that this change makes the state parameter of gtk_style_context_get_color() and similar functions largely useless – this API has been around since 3.0, when the CSS machinery was much less developed than it is now. Back then, passing in a different state was not a problem (because animations were not really supported).

Another word of caution about “theme colors”: CSS has no concept of foreground/background color pairs. The CSS background is just an image, which is why gtk_style_context_get_background_color() is deprecated and we cannot generally make it return a useful color. The proper way to have a theme-provided background in a widget is to call gtk_render_background() in your draw() function.
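As an illustration, a draw() handler for a hypothetical widget might look roughly like this (a sketch, not code from any particular application):

static gboolean
my_widget_draw (GtkWidget *widget, cairo_t *cr)
{
  GtkStyleContext *context = gtk_widget_get_style_context (widget);
  int width = gtk_widget_get_allocated_width (widget);
  int height = gtk_widget_get_allocated_height (widget);

  /* Let the theme paint the background and frame first ... */
  gtk_render_background (context, cr, 0, 0, width, height);
  gtk_render_frame (context, cr, 0, 0, width, height);

  /* ... then do any custom drawing on top. */

  return FALSE;
}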

If you are using type names of GTK+ widgets in your CSS, look up the element names in the documentation and use them instead. For your own widgets, use gtk_widget_class_set_css_name() to give them an element name, and use it in the CSS.
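For example, a custom widget might do something like this in its class_init function (MyWidget is a made-up name here):

static void
my_widget_class_init (MyWidgetClass *klass)
{
  GtkWidgetClass *widget_class = GTK_WIDGET_CLASS (klass);

  /* The widget can now be matched in CSS as "my-widget { ... }"
   * instead of by its type name. */
  gtk_widget_class_set_css_name (widget_class, "my-widget");
}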

A problem that we’ve seen in some applications is the interaction between size_allocate() and draw(). GTK+’s CSS boxes need to know their size before they can draw. If you derive from a GTK+ widget and override size_allocate without chaining up, then GTK+ does not get a chance to assign sizes to the boxes. This will lead to critical warnings from GTK+’s draw() function if you don’t override it. The possible solutions to this problem are either to chain up in size_allocate or to provide your own draw implementation.
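A minimal sketch of the chain-up variant (again with a made-up MyWidget type):

static void
my_widget_size_allocate (GtkWidget *widget, GtkAllocation *allocation)
{
  /* Chaining up lets GTK+ assign sizes to the widget’s CSS boxes. */
  GTK_WIDGET_CLASS (my_widget_parent_class)->size_allocate (widget, allocation);

  /* ... position child widgets, move input windows, etc. ... */
}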

If you are using GTK+ just for themed drawing without using GTK+ widgets, you probably need to make some changes in the way you are getting theme information. We have added a foreign drawing example to gtk3-demo that shows how this can be done. The example was written with the help of LibreOffice and Firefox developers, and we intend to keep it up-to-date to ensure that this use case is not neglected.

A plea

If you are maintaining a GTK+ application (in particular, a big one like, say, inkscape), and you are looking at porting from GTK+ 2 to GTK+ 3, or updating it to keep up with the changes in 3.20, please let us know about the issues you find. Such feedback will be useful input for us when we get together for a GTK+ hackfest in a few weeks.

What’s coming

One of the big incoming changes for 3.22 is a GL-based renderer and scene graph. Emmanuele has been working on this on-and-off for quite a while – you may have seen some of his earlier presentations. Together with the recent merge of (copies of) clutter and cogl into mutter, this will put clutter on the path towards retirement.

 

April 29, 2016

2016-04-29 Friday.

  • Fine breakfast, a day of training together with the great Turkish student team - tiring but worthwhile - still trying to put a dozen new names to new faces.
  • Built a new talk capturing at least my thinking on how to tackle non-trivial problems; sucking eggs for beginners:
    Solving arbitrary problems

    And also updated some older slides to the recent code-base.
    LibreOffice code structure overview

    LibreOffice core classes

  • Back to relax in the evening with some students.

Using bubblewrap in xdg-app

At the core of xdg-app is a small helper binary that uses Linux features like namespaces to set up a sandbox for the application. The main difference between this helper and a full-blown container system is that it runs entirely as the user. It does not require root privileges, and can never allow you to get access to things you would not otherwise have.

This is obviously very useful for desktop application sandboxing, but it has all sorts of other uses. For instance, you can sandbox your builds to avoid them being polluted from the host, or you can run a nonprivileged service with even less privileges.

The current helper was a bit too tied to xdg-app, so as part of Project Atomic we decided to create a separate project based on this code, but more generic and minimal. Thus Bubblewrap was born.

Bubblewrap is a wrapper tool, similar to sudo or chroot. You pass it an executable and its arguments on the command line. However, the executable is run in a custom namespace, which starts out completely empty, with just a tmpfs mounted as the root, and nothing else. You can then use command-line arguments to build up the sandbox.

For example, a very simple use of bubblewrap to run a binary with access to everything, but readonly:

$ bwrap --ro-bind / / touch ~/foo
touch: cannot touch ‘/home/alex/foo’: Read-only file system

Or you can use bubblewrap as a regular chroot that doesn’t require you to be root to use it:

$ bwrap --bind /some/chroot / /bin/sh

Here is a more complicated example with a read-only host /usr (and no other host files), a private pid namespace, and no network access:

$ bwrap --ro-bind /usr /usr \
      --dir /tmp \
      --proc /proc \
      --dev /dev \
      --ro-bind /etc/resolv.conf /etc/resolv.conf \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin \
      --chdir / \
      --unshare-pid \
      --unshare-net \
      /bin/sh

See the bubblewrap manpage for more details.

Today I changed xdg-app to use bubblewrap instead of its own helper, which means if you start an app with xdg-app it is using bubblewrap underneath.

For now it is using its own copy of the code (using git submodules), but as bubblewrap starts to get deployed more widely we can start using the system installed version of it.

Build System Fallbacks

It’s no secret that I focus my build system efforts in Builder around Autotools. I’m happy to include support for other build systems, so long as I’m not the person writing it.

Sometimes the right piece of code falls together so easily you wonder how the hell you didn’t think of it before. Today was such a day.

If you are using Builder from git (such as via jhbuild) or from the gnome-builder-3-20 branch (what will become 3.20.4) you can use Builder with the fallback build system. This is essentially our “NULL” build system and has been around forever. But today, these branches learned something so stupidly obvious I’m ashamed I didn’t do it 6 months ago when implementing Build Configurations.

If you go into the Build Configuration panel (the middle button in the sidebar) you can specify environment flags. Builder will now use CFLAGS, CXXFLAGS, and VALAFLAGS to prime those compilers in the absence of Autotools (with Autotools, we discover these dynamically).

(screenshot: build flags in the Build Configuration panel)

And now semantic language features should work.

(example screenshots)

April 28, 2016

A little update on transit routing in Maps

So, I thought it was high time for a little update about the transit routing project in gnome-maps (I thought I should make a post while we're still in April).
I talked a bit with Mattias Bengtsson before, and since he had been contemplating using OpenTripPlanner (OTP) for his GSoC project a couple of years ago and found it didn't scale too well for general turn-based routing, he was quite excited about my idea of combining GraphHopper and OTP, using OTP with just transit data (loaded from GTFS feeds).

The basic idea here (when in transit mode) is to first run a query against OTP and as a further step do a ”refinement“ by re-running the parts of the routes where OTP selects walking between two transit locations.

An additional step has proven necessary, since OTP, when running without OpenStreetMap “street data”, will approximate the walking “legs” of the trip as a straight line, which can be too optimistic in some instances. Therefore we do an extra safety calculation to see if a particular itinerary seems reasonable with respect to the time needed for walking (this is needed when there's walking in the middle of an itinerary and an upcoming transit section that needs to be “caught”).

Anyway, some screenshots:


Here we can see an itinerary with some walking at the start. It also recalculates the walking parts at the start and end using the actual start and end points selected in the routing pane, instead of relying on the ones returned by OTP, which are based on the nearest transit stops.
The reason the transit part of the trip (the solid line) is “jagged” is that the data used here doesn't include shapes for the transit lines, so in this case OTP interpolates using the intermediate stops passed along the way.

A bit more beautiful routes can be obtained using data from the bus company serving the island of Honolulu:


Currently it is hard-wired to always show the first returned trip. Also, as you can see, it still doesn't render any instruction lists, and the combo box to select departure/arrival time is not hooked up to anything yet.

As before the code can be found in the wip/mlundblad/transit-routing branch.

Also, there's some bug that sometimes gives a segmentation fault in Clutter (possibly there's some race condition in the code I rewrote to allow dynamically creating routing layers, to allow showing the dashed and solid lines interchangeably). Unfortunately, debugging these things from JS is not all that fun…

Cockpit 0.104

Cockpit is the modern Linux admin interface. There’s a new release every week. Here are the highlights from this week’s 0.104 release.

Kubernetes iSCSI Volumes

Peter added support for iSCSI Kubernetes Volumes in the Cockpit Cluster dashboard. When you have container pods that need to store data somewhere, it’s now really easy to configure and use an iSCSI initiator. Take a look:

Listing View Expansion

Andreas, Dominik, and I worked on a better listing view pattern. In Cockpit we like to give admins the option to expand data inline, and compare it between multiple entries on the same page. But after feedback from the Patternfly folks we added an explicit expander to do this.

Tagging Docker Images in the Registry

The Atomic Registry and OpenShift Registry support mirroring images from another image registry, such as the Docker Hub. When the images are mirrored, they are copied and available in your own registry. Cockpit now has support for telling the registry which specific tags you’d like to mirror. And Aaron is adding support for various mirroring options as well.

From the Future

Marius has a working proof of concept that lets you configure where Docker stores container and image data on its host. Take a look at the demo below. Marius adds disks to the container storage pool:

Try it out

Cockpit 0.104 is available now:

Fixing botched migrations with `oc debug`

When using Openshift Origin to deploy software, you often have your containers execute a database migration as part of their deployment, e.g. in your Dockerfile:

CMD ./manage.py migrate --noinput && \
    gunicorn -w 4 -b 0.0.0.0:8000 myapp.wsgi:application

This works great until your migration won’t apply cleanly without intervention, your newly deploying pods are in crashloop backoff, and you need to understand why. This is where the `oc debug` command comes in. Using `oc debug` we can ask for a shell in a running pod or in a newly created one.

Assuming we have a deployment config `frontend`:

oc debug dc/frontend

will give us a shell in a running pod for the latest stable deployment (i.e. your currently running instances, not the ones that are crashing).

However let’s say deployment #44 is the one crashing. We can debug a pod from the replication controller for deployment #44.

oc debug rc/frontend-44

will give us a shell in a new pod for that deployment, with our new code, and allows us to manually massage our broken migration in (e.g. by faking the data migration that was retroactively added for production).

April 27, 2016

[Help Needed] FOSS License, CLA Query

I want to start a FOSS project. FOSS Licenses are a grey area. I am trying to seek some public opinion here, to choose a license and a Contributor License Agreement (CLA). The project details are:

  • The project is a database (say, like mongodb, Cassandra etc.). It will have a server piece that users can deploy for storing data. Though it is a hobby personal project as of now, I may offer the database as a paid, hosted solution in future.
  • There are some client libraries too, for providing the ability to connect to the above mentioned server, from a variety of programming languages.
  • The client libraries will all be in Creative Commons Zero License / Public Domain. Basically anyone can do anything with the client library sources. The server license is where I have difficulty choosing.
  • Anyone who contributes any source to the server software should re-assign their copyrights and ownership of the code, to me. By "me", I refer to myself as an individual and not any company.  I should reserve the right to transfer the ownership in future to anyone / any company. I may relicense the software in future to public domain or sell it off to a company like: SUSE, Red Hat, Canonical, (or) a company like: Amazon, Google, Microsoft etc.
  • Anyone who contributes code to my project should make sure that [s]he has all the necessary copyrights to submit the changes to me and to re-assign the copyrights to me. I should not be liable for someone's contribution. If a contributor's employer has a sudden evil plan and wants to take my personal project to court (unlikely to happen, nevertheless), that should not be possible.
  • Neither I nor the users of the software should be exposed to patent infringement suits over code that is contributed by someone else. If a patent holder wants to sue me for code that I have written in the software, that is fine. I will find a way around it.
  • Anyone should be free to take the server sources, modify it and deploy it in his/her own hardware/cloud, for their personal and/or commercial needs, without paying me or any of the contributors any money/royalty/acknowledgement.
  • If they choose to either sell the server software or host it and sell it as a service (basically, for commercial reasons), they must be required to open source their changes in the public domain, unless they have written permission from me, at my discretion. For instance, if Coursera wants to use my database source, after modifications, it is fine with me; but I will not want, say, Oracle to modify my software and sell the modified software/service without open sourcing their changes. If someone is hosting and selling a service of my software, with modified sources, there is no easy way for me to prove their modification, but I would still like to have that legal protection.

The best license model that I could come up with for the above is: dual-license the source code under AGPLv3 and a proprietary license. Enforce a CLA to accept all contributions only after a copyright reassignment to me, with a guarantee that I have the right to change the license at a future time.

What is not clear to me however, is the patent infringement and ownership violation related constraints and AGPL's protection on such disputes. Another option is: Mozilla Public License 2.0 but that does not seem to cover the hosting-as-a-service-and-selling-the-service aspect clearly imho.

Do you, readers of the internet, have any better solution?

Are you aware of any other project using a license/CLA model that may suit my needs and/or is similar?

What other things should I be reading to understand more?

Or, should I lose all faith in licenses, keep the sources private and release the binary as freeware, instead of open sourcing? That would suck.

Or should I just not bother about someone making proprietary modifications and selling the software/service, and release the software to the public domain?

Note: Of course, all this is assuming that my one-hour-a-month hobby project would make it big, be useful to others and someone may sue. In reality, the software may not be tried by even a dozen people, but I'm just romanticizing.

3rd Party Fedora Repositories and AppStream

I was recently asked how to make 3rd party repositories add apps to GNOME Software. This is relevant if you run an internal private repo for employee tools, or are just kind enough to provide a 3rd party repo for Fedora or RHEL users for your free or non-free applications.

In most cases people are already running something like this to generate the repomd metadata files on a directory of RPM files:

createrepo_c --no-database --simple-md-filenames SRPMS/
createrepo_c --no-database --simple-md-filenames x86_64/

So, we need to actually generate the AppStream XML. This works by exploding any interesting .rpm files, merging together the .desktop file and the .appdata.xml file, and preprocessing some icons. Only applications installing AppData files will be shown in GNOME Software, so you might need to fix that before you start.

appstream-builder			\
	--origin=yourcompanyname	\
	--basename=appstream		\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=1			\
	--min-icon-size=32		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons

This takes a second or two (or 40 minutes if you’re trying to process the entire Fedora archive…) and spits out some files to /tmp/asb-md — you probably want to change some things there to make more sense for your build server.

We then have to take the generated XML and the tarball of icons and add it to the repomd.xml master document so that GNOME Software (via PackageKit) automatically downloads the content for searching. This is as simple as doing:

modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream.xml.gz	\
	x86_64/repodata/
modifyrepo_c				\
	--no-compress			\
	--simple-md-filenames		\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/

Any questions, please ask. If you’re using a COPR then all these steps are done for you automatically. If you’re using xdg-app already, then this is all magically done for you as well, and automatically downloaded by GNOME Software.

April 26, 2016

Congratulations, interns!

The Outreachy page has announced the interns for the May-August 2016 internship. There are five interns who will work with GNOME. I look forward to working with Ciarrai, Renata, and Diana on usability testing for GNOME. Congratulations on being accepted to the internship!

This is the first time I'll mentor three interns for usability testing. So for this cycle, I am looking to do things a little differently.

In previous cycles, the interns have pretty much worked in isolation, first learning about usability testing, then doing a usability test on GNOME. We passed those usability test results back to the GNOME Design team.

But in this cycle, I would like Ciarrai, Renata, and Diana to work more closely with others in GNOME. I'll do this in several ways:
First, I'll "launch" each week's topic by writing about it on my blog. I'll also email Ciarrai, Renata, and Diana separately. My intention is to use the comments section on my blog posts as a "discussion forum" for Ciarrai, Renata, and Diana. We might use emails for specific questions, but I think using a "forum" will allow the three interns to learn from each other as they do their own research on each week's topic.

Since my blog also appears in Planet GNOME, I encourage GNOME developers and designers to participate in the discussion. As we examine different topics of usability in open source software, please feel free to join the discussion via the comments.

Second, I want Ciarrai, Renata, and Diana to work more closely with the GNOME Design team. This will help the interns to learn about GNOME usability, including previous work in this area. The Design team can also share their insights into why certain GNOME design decisions were made. I would look for the most involvement from the Design team in the first half of the internship, since that is when Ciarrai, Renata, and Diana will learn about Personas and Scenarios. I think this will happen through emails and comments on weekly blog posts.

In the second half of the internship, I think the Design team might provide input to what design patterns they would like to have examined in a usability test, and comment on the Scenario Tasks for the usability test. As Ciarrai, Renata, and Diana focus on the Scenario Tasks and building their usability test, they will partner with the Design team to define the Scenario Tasks and comment on drafts of the usability test.

Finally, I'd like to have Ciarrai, Renata, and Diana collaborate to create the Scenario Tasks for their usability tests, so that all three are doing the same usability test with the same methodology. This will allow for interesting comparisons between tests.
Throughout the project, I expect Ciarrai, Renata, and Diana will do a lot of cross-commenting on each others' blog posts every week, as well as doing their own research.

I'm looking forward to working with Ciarrai, Renata, and Diana on usability testing in the May-August cycle!
image: Outreachy

Google Summer of Code 2016

Hello everyone! I am participating in the Google Summer of Code program for the second time with GNOME, this year working on Epiphany. I will be mentored by Michael Catanzaro, who was also my mentor for my last summer’s project. I am one of the two students working on this product, the other person being a friend of mine. We are both excited to leave our mark with some serious contributions.

The goal of my proposal is updating the bookmarks subsystem by refactoring the current code and implementing a new bookmarks menu design proposed by the designers.

Browsers evolve extremely quickly, but if there’s one browsing feature that’s stood the test of time, it’s browser bookmarks. The current dialog has lots of problems, giving the browser an unfinished look and the impression that everything is about to crash at any moment. The code is old and spread amongst too many files, making it hard to manage and refactor.

Upon completion, the current problems should be fixed not by patching different parts of the code for specific bugs, but by having code work properly regardless of the processed data or user requests. The new design is also more minimalistic, making bookmarks easier to use and manage.

Moreover, there are increased benefits to a bookmarks refactoring considering there is the other project that plans to touch on bookmarks by allowing Epiphany to use a storage server to sync bookmarks (and more) between multiple instances of Epiphany on different computers. Having bookmarks work as intended is of great help to Web's users, gives the browser a more solid look and is a step forward towards a complete, modern browser.

Similarly to last year, I plan to start working in the community bonding period (any day now) to account for my exam session, which takes place around the start of the coding period.

That being said, expect another blog post soon where I will talk about the first steps I’m taking towards completing this project successfully.

Iulian


April 24, 2016

Packaging a Fedora kernel on a GNOME tablet

Fedora 23

As Pa Bailey might have put it, it’s deep in the race to want to run GNOME on your tablet. At last year’s Libre Graphics Meeting in Toronto, pippin displayed GNOME on his Lenovo MIIX 3 Bay Trail device. A few months later I was able to pick up an ASUS T100TA with similar specs (if not as elegant) for a good discount. Adam Williamson’s latest Fedlet release was days old at the time, and I installed it.

I’ve been through the Fedlet installation three times, the second and third installs necessitated by some newfound inherent skill at rendering the device unusable. The installation is nerve-wracking on account of some missing or invisible buttons in Anaconda, so I’d rather upgrade than re-install. The processor supports 64-bit Linux with 32-bit UEFI, but at this point I’m happy with the 32-bit kernel. (The experience of local LUG members suggests the Debian multiarch installer will address the 32-bit UEFI while installing a 64-bit system, but a brand new Ubuntu installer will not.)

There’s a Google+ Community dedicated to running Ubuntu on the different varieties of T100. After some weeks of reading about further advances specific to the hardware, I wasn’t satisfied with the Fedlet 4.2 kernel and decided to build a newer one. For ease of installation and removal, Fedora kernel packages are worth the small effort required beyond simply compiling a kernel. The procedure on the wiki page is pretty comprehensive; below are the changes I needed to get mine working. First some notes:

  • ‘dnf upgrade’ on the newly-installed Fedlet led directly to the first from-scratch re-install. Since then I’ve installed specific required packages with dnf using ‘--best --allowerasing’. I’m still running Fedora 23 to prevent the loss of any of the customized packages (other than the kernel) from the Fedlet repo, but hopefully it will become possible (if it’s not already) to do without them. I would experiment, but at this point it’s easier to just avoid the risk.
  • the last re-install was required after I built a kernel that was capable of eating the contents of the eMMC hard drive.
  • sometime between 4.2 and 4.6, a dracut upgrade was required to enable the kernel installation to complete.
  • the 60 GB hard drive needs a hefty percentage of free space to perform the build: I usually delete the contents of BUILD and BUILDROOT from the previous build, and keep my music on the removable SD card.

Asus T100 Ubuntu: progress reports, patches and config file, sound configuration (sound works with kernels older than 4.6-rc* but so far 4.6 breaks it). The About this community panel has a Google Drive link called Asus Files with all the good stuff.

These are the sections I follow from Building a custom kernel. Please consult the original page for the details.

Dependencies for building kernels
Yes.

Skip to…
Building a kernel from the source RPM: Get the Source
I grab the kernel source package from Koji that corresponds to the patches in the Google Drive folder.

Prepare the Kernel Source Tree

rpmbuild -bp --target=$(uname -m) kernel.spec

Use ‘--target=i686’ if building on my x86_64 laptop (hypothetical at this point since I haven’t successfully done this).

Copy the Source Tree and Generate a Patch
Skip this section: at this point I apply the patches (from the Google+ page) directly to the linux-* folder. I’ve performed the operation of copying the tree to create a single patch, but haven’t been successful automating it with ApplyPatch in the .spec file.

Configure kernel options
Copy in the config file acquired from the Google+ page.
Step 5 is i686.

Prepare Build Files
Customize the buildid in the .spec file (.mdh in my case).

Build the New Kernel
Use ‘--noprep’ to prevent refreshing the source tree and re-applying the Fedora patches.

rpmbuild -bb --noprep --with baseonly --without debuginfo --target=`uname -m` kernel.spec

Use ‘--target=i686’ if (someday) building on my x86_64 laptop.
On occasion at this point I need to rename the ~/rpmbuild/BUILD/kernel-*/linux-* folder to include the buildid  and fc23 instead of the version from the source rpm (currently fc25).

I usually do the last step with the power connected, but occasionally carry it to the car and back for the trip to work, so as not to interrupt the build. If it finishes, a full set of packages should appear in ~/rpmbuild/RPMS/i686.

Corrections and recommendations welcome.

April 23, 2016

GSoC 2016 at coala

Hi again with another GSoC post – this time about coala. (coala-analyzer.org)

This post is for people generally interested in coala.

coala participates in this GSoC under the PSF umbrella. This year we got a stunning number of 8 GSoC projects just working with us.

So let me give you a short overview of what will happen this summer:

  • Our collection of analysis routines will be decentralized further so you can finally analyze your Java code without having to pull in all the other analysis routines!
  • coala will provide an interface that makes it easy to write routines in programming languages other than Python.
  • Multiple people will work on some generic algorithms for documentation correction, spacing correction et cetera that will work generically with just a few lines of language definition.
  • A number of improvements to our linter wrapper simplification and coala in general.
  • A polished coala HTML output as well as a new website.
  • Some work on our editor plugins.

I’m thrilled about working with every single one of those students and our mentors do feel the same. Those students are the best. We had to reject many other students who, from my previous experience as a GSoC administrator, would have deserved a slot, simply because of our limited mentoring capacity. :/ If you’re one of the rejected students, you should know that we’d be more than happy to continue working and learning with you!

We plan to get most community members, especially including mentors and students, together at EuroPython (with some stunning financial help from the conference itself!), so if you’re around there you can meet us all!

If you want to be kept up to date about coala, check out our planet: http://planet.coala-analyzer.org/

or subscribe to our rolling development release notes:
https://twitter.com/coala_analyzer

GSoC 2016 is Starting at GNOME

Dear GSoC Students, dear GNOME community – and especially dear rejected students,

Google Summer of Code 2016 is starting. GNOME has accepted 21 students – we are thrilled to work with you people!

For the accepted students, it’s time to get bonded with your mentors, get started on your topics and be welcomed by our community.

For us, the community, it’s time to welcome you warmly. We love you being with us and we’re happy to learn with you together!

And then there are the students who got a rejection email for this year’s Google Summer of Code. We all know that only a limited number of students can be funded. It’s not easy to get a GSoC slot. Not getting one doesn’t mean that we don’t want to work with you! If you were rejected and are still interested in working with us, we are too! I am recommending the following steps for you:

  • Ask your mentor politely why you weren’t selected – getting better works only iteratively! You’ve put work into your application and we’ve put work into evaluating it – let’s not waste either of these!
  • Stay curious and open!
  • Apply again!

I wish everybody a fun summer – and a warm welcome to all who want to join us, GSoC student or not.

Greetings,

Lasse

Doing things that scale

In the software world, and with the internet, we can do a lot of things that scale.

Answering a user question on IRC doesn’t scale; only one person and a few lurkers will benefit from it. Answering a user question on a mailing list scales a little better, since the answer is archived and can be searched for. What really scales is instead improving the reference manual. Yes, you know, documentation. That way, normally the question will not surface again. You have written the documentation once and N readers benefit from it, regardless of N. It scales!

Another example. Writing an application versus writing a library. Writing an application or end-user software already scales in the sense that the programmer writes the code once, but it can be copied and installed N times for almost the same cost as one time. Writing a library scales more, since it can benefit N programs, and thus N^2 users if we use the asymptotic notation. It’s no longer linear, it’s quadratic! The lower-level the library, the more it scales.

Providing your work under a free/libre license scales more, since it can be re-distributed freely on the internet. There are some start-ups in the world that reinvent the wheel by writing proprietary software and provide technical support in their region only. By writing free software instead, other start-ups can pop up in other regions of the world, and will start contributing to your codebase (if it’s a copyleft license, that is). For the advancement of humankind, free software is better since it makes it easier to avoid reinventing the wheel. Provided that it’s high-quality software.

This can go on and on. You get the picture. But doing something that scales is generally more difficult. It takes more time in the short-term, but in the long-term everybody wins.

API vs ABI

I repeatedly see other people making this mistake, so a little reminder doesn’t hurt.

  • API: Application Programming Interface
  • ABI: Application Binary Interface

The difference can be easily explained by knowing what to do for some code when the API or ABI breaks in a library that the code depends on:

  • If only the ABI breaks, then you just need to re-compile the code.
  • If the API breaks, you need to adapt the code.
  • When the code doesn’t compile anymore, you need to adapt it, so it’s both an API and ABI break.
  • When the code still compiles fine but you need to adapt it, it’s only an API break.

Example of an ABI break only: when the size of a public struct changes.

Example of an ABI and API break: when a function is renamed.

Example of an API break only: CSS in GTK+ 3.20, or when a function doesn’t do the same as what was documented before.
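To make the struct example concrete, here is a hypothetical public struct from a library header; adding the new field changes the struct’s size, so existing binaries must be recompiled (an ABI break), while source code using the struct keeps compiling unchanged (no API break):

/* Hypothetical struct installed in a library’s public header. */
struct my_point
{
  double x;
  double y;
  double z;   /* added in version 2: sizeof (struct my_point) changed */
};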

That’s it.

LibrePlanet 2016 - Freedom Sympatico

Hello!! I emerge from my blogging slumber as part of some post-hospital-merger / post-Affordable-Care-Act-administration free software activities. You see, I went to LibrePlanet this past weekend, and I was as happy as this guy to be there . . .

A libre snow person

As you may have heard, our keynote speaker had a difficult time getting back to the US from Singapore, but managed to participate via teleconference.

The various LibrePlanet session videos are making their way out to the LibrePlanet MediaGoblin instance, and I encourage you to check them out as they become available. It was a well-run, informationally-dense conference.

This year's event seemed to have a bit of an advocacy and social justice bent. I'm thinking in particular of LittleSis and of the Library Freedom Project. I'm also thinking of Luis Villa's talk, where he drove home how we can't just give people software licensed under the GPL and think that they automatically have their computing freedom - that we need to make that software compelling for users to actually want to use. I found this to be sympatico with some of the recent work and discussions around UserOps.

On the technical project side of things, Ring seems interesting, though in early stages of development. GNU Guix seems to at least be beta quality. GNOME is now available as a desktop in GNU Guix, so I may be checking it out soon.

DRM and the W3C

Something that wasn't part of the conference, but which immediately followed it, was a rally against inclusion of the Encrypted Media Extensions specification. The spec is currently under consideration by the W3C. If you want more info about the protest, Chris Webber gave a thorough recap of it, along with details of the post-protest roundtable discussion at the MIT Media Lab.

The interesting thing about the EME spec is that it doesn't describe DRM - it just seems to describe an intricately shaped hole in which the only thing that will fit is DRM.

Free Time At The MIT Museum

Post conference, I intended to fly back home on Monday morning, but my scheduled flight was canceled. This gave me time to explore more of the city. I made it over to the MIT Museum in Cambridge, which is a great place to learn about robots . . .

Kismet robot, MIT Museum

. . . such as these robotic ants.

You can even see what happens to your interactive art installation when you build it with Microsoft Windows, but then forget to activate Microsoft Windows . . .

Interactive art at the MIT Museum, being run on un-activated Microsoft Windows

Think of how much more beautiful your art could be without a Windows Activation watermark in the lower-right corner!

All in all, it was a great weekend. Kudos and thanks to the FSF, to the LibrePlanet volunteers, and to the event sponsors for creating such a great LibrePlanet event.

Reflecting on Feedback

Reflection of David King in Kat's laptop screen

While at last month's Cambridge Hackfest, members of the GNOME Documentation Project team talked with Cosimo Cecchi of Endless Mobile about the user help in their product. As it turns out, they are shipping a modified version of Yelp, the GNOME help browser, along with modified versions of our own Mallard-based user help.

Knowing that they were actually shipping our help, we wanted to get a closer look at what they were doing, and wanted to get some feedback as to how things were working for them. Cosimo was glad to oblige us.

  • In terms of modifying the appearance of our help, their technical writers found it difficult to modify the CSS that we use. Cosimo noted that the CSS for our help is not stored in a single file, nor even a single directory - it's partly embedded in Yelp's XSLT.

    While not always ideal, there are reasons for this. For example, if a person's visual impairment requires that they use GNOME's High Contrast GTK theme, Yelp will pick up the theme change and will use a corresponding color scheme when it renders the help for that user. Similarly, if a user sets their default system language to a Right-to-Left-based language (such as Arabic), Yelp will also pick up that change, and will display the help in the appropriate Right-to-Left manner automatically.

    These are both useful features, but it is good to get feedback on this. Creating a custom documentation "look" is important for a downstream distributor, so there's room for us to improve here.

  • The technical writing team at Endless Mobile customized the HTML output to feature broadly-grouped sections as html-buttons along the left-hand side of the help. I wish I had gotten a screenshot of this, because we were impressed with how they grouped and formatted the help. This may be an approach that we look to use in the future.

I talked with Cosimo about incorporating some of their updates into our help, and he was very receptive to it. While we'll mostly be focusing on content-related updates for our 3.16 release, we'll consider how we can improve our help based on their feedback in the future.

April 22, 2016

Another GTK+ ABI Break

It is a familiar situation: a distribution updates Gtk+ to a supposedly-compatible version and applications, here Gnumeric, break.

This time I am guessing that it is incompatible changes to widget theming that render Gnumeric impossible to use.

I would estimate that this has happened 15-20 times within the GTK+ 3.x series. One or two of those are probably Gnumeric accidentally relying on a GTK+ bug that got fixed, but the vast majority of cases is simply that perfectly fine, existing code stops working.

Imagine the C library changing the behaviour of a handful of functions every release. I suspect GTK+ maintainers would be somewhat upset over that. Nevertheless, that is what is presented to GTK+ application writers.

The question of whether GTK+ applications can be written remains open with a somewhat negative outlook.

Circumventing Ubuntu Snap confinement

Ubuntu 16.04 was released today, with one of the highlights being the new Snap package format. Snaps are intended to make it easier to distribute applications for Ubuntu - they include their dependencies rather than relying on the archive, they can be updated on a schedule that's separate from the distribution itself and they're confined by a strong security policy that makes it impossible for an app to steal your data.

At least, that's what Canonical assert. It's true in a sense - if you're using Snap packages on Mir (ie, Ubuntu mobile) then there's a genuine improvement in security. But if you're using X11 (ie, Ubuntu desktop) it's horribly, awfully misleading. Any Snap package you install is completely capable of copying all your private data to wherever it wants with very little difficulty.

The problem here is the X11 windowing system. X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window. An application that has no access to any of your private data can wait until your session is idle, open an unconfined terminal and then use curl to send your data to a remote site. As long as Ubuntu desktop still uses X11, the Snap format provides you with very little meaningful security. Mir and Wayland both fix this, which is why Wayland is a prerequisite for the sandboxed xdg-app design.
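To illustrate how little is needed, here is a hedged sketch of the injection half (not the XEvilTeddy code, just an assumed minimal example using Xlib and the XTest extension, linked with -lX11 -lXtst); the synthetic keystroke goes to whatever window currently has keyboard focus:

#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

int main (void)
{
  Display *dpy = XOpenDisplay (NULL);
  if (!dpy)
    return 1;

  /* Synthesize a press and release of the 'a' key; the focused
   * application cannot tell it apart from real typing. */
  KeyCode key = XKeysymToKeycode (dpy, XK_a);
  XTestFakeKeyEvent (dpy, key, True, 0);
  XTestFakeKeyEvent (dpy, key, False, 0);

  XFlush (dpy);
  XCloseDisplay (dpy);
  return 0;
}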

I've produced a quick proof of concept of this. Grab XEvilTeddy from git, install Snapcraft (it's in 16.04), snapcraft snap, sudo snap install xevilteddy*.snap, /snap/bin/xevilteddy.xteddy . An adorable teddy bear! How cute. Now open Firefox and start typing, then check back in your terminal window. Oh no! All my secrets. Open another terminal window and give it focus. Oh no! An injected command that could instead have been a curl session that uploaded your private SSH keys to somewhere that's not going to respect your privacy.

The Snap format provides a lot of underlying technology that is a great step towards being able to protect systems against untrustworthy third-party applications, and once Ubuntu shifts to using Mir by default it'll be much better than the status quo. But right now the protections it provides are easily circumvented, and it's disingenuous to claim that it currently gives desktop users any real security.


April 21, 2016

Peer review, FOSS, and packaging/containers etc

Lately whenever I give a presentation, I often at least briefly mention one of my primary motivations for doing what I do: I really like working in a global community of people on Free Software.

A concrete artifact of that work is the code landing in git repositories.  But I believe it’s not just about landing code – peer review is a fundamental ingredient.

Many projects of course start out as just one person scratching an itch or having fun.  And it’s completely fine for many to stay that way.  But once a project reaches a certain level of maturity and widespread usage, I think it’s generally best for the original author to “step down” and become a peer.  That’s what I’ve now done for the OSTree project.

In other words, landing code in git master for a mature project should require at least one other person to look at it. This may sound obvious, but you’d be surprised…there are some very critical projects that don’t have much in the way of peer review.

To call out probably the most egregious example, the bash shell.  I’m deliberately linking to their “git log” because it violates all modern standards for git commit messages.  Now,  I don’t want to overly fault Chet for the years and years he’s put into maintaining the Bash project on his own time.  His contribution to Free Software is great and deserves recognition and applause.  But I believe that getting code into bash should involve more than just him replying to a mail message and running git push.  Bash isn’t the only example of this in what I would call the “Linux distribution core”.

Another major area where there are gaps is the “language ecosystems” like Node.js, Rust’s cargo, Python’s pip, etc. Many projects there are “one person scratching an itch” that other people mostly just consume.

There’s no magical solution to this – but in e.g. the language ecosystem case, if you happen to maintain a library which depends on another one, maybe consider spending a bit of your time looking at open pull requests and jumping in with review?

A vast topic related to this is “who is qualified to review” and “how intensively do I review”, but I think some qualified people are too timid about this – basically it’s much better to have a lightweight but shallow process than none at all.

Now finally, I included “packaging” in the title of this blog, so how does that relate?  It’s pretty simple, I also claim that most people doing what is today known as “packaging” should sign up to participate in upstream peer review.  Things like build fixes should go upstream rather than being kept downstream.  And if upstream doesn’t have peer review, reconsider packaging it – or help ensure peer review happens upstream!

 

 


You and NetworkManager 1.2 Can Still Ride Together

You don’t need an Uber, you don’t need a cab (via Casey Bisson CC BY-NC-SA 2.0)

NetworkManager 1.2 was released yesterday, and it’s already built for Fedora (24 and rawhide), a release candidate is in Ubuntu 16.04, and it should appear in other distros soon too.  Lubo wrote a great post on many of the new features, but there are too many to highlight in one post for our ADD social media 140-character tap-tap generation to handle.  Ready for more?

indicator menus

Wayland is coming, and it doesn’t support the XEmbed status icons that nm-applet creates.  Desktop environments also want more control over how these status menus appear.  While KDE and GNOME both provide their own network status menus, Ubuntu, XFCE, and LXDE use nm-applet.  How do they deal with the lack of XEmbed and status icons?

Ubuntu has long patched nm-applet to add App Indicator support, which exposes the applet’s menu structure as D-Bus objects to allow the desktop environment to draw the menu just like it wants.  We enhanced the GTK3 support in libdbusmenu-gtk to handle nm-applet’s icons and then added an indicator mode to nm-applet based off Ubuntu’s work.  We’ve made packagers’ lives easier by building both modes into the applet simultaneously and allowing them to be switched at runtime.

IP reconfiguration

Want to add a second IP address or change your DNS servers right away?  With NetworkManager 1.2 you can now change the IP configuration of a device through the D-Bus interface or nmcli without triggering a reconnect.  This lets the network UIs like KDE or GNOME control-center apply changes you make to network configuration immediately without interrupting your network connection.  That might take a cycle  or two to show up in your favorite desktop environment, but the basis is there.

802.1x/WPA Enterprise authentication

An oft-requested feature was the ability to use certificate domain suffix checking to validate an authentication server.  While NetworkManager has supported certificate subject checking for years, this has limitations and isn’t as secure as domain suffix checking.  Both these options help prevent man-in-the-middle attacks where a rogue access point could masquerade as your normal secure network.  802.1x authentication is still too complicated, and we hope to greatly simplify it in upcoming releases.

Interface stacking

While NM has always been architected to allow bridges-on-bonds-on-VLANs, there were some internal issues that prevented these more complicated configurations from working.  We’ve fixed those bugs, so now layer-cake network setups work in a flash!  Hopefully somebody will come up with a fancy drag-n-drop UI based off Minecraft or CandyCrush with arbitrary interface trees.  Maybe it’ll even have trophies when you finally get a Level 48 active-backup bond.

Old Stable Series

Now that 1.2 is out, the 1.0 series is in maintenance mode.  We’ll fix bugs and any security issues that come up, but typically won’t add new features.  Backporting from 1.2 to 1.0 will be even more difficult due to the removal of dbus-glib, a major feature of the 1.2 release.  If you’re on 1.0, 0.9.10, or (gasp!) 0.9.8 I’d urge you to upgrade, and I think you’ll like what you see!

Validating changes to KMS drivers with IGT

New DRM drivers are being added in almost every new kernel release, and because the mode setting API is so rich and complex, bugs do slip in that translate into differences in behaviour between drivers.

There have been previous attempts at writing test suites for validating changes and preventing regressions, but they have typically happened downstream, focused on the specific needs of specific products, and were limited to one or at most a few different hardware platforms.

Writing these tests from scratch would have been an enormous amount of work, and gathering previous efforts and joining them wouldn't have been worth much because they were written using different test frameworks and in different programming languages. Also, there would be great overlap on the basic tests, and little would remain of the trickier stuff.

Of the existing test suites, the one with most coverage is intel-gpu-tools, used by the Intel graphics team. Though a big part is specific to the i915 driver, what uses the generic APIs is pretty much driver-independent and can be made to work with the other drivers without much effort. Also, Broadcom's Eric Anholt has already started adding tests for IOCTLs specific to the VideoCore-IV driver.

Collabora's Micah Fedke and Daniel Stone had added a facility for selecting DRM device files other than i915's and I improved the abstraction for creating buffers so it works for drivers without GEM buffers. Next I removed a bunch of superfluous dependencies on i915-only stuff and got a useful subset of tests to run on a Radxa Rock2 board (with the Rockchip 3288 SoC). Around half of these patches have been merged already and the other half are awaiting review. Meanwhile, Collabora's Robert Foss is running the ported tests on a Raspberry Pi 2 and has started sending patches to account for its peculiarities.

The next two big chunks of work are abstracting CRC checksums of frames (on drivers other than i915 this could be done with Google's Chamelium or with a board similar to Numato Opsis), and the buffer management API from libdrm that is currently i915-only (bufmgr). Something that will have to be dealt with in the future is abstracting the submittal of specific loads on the GPU as that's currently very much driver-specific.

Additionally, I will be scheduling jobs in our LAVA instance to run these tests on the boards we have in there.

Thanks to Google for sponsoring my time, to the Intel OTC folks for their support and reviews, and to Collabora for sponsoring Robert's, Micah's and Daniel's time.

April 20, 2016

NetworkManager 1.2 is here!

The NetworkManager team just released NetworkManager 1.2, and it is the biggest update in over a year. With almost 3500 commits since the previous major release (1.0), this release  delivers many new key features:

  • Fewer dependencies
  • Improved Wi-Fi and IPv6 privacy
  • Wider support for software devices
  • Improved command line tool
  • Better documentation
  • Support for multiple concurrent VPN sessions

Let’s have a closer look!

Improved privacy

We take everyone’s privacy very seriously. That is why we’re among the first adopters of RFC 7217, which addresses the problem of tracking a host that moves between IPv6 networks. Users can read more about this in a separate article.

The identity of a mobile host can also leak via Wi-Fi hardware addresses. A common way to solve this is to use random addresses when scanning for available access points, which is what NetworkManager now does (with wpa_supplicant 2.4 or newer). The actual hardware address is used only after the device is associated to an access point.

For further privacy, users can enable Wi-Fi hardware address randomization while connected to untrusted access points, though this is not the default behavior as it may cause issues with access control policies and captive portals.

Better Wi-Fi

In addition to Wi-Fi privacy improvements, Wi-Fi scanning is much smarter and more responsive. The access point list is now maintained by wpa_supplicant and doesn’t grow insanely large when the device is moving, and the currently associated access point is more accurately detected. Dan’s blog covers the change extensively.

Mobile users will appreciate that we’ve added the possibility to enable Wi-Fi power saving globally or on a per-connection basis.

Support for software devices

NetworkManager already supported creation of bond, team and bridge devices. With version 1.2 users  can also manage tun, tap, macvlan, vxlan and IP tunnel devices.

Improved command-line experience

Our command line client is now friendlier and more flexible than ever before. It uses colors to match the status of a device or a connection and sorts the output for better clarity.

Users can specify arbitrary connection properties at creation time, without the need to create a connection first and edit it afterwards.

We also simplified the creation of master-slave relationships between devices, making it easy to enslave any kind of device to bridges, bonds or teams. Creating multi-level device stacks is now straightforward.

Use of VPN connections with nmcli is now a lot better too; see below.

We also ship very good command-completion rules for Bash and excellent manuals with examples.

Slimming down

With NetworkManager 1.0 we’ve split some hardware support into loadable modules. This makes sense on server or minimal installations — containers, for example, don’t need Wi-Fi support, and servers don’t need Bluetooth. For NetworkManager 1.2 we’ve cut down on external libraries.

The use of dbus-glib has been replaced with GIO’s native D-Bus support, and libnl-route-3 is no longer used. The dependency on avahi-autoipd has been dropped.

Native IPv4 link-local address configuration based on the systemd network library is used instead.

Users running NetworkManager from minimal images, such as in small systems or containers, are going to benefit from this release too: NetworkManager runs just fine in LXC containers or even Docker. For further details, please take a look at the ready-made Docker images with NetworkManager.

Finally, we no longer manage the hostname ourselves on systemd-based systems — if anyone uses our now-deprecated API for hostname management, we just forward the request to systemd-hostnamed, which is a lot better at the job.

More flexible VPN support

The VPN support has been improved considerably too. Before NetworkManager 1.2 users could only run one instance of a particular VPN plugin that would service exactly one connection. This limitation is now gone.

It is now also possible to connect to a VPN from the command line using nmcli. If the VPN needs a password, nmcli will prompt for it when the user passes the --ask option.

Finally, users can now import and export the VPN connection settings of most types of VPNs in the VPN’s native format from the command line, using the nmcli connection export and nmcli connection import commands.

…and a lot more

NetworkManager gained a lot more than could be reasonably described here. There’s support for configuring the Wake-on-LAN capability of Ethernet hardware, a LLDP listener, better resolv.conf management and more. Take a look at our NEWS file for details.

This release wouldn’t be possible without community contributions. Over 50 people contributed to the NetworkManager code base and a lot more contributed bug reports. Without them we’d have a hard time figuring out which parts of NetworkManager need our attention and care.

Thanks to Beniamino Galvani, Thomas Haller, Dan Williams, Francesco Giudici and Rashid Khan who contributed major parts of this article and corrected many mistakes.

Wed 2016/Apr/20

  • A Cycling Map

    Este post en español

    There are no good paper cycling maps for my region. There are 1:20,000 street maps for car navigation within the city, but they have absolutely no detail in the rural areas. There are 1:200,000 maps for long trips by car, but that's too coarse a scale for cycling.

    Ideally there would be high-quality printed maps at 1:50,000 scale (i.e. 1 km in the real world is 2 cm on the map), with enough detail and a few key features:

    • Contour lines. Xalapa is in the middle of the mountains, so it's useful to plan for (often ridiculously steep) uphills/downhills.

    • Where can I replenish my water/food? Convenience stores, roadside food stands.

    • What's the quality and surface of the roads? This region is full of rural tracks that go through coffee and sugarcane plantations. The most-transited tracks can be ridden with reasonable "street" tyres. Others require fatter tyres, or a lot of skill, or a mountain bike, as they have rocks and roots and lots of fist-sized debris.

    • Any interesting sights or places? It's nice to have a visual "prize" when you reach your destination, apart from the mountainous landscape itself. Any good viewpoints? Interesting ruins? Waterfalls?

    • As many references as possible. The rural roads tend to look all the same — coffee plants, bananas, sugarcane, dirt roads. Is there an especially big tree at the junction of two trails so you know when to turn? Is there one of the ubiquitous roadside shrines or crosses? Did I just see the high-voltage power lines overhead?

    Make the map yourself, damnit

    For a couple of years now, I have been mapping the rural roads around here in OpenStreetMap. This has been an interesting process.

    For example, this is the satellite view that gets shown in iD, the web editor for OpenStreetMap:

    Satellite view of rural area

    One can make out rural roads there between fields (here, between the blue river and the yellow highway). They are hard to see where there are many trees, and sometimes they just disappear in the foliage. When these roads are not visible, or not 100% unambiguous, in the satellite view, there's little else to do but go out and actually ride them while recording a GPS track with my phone.

    These are two typical rural roads here:

    Rural road between plantations / Rural road with view to the mountains

    Once I get back home, I'll load the GPS track in the OpenStreetMap editor, trace the roads, and add some things by inference (the road crosses a stream, so there must be a bridge) or by memory (oh, I remember that especially pretty stone bridge!). Behold, a bridge in an unpaved road:

    Bridge in the editor / Bridge in the real world

    It is also possible to print a map quickly, say, out of Field Papers, annotate it while riding, and later add the data to the map when on the computer. After you've fed the dog.

    Field papers in use

    Now, that's all about creating map data. Visualization (or rendering for printing) is another matter.

    Visualization

    Here are some interesting map renderers that work from OpenStreetMap data:

    OpenTopoMap

    OpenTopoMap. It has contour lines. It is high contrast! Paved roads are light-colored with black casing (cartography jargon for the outline), like on a traditional printed map; unpaved rural tracks are black. Rivers have a dark blue outline. Rivers have little arrows that indicate the flow direction (that means downhill!) — here, look for the little blue arrow where the river forks in two. The map shows things that are interesting in hiking/biking maps: fences, gates, viewpoints, wayside crosses, shelters. Wooded areas, or farmland and orchards, are shaded/patterned nicely. The map doesn't show convenience stores and the like.

    GPSies with Sigma Cycle layer

    GPSies with its Sigma Cycle layer. It has contour lines. It tells you the mountain biking difficulty of each trail, which is a nice touch. It doesn't include things like convenience stores unless you go into much higher zoom levels. It is a low-contrast map as is common for on-screen viewing — when printed, this just makes a washed-out mess.

    Cycle.Travel

    Cycle.Travel. The map is very pretty onscreen, not suitable for printing, but the bicycle routing is incredibly good. It gives preference to small, quiet roads instead of motorways. It looks for routes where slopes are not ridiculous. It gives you elevation profiles for routes... if you are in the first world. That part doesn't work in Mexico. Hopefully that will change — worldwide elevation data is available, but there are some epic computations that need to happen before routing works on a continent-level scale (see the end of that blog post).

    Why don't you take your phone with maps on the bike?

    I do this all the time, and the following gets tedious:

    1. Stop the bike.
    2. Take out the phone from my pocket.
    3. Unlock the phone. Remove gloves beforehand if it's cold.
    4. Wait for maps app to wake up.
    5. Wipe sweat from phone. Wait until moisture evaporates so touch screen works again.
    6. Be ever mindful of the battery, since the GPS chews through it.
    7. Be ever mindful of my credit, since 3G data chews through it.
    8. Etc.

    I *love* having map data on my phone, and I've gone through a few applications that can save map data without an internet connection.

    City Maps 2 Go is nice. It has become more complex over the last few versions. Maps for Mexico don't seem to be updated frequently at all, which is frustrating since I add a lot of data to the base OpenStreetMap myself and can't see it in the app. On the plus side, it uses vector maps.

    MotionX GPS is pretty great. It tries extra-hard not to stop recording when you are creating a GPS track (unlike, ahem, Strava). It lets you save offline maps. It only downloads raster maps from OpenStreetMap and OpenCycleMap — the former is nominally good; the latter is getting very dated these days.

    Maps.Me is very nice! It has offline, vector maps. Maps seem to be updated reasonably frequently. It has routing.

    Go Map!! is a full editor for OpenStreetMap. It can more or less store offline maps. I use it all the time to add little details to the map while out riding. This is a fantastic app.

    Those apps are fine for trips of a few hours (i.e. while the phone's battery lasts), and not good for a full-day trip. I've started carrying an external battery, but that's cumbersome and heavy.

    So, I want a printed map. Since time immemorial there has been hardware to attach printed maps to a bike's handlebar, or even a convenient handlebar bag with a map sleeve on it.

    Render the map yourself, damnit

    The easiest thing would be to download a section of the map from OpenTopoMap, at a zoom level that is useful, and print it. This works in a pinch, but has several problems.

    Maps rendered from OpenStreetMap are generally designed for web consumption, or for moderately high-resolution mobile screens. Both are far from the size and resolution of a good printed map. A laptop or desktop has a reasonably-sized screen, but is low resolution: even a 21" 4K display is only slightly above 200 DPI. A phone is denser, at something between 300 and 400 DPI, but it is a tiny screen... compared to a nice, map-sized sheet of paper — easily 50x50 cm at 1200 DPI.

    ... and you can fold a map into the size of a phablet, and it's still higher-rez and lighter and doesn't eat batteries and OMG I'm a retrogrouch, ain't I.

    Also, web maps are zoomable, while paper maps are at a fixed scale. 1:50,000 works well for a few hours' worth of cycling — in this mountainous region, it's too tiring for me to go much further than what fits in such a map.

    So, my line of thinking was something like:

    1. How big is the sheet of paper for my map? Depends on the printer.

    2. What printed resolution will it have? Depends on the printer.

    3. What map scale do I want? 1:50,000

    4. What level of detail do I want? At zoom=15 there is a nice level of detail; at z=16 it is even clearer. However, it is not until z=17 that very small things like convenience stores start appearing... at least for "normal" OpenStreetMap renderers.

    Zoom levels?

    Web maps are funny. OpenStreetMap normally gets rendered with square tiles; each tile is 256x256 pixels. At zoom=0, the whole world fits in a single tile.

    Whole        world, single tile, zoom=0

    The URL for that (generated) image is http://opentopomap.org/0/0/0.png.

    If we go in one zoom level, to zoom=1, that uber-tile gets divided into 2x2 sub-tiles. Look at the URLs, which end in zoom/x/y.png:

    1/0/0
    http://opentopomap.org/1/0/0.png

    1/1/0
    http://opentopomap.org/1/1/0.png

    1/0/1
    http://opentopomap.org/1/0/1.png

    1/1/1
    http://opentopomap.org/1/1/1.png

    Let's go in one level, to zoom=2, and just focus on the four sub-sub-tiles for the top-left tile above (the one with North America and Central America):

    2/0/0
    http://opentopomap.org/2/0/0.png

    2/1/0
    http://opentopomap.org/2/1/0.png

    2/0/1
    http://opentopomap.org/2/0/1.png

    2/1/1
    http://opentopomap.org/2/1/1.png
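    In case you're curious, going from geographic coordinates to those zoom/x/y tile indices is just arithmetic. Here is a small Python sketch of the standard "slippy map" formula that OpenStreetMap-style tile servers use (the function name is mine, for illustration only):

    import math

    def latlon_to_tile(lat_deg, lon_deg, zoom):
        # Standard slippy-map formula: WGS84 coordinates -> (x, y) tile indices.
        lat_rad = math.radians(lat_deg)
        n = 2 ** zoom  # number of tiles along each side of the world at this zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
        return (x, y)

    # A point in central North America at zoom=2 lands in tile 2/0/1, as shown above:
    print(latlon_to_tile(40.0, -100.0, 2))   # -> (0, 1)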

    So the question generally is, what zoom level do I want, for the level of detail I want in a particular map scale, considering the printed resolution of the printer I'll use?

    For reference:

    After some playing around with numbers, I came up with a related formula. What map scale will I get, given a printed resolution and a zoom level?

    (defun get-map-scale (dpi tile-size zoom latitude)
      (let* ((circumference-at-equator 40075016.686)
             (meridian-length (* circumference-at-equator
                                 (cos (degrees-to-radians latitude))))
             (tiles-around-the-earth (exp (* (log 2) zoom)))
             (pixels-around-the-earth (* tiles-around-the-earth tile-size))

             (meters-per-pixel (/ meridian-length pixels-around-the-earth))

             (meters-in-inch-of-pixels (* meters-per-pixel dpi))
             (meters-in-cm-of-pixels (/ meters-in-inch-of-pixels 2.54)))
        (* meters-in-cm-of-pixels 100)))

    (get-map-scale 600      ; dpi
                   256      ; tile-size
                   16       ; zoom
                   19.533)  ; latitude of my town
    53177.66240054532 ; pretty close to 1:50,000

    All right: zoom=16 has a useful level of detail, and it gives me a printed map scale close to 1:50,000. I can probably take the tile data and downsample it a bit to really get the scale I want (from 53177 to 50000).

    Why a tile-size argument (in pixels)? Aren't tiles always 256 pixels square? Read on.

    Print ≠ display

    A 1-pixel outline ("hairline") is nice and visible onscreen, but on a 600 DPI or 1200 DPI printout it's pretty hard to see, especially if it is against a background of contour lines, crop markers, and assorted cartographic garbage.

    A 16x16-pixel icon that shows the location of a convenience store, or a viewpoint, or some other marker, is perfectly visible on a screen. However, it is just a speck on paper.

    And text... 10-pixel text is probably readable even on a high-resolution phone, but definitely not on paper at printed resolutions.

    If I just take OpenTopoMap and print it, I get tiny text, lines and outlines that are way too thin, and markers that are practically invisible. I need something that lets me tweak the thickness of lines and outlines, the size of markers and icons, and the size and position of text labels, so that printing the results will give me a legible map.
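    To get a feel for the numbers, here is a back-of-the-envelope sketch (mine, assuming a ~96 DPI screen as the design target) of how much bigger things need to be on paper to keep the same physical size:

    def print_size_px(screen_px, screen_dpi=96, print_dpi=1200):
        # Pixels needed at print resolution to match the physical size
        # of screen_px pixels on a screen_dpi display.
        return screen_px * print_dpi / screen_dpi

    print(print_size_px(1))    # 1 px hairline -> 12.5 px at 1200 DPI
    print(print_size_px(16))   # 16 px icon    -> 200 px at 1200 DPI
    print(print_size_px(10))   # 10 px text    -> 125 px at 1200 DPI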

    Look at these maps and zoom in. They are designed for printing. They are full of detail, but on a screen the text looks way too big. If printed, they would be pretty nice.

    The default openstreetmap.org uses Mapnik as a renderer, which in turn uses a toolchain to produce stylesheets that determine how the map gets rendered. Stylesheets say stuff like, "a motorway gets rendered in red, 20 pixels thick, with a 4-pixel black outline, and with highway markers such and such pixels apart, using this icon", or "graveyards are rendered as solid polygons, using a green background with this repeating pattern of little crosses at 40% opacity". For a zoomable map, that whole process needs to be done at the different zoom levels (since the thicknesses and sizes change, and just linearly scaling things looks terrible). It's a pain in the ass to define a stylesheet — or rather, it's meticulous work to be done in an obscure styling language.

    Recently there has been an explosion of map renderers that work from OpenStreetMap data. I have been using Mapbox Studio, which has the big feature of not requiring you to learn a styling language. Studio is a web app that lets you define map layers and a style for each layer: "the streets layer comes from residential roads; render that as white lines with a black outline". It lets you use specific values for different zoom levels, with an interesting user interface that would be so much better without all the focus issues of a web browser.

    Screenshot of Mapbox Studio

    I've been learning how to use this beast — initially there's an airplane-cockpit aspect to it. Things went much easier once I understood the following:

    The main OpenStreetMap database is an enormous bag of points, lines, and "relations". Each of those may have a number of key/value pairs. For example, a point may have "shop=bakery" and "name=Bready McBreadface", while a street may have "highway=residential" and "name=Baker Street".

    A very, uh, interesting toolchain slices that data and puts it into vector tiles. A vector tile is just a square which contains layers of drawing-like instructions. For example, the "streets" layer has a bunch of "moveto lineto lineto lineto". However, the tiles don't actually contain styling information. You get the line data, but not the colors or the thicknesses.

    There are many providers of vector tiles and renderers. Mapzen supplies vector tiles and a nifty OpenGL-based renderer. Mapbox supplies vector tiles and a bunch of libraries for using them from mobile platforms. Each provider of vector tiles decides which map features to put into which map layers.

    Layers have two purposes: styling, and z-ordering. Styling is what you expect: the layer for residential roads gets rendered as lines with a certain color/thickness/outline. Z-ordering more or less depends on the purpose of your map. There's the background, based on landcover information (desert=yellow, forest=green, water=blue). Above that there are contour lines. Above those there are roads. Above those there are points of interest.

    In terms of styling, there are some tricks to achieve common visual styles. For example, each kind of road (motorway, secondary road, residential road) gets two layers: one for the casing (outline), and one for the line fill. This is to avoid complicated geometry at intersections: to have red lines with a black outline, you have a layer with black wide lines, and above it a layer with red narrow lines, both from the same data.

    Styling lines in map layers

    A vector tileset may not have all the data in the main OpenStreetMap database. For example, Mapbox creates and supplies a tileset called mapbox-streets-v7 (introduction, reference). It has streets, buildings, points of interest like shops, fences, etc. It does not have some things that I'm interested in, like high-voltage power lines and towers (they are good landmarks!), wayside shrines, and the extents of industrial areas.

    In theory I could create a tileset with the missing features I want, but I don't want to waste too much time with the scary toolchain. Instead, Mapbox lets one add custom data layers; in particular, they have a nice tutorial on extracting specific data from the map with the Overpass Turbo tool and adding that to your own map as a new layer. For example, with Overpass Turbo I can make a query for "find me all the power lines in this region" and export that as a GeoJSON blob. Later I can take that file, upload it to Mapbox Studio, and tell it how to style the high-voltage power lines and towers. It's sort of manual work, but maybe I can automate it with the magic of Makefiles and the Mapbox API.
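    As a sketch of what that automation could look like, the following Python snippet queries the public Overpass API directly for power lines in a bounding box. The bounding box values are made up for illustration, and note that Overpass returns its own JSON format rather than GeoJSON, so a conversion step (e.g. with osmtogeojson) would still be needed before uploading to Mapbox:

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical bounding box (south, west, north, east) around my area.
    bbox = "19.45,-97.00,19.60,-96.85"
    query = '[out:json];way["power"="line"](' + bbox + ');out geom;'

    url = "https://overpass-api.de/api/interpreter"
    data = urllib.parse.urlencode({"data": query}).encode()
    with urllib.request.urlopen(url, data=data) as response:
        result = json.loads(response.read().decode("utf-8"))

    print(len(result["elements"]), "power line ways found")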

    Oh, before I forget: Mapbox uses 512-pixel tiles. I don't know why; maybe it is to reduce the number of HTTP requests? In any case, that's why my little chunk of code above has a tile-size argument.

    So what does it look like?

    My map

    This is a work in progress. What is missing:

    • Styling suitable for printing. I've been tweaking the colors and line styles so that the map is high-contrast and legible enough. I have not figured out the right thicknesses, nor text sizes, for prints yet.

    • Adding data that I care about but that is not in mapbox-streets-v7: shrines, power lines, industrial areas, municipal boundaries, farms, gates, ruins, waterfalls... these are available in the main OpenStreetMap database, fortunately.

    • Add styling for things that are in the vector tiles, but don't have a visible-enough style by default. Crops could get icons like sugarcane or coffee; sports fields could get a little icon for football/baseball.

    • Figure out how to do pattern-like styling for line data. I want cliffs shown somewhat like this (a line with little triangles), but I don't know how to do that in Mapbox yet. I want little arrows to show the direction in which rivers flow.

    • Do a semi-exhaustive ride of all the rural roads in the area for which I'll generate the map, to ensure that I haven't missed useful landmarks. That's supposed to be the fun part, right?

    References

    The design of the Mapbox Outdoors style. For my own map, I started with this style as a base and then started to tweak it to make it high-contrast and have better colors for printing.

    Technical discussion of generating a printable city map — a bit old; uses TileMill and CartoCSS (the precursors to Mapbox Studio). Talks about dealing with SVG maps, large posters, overview pages.

    An ingenious vintage German cycle map, which manages to cram an elevation profile on each road (!).

    The great lost map scale argues that 1:100,000 is the best for long-distance, multi-day cyclists, to avoid carrying so many folded maps. Excellent map pr0n here (look at the Snowdonia map — those hand-drawn cliffs!). I'm just a half-a-day cycling dilettante, so for now 1:50,000 is good for me.

    How to make a bike map focuses on city-scale maps, and on whether roads are safe or not for commuters.

    Rendering the World — how tiling makes it possible to render little chunks of the world on demand.

    Introducing Tilemaker: vector tiles without the stack. Instead of dealing with Postgres bullshit and a toolchain, this is a single command-line utility (... with a hand-written configuration file) to slice OpenStreetMap data into layers which you define.

    My cycling map in Mapbox Studio.

Cockpit Ubuntu PPA

Cockpit is the modern Linux admin interface. There’s a new release every week. Here are the highlights from this week’s 0.103 release.

Upload each Release to an Ubuntu PPA

Each weekly release of Cockpit is now uploaded to an Ubuntu PPA. Here’s how to make use of it:

sudo add-apt-repository ppa:cockpit-project/cockpit
sudo apt-get update
sudo apt-get install cockpit

Kubernetes Connection Configuration

When a Kubernetes client wants to access the API of the cluster, it looks for a “kubeconfig” file to tell it how to find the cluster and how to authenticate when accessing the API. The usual location for this file is in the current user’s home directory at the ~/.kube/config file path. If that doesn’t exist, then usually the cluster isn’t available. This applies to both clients like the kubectl command as well as Cockpit’s cluster dashboard.

Cockpit can now prompt for this information, and build this file for you. If it doesn’t exist, then there’s a helpful “Troubleshoot” button to help get this configuration in place.

Remove jQuery Usage from cockpit.js API

As part of stabilizing the internals of Cockpit, we removed jQuery usage from the cockpit.js file. The JavaScript API itself hasn’t changed, but this change helps keep the API stable in the future.

Try it out

Cockpit 0.103 is available now:

Upgrading Fedora 23 to 24 using GNOME Software

I’ve spent the last couple of days fixing up all the upgrade bugs in GNOME Software and backporting them to gnome-3-20. The idea is that we backport gnome-software plus a couple of the deps into Fedora 23 so that we can offer a 100% GUI upgrade experience. It’s the first time we’ve officially transplanted an n+1 GNOME component into an older release (ignoring my unofficial Fedora 20 whole-desktop backport COPR), and so we’re carefully testing for regressions and new bugs.

If you do want to test upgrading from F23 to F24, first make sure you’ve backed up your system. Then, install and enable this COPR and update gnome-software. This should also install a new libhif, libappstream-glib, json-glib and PackageKit and a few other bits. If you’ve not done the update offline using [the old] GNOME Software, you’ll need to reboot at this stage as well.

Fire up the new gnome-software and look at the new UI. Actually, there’s not a lot new to see as we’ve left new features like the ODRS reviewing service and xdg-app as F24-only features, so it should be mostly the same as before but with better search results. Now go to the Updates page which will show any updates you have pending, and it will also download the list of possible distro upgrades to your home directory.

As we’re testing upgrading to a pre-release, we have to convince gnome-software that we’re living in the future. First, open ~/.cache/gnome-software/3.20/upgrades/fedora.json and search for f24. Carefully change the Under Development string to Active, then save the file. Log out, log back in, and then launch gnome-software again, or wait for the notification from the shell. If all has gone well you should see a banner telling you about the new upgrade. If you click Download, go and get a coffee and start baking a cake, as it’s going to take a long time to download all that new goodness. Once complete, just click Install, which prompts a reboot where the packages will be installed. For this step you’ll probably want to bake another cake. We’re not quite in an atomic instant-apply world yet, although I’ll be talking a lot more about that for Fedora 25.
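If editing the JSON by hand feels error-prone, here’s a tiny Python sketch that does the same string swap. It assumes the Under Development string only appears in the f24 entry, so double-check the file afterwards:

from pathlib import Path

# Path and strings as described above; this just mirrors the manual edit.
path = Path.home() / ".cache/gnome-software/3.20/upgrades/fedora.json"
text = path.read_text()
path.write_text(text.replace("Under Development", "Active"))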

With a bit of luck, after 30 minutes staring at a progressbar the computer should reboot itself into a fresh new Fedora 24 beta installation. Success!

Screenshot_Fedora23-Upgrade_2016-04-20_15:23:27

If you spot any problems or encounter any bugs, please let me know either in Bugzilla, by email or on IRC. I’ve not backported all the custom CSS for the upgrade banner just yet, but this should be working soon. Thanks!

April 19, 2016

How to Sysprof

So now that a new Sysprof release has shipped, let’s pick on an unsuspecting library to see what it is like to improve performance in a real-world scenario. Today we’ll pick on GtkSourceView. They shouldn’t feel bad though; GtkSourceView is an absolutely wonderful library and, like any piece of software, it can be improved.

GtkSourceView has a lovely helper program to test things out in the tests/ directory. If you are an app or library developer, please do this! It makes things much easier.

So let’s run ./test-widget in our jhbuild environment, and start up Sysprof. Often, you’ll want to see how your program affects the whole system. But for this test, I want to focus on test-widget, so we will limit our capture to samples in that process. Do this by turning off the Profile my entire system switch and then selecting your target process from the list.

1-setup-profiler

Next, click record. You might be prompted to authorize your user to access performance counters based on your system configuration and user permissions.

2-click-record

After your profiling session has started, switch back to the test application and exercise the crap out of it. In this case, I turned on some features in the test widget like line numbers (something I always have enabled in Gedit and Builder) and started scrolling like crazy. I did this until I had about 30,000 samples recorded. Sysprof will tell you how many callchains have been recorded in the upper right corner of the window.

Then press the Stop button. Depending on the size of your capture, it might take a couple of seconds, but the callgraph will then be generated. It has to crack open all of the linked libraries and extract symbol information from them, so it can take a second or two.

3-view-callgraph

Now the mysterious part. Start diving into the descendants tree following the most expensive cumulative times. We want to find something that looks “out of place”. Getting good at that takes practice. If your callchain gets too deep, just hit enter on the row and it will focus in on that item.

In the image below, you’ll see I jumped past main and various main loop junk until I got to gtk_main_do_event(). This is the crux of event dispatch in GTK+. If we keep diving down by the most expensive callchain, we get to a peculiar function, center_on(). It seems to be calling into gtk_text_view_get_iter_location() a bunch; I wonder why.

4-center_on

So let’s go find the code. It is clearly called by GtkSourceGutterRendererText, so that is where we will start.

In the code below, it looks like the text gutter renderer (what draws line numbers next to your code) needs to either place the text in the middle of the row, the top of the row (in the case of line wrapping), or the bottom of the row (also in case of line wrapping).

5-find-relevant-code

In Builder, shamefully, we don’t even allow line wrapping today. So clearly a shortcut can be taken. If wrapping is disabled, we know that we will always be centering our text to the entire height of the cell. So let’s cook up a quick patch to avoid the center_on() calls altogether.

5.5-patch-the-code

Now we build, and repeat our profiling session to compare the results. Originally the gutter_renderer_text_draw() was in about 33% of our collected callchains. Now, if you look below we are down to less than 20% of our collected callchains, and center_on() is nowhere to be seen!

6-compare-results

So the moral of the story is that in about half an hour, you can profile, learn something about a code-base, and make measurable improvements. So go ye forth and make the F in Free Software stand for Fast.

ZeMarmot and GIMP at GNOME.Asia!

Libre Graphics Meeting 2016 has barely ended, and we already had to say goodbye to London. But this is not over for us, since we are leaving directly for India for GNOME.Asia Summit 2016. We will be presenting both ZeMarmot, our animation film project made with Free Software under Libre Art licenses, and the software GIMP (in particular the work in progress, not current releases), as part of the team. See the » schedule « for accurate dates and times.

GNOME.Asia Summit 2016, April 21-24, Delhi, India

GNOME.Asia is hosted at Manav Rachna International University in Delhi, India, this year. If anyone is interested in meeting us, as well as other awesome projects around Free Software and in particular the GNOME desktop ecosystem, we’d be happy to see you there!

GNOME Foundation is sponsoring us to travel to the event. We are very grateful for this!

ZeMarmot sponsored by GNOME

P.S.: I will write a report of Libre Graphics Meeting in a few days. As you can imagine, we barely have the time to stop and breathe right now as we are taking the plane in a few hours!

April 18, 2016

Announcing systemd.conf 2016

Announcing systemd.conf 2016

We are happy to announce the 2016 installment of systemd.conf, the conference of the systemd project!

After our successful first conference in 2015, we’d like to repeat the event in 2016. The conference will take place from September 28th until October 1st, 2016 at betahaus in Berlin, Germany. The event is a few days before LinuxCon Europe, which is also located in Berlin this year. This year, the conference will consist of two days of presentations, a one-day hackfest and one day of hands-on training sessions.

The website is online now, please visit https://conf.systemd.io/.

Tickets at early-bird prices are available already. Purchase them at https://ti.to/systemdconf/systemdconf-2016.

The Call for Presentations will open soon, we are looking forward to your submissions! A separate announcement will be published as soon as the CfP is open.

systemd.conf 2016 is organized jointly by the systemd community and kinvolk.io.

We are looking for sponsors! We’ve got early commitments from some of last year’s sponsors: Collabora, Pengutronix & Red Hat. Please see the web site for details about how your company may become a sponsor, too.

If you have any questions, please contact us at info@systemd.io.

Fedora Workstation Phase 1 – Homestretch

When we set out to do the Fedora Workstation we had some clear ideas about where we wanted to take it, but we also realized there was a lot of cleaning up needed in our stack to make our vision viable. The biggest change we felt was needed to enable us was the move towards using application bundles as the primary shipping method for applications, as opposed to the fine-grained package universe that RPMs represent. That said, we also saw the many advantages the packages brought in terms of easing security updates and allowing people to fine-tune their system, so we didn’t want to throw the proverbial baby out with the bathwater. So we started investigating the various technologies out there, as we were of course not alone in thinking about these things. Unfortunately nothing clearly fit the bill of what we wanted, and trying to modify for instance Docker to be a good technology for running desktop applications turned out to be nonviable. So we tasked Alex Larsson with designing and creating what today is known as xdg-app. Our requirements list looked something like this (in random order):

a) Easy bundling of needed libraries
b) A runtime system to reduce the application sizes to something more manageable and ease providing security updates.
c) A system designed to be managed by a desktop session as opposed to managed by sysadmin style tools
d) A security model that would let us gradually move towards sandboxing applications and alleviate the downsides of library bundling
e) An ability to reliably offer online updates of applications
f) Reuse as much of the technology created by others as possible to lower maintenance overhead
g) Design it in a way that makes supporting the applications cross multiple distributions possible and easy
h) Provide a top notch developer story so that this becomes a big positive for application developers and not another pain point.

As we investigated what we needed, other requirements became obvious, like the need to migrate from X to Wayland in order to build a modern composited windowing system that renders using GL, instead of an aging one with a rendering interface that is for the most part no longer used, and to be able to provide the level of system security we wanted. There was also the need to design new systems like Pinos for handling video, add new functionality to PulseAudio for dealing with sandboxed applications, and create libinput to have great input handling in Wayland while also letting us share the input subsystem between X and Wayland. And of course we wanted our new packaging system tightly integrated into GNOME Software so that installing, updating and running these applications becomes smooth and transparent to the user.

This would be a big undertaking, and it turned out to be an even bigger effort than we initially thought, as there was a lot of swamp draining needed here, and I am really happy that we have a team capable of pulling these things off. For instance there are not really many other people in the Linux community besides Peter Hutterer who could have created libinput, and without libinput there is no way Wayland would be a viable alternative to X (and without libinput neither would Mir, which is a bit ironic for a system that was created because they couldn’t wait for Wayland :).

So going back to the title of this blog entry, I feel that we are now finally exiting what I think of as Phase 1 of our development roadmap, even if we never formally designated it as such. For instance we initially hoped to have Wayland feature complete in the Fedora 22 timeframe, but it has taken us extra time to get all the smaller details right, so instead what we consider the first feature-complete Wayland will be ready with Fedora Workstation 24. And if things go as we expect and hope, that should become our default starting from Fedora Workstation 25. The X Window session will be shipping and available for a long time yet, I am sure, but not defaulting to it will mark a clear line in the sand for where the development focus is going forward.

At the same time xdg-app has started to come together nicely over the last few months, with a lot of projects experimenting with it and bugs and issues being quickly resolved. We are also taking major steps towards bringing xdg-app into the mainstream, with Alex now working on making xdg-apps OCI compliant, basically meaning that xdg-apps conform to the Open Container Initiative requirements defined by Opencontainers.org. Our expectation is that the next xdg-app development release will include the needed bits to be OCI compliant. Another nice milestone for xdg-app was that it recently got added to Debian, meaning that xdg-apps should be more easily runnable in both Fedora and its downstreams and in Debian and its downstreams.

Another major piece of engineering that is coming to a close is moving major applications such as Firefox, LibreOffice and Eclipse to GTK3. This was needed both to get these applications able to run natively on Wayland, and to make them work nicely for HiDPI. This has also played into how GTK3 has positioned itself: as a toolkit dedicated to pushing the Linux desktop forward and helping it quickly adapt to changes in the technology landscape. This is why GTK3 is the toolkit that has been leading the charge on things like HiDPI support and Wayland support. At the same time we know some of the changes in GTK3 have been painful for application developers, especially the changes in how theming works, but with the bulk of the migration to using CSS for theming now complete, we expect that even for applications that use GTK3 in ‘weird ways’ like Firefox, LibreOffice and Eclipse, things should be stable.

Another piece of the puzzle we have wanted to improve is the general Linux hardware story. Since Red Hat joined Khronos last year, the Red Hat graphics team, with Dave Airlie and Adam Jackson leading the charge, has been able to participate in preparing the launch of Vulkan by doing review, and we even managed to sneak in a bit of code contribution by Adam Jackson, ensuring that there was a vendor-neutral Vulkan loader available so that we didn’t end up in a situation where every vendor had to provide their own.

We have also been contributing to the vendor-neutral OpenGL dispatcher. The dispatcher is basically a layer that routes an application’s OpenGL rendering to the correct implementation, so if you have a system with a discrete GPU you can for instance control which of your two GPUs handles a certain application or game. Adam Jackson has also been collaborating closely with NVidia on getting such a dispatch system complete for OpenGL, so that the age-old problem of the Mesa OpenGL library and the proprietary NVidia OpenGL library conflicting can finally be resolved. NVidia has of course handled the part in their driver and they were also the ones designing this, but Adam has been working on getting the Mesa parts completed. We think this will make the GPU story on Linux a lot nicer going forward. There are still a few missing pieces before we have the discrete graphics card scenario handled in a smooth way, but we are getting there quickly.

The other thing we have been working on in terms of hardware support, which is still ongoing, is improving the Red Hat certification process to cover more desktop hardware. You might ask what that has to do with Fedora Workstation, but it actually is quite an efficient way of improving the quality of Linux support for desktop hardware in general, as most of the major vendors submit some of their laptops and desktops to Red Hat for certification. So the more issues the Red Hat certification process can detect, the better Linux support on such hardware can become.

Another major effort where we have managed to cover a lot of our goals and targets is GNOME Software. Since the inception of Fedora Workstation we have taken that tool and added functionality like UEFI firmware updates, codec and font handling, GNOME Extensions handling, system upgrades, xdg-app handling, user reviews, improved application metadata, improved handling of 3rd party repositories and improved general performance with the move from yum to hawkey. And I think that the software store has become a crucial part of what you expect of a desktop these days, with things like the Google Play store, the Apple Store and the Microsoft store to some degree defining their respective products more than the heuristics of the shell of Android, iPhone, MacOS or Windows. And I take it as a clear recognition of the great work Richard Hughes has done with GNOME Software that this week there is a special GNOME Software hackfest in London with participants from Fedora/Red Hat, Ubuntu/Canonical, Codethink and Endless.

So I am very happy with where we are at, and I want to say thank you to all the long-term Fedora users who have been with us through the years, and also say thank you and welcome to all the new Fedora Workstation users who have seen all the cool stuff we have been doing and decided to join us over the last two years; seeing the strong growth in our userbase during this time has been a great source of joy for us and a verification that we are on the right track.

I am also very happy about how the culmination of these efforts will be on display with the upcoming Fedora Workstation 24 release! Of course it also means it is time for the Fedora Workstation Working group to start planning what Phase 2 of reaching the Fedora Workstation vision should be :)

libinput and graphics tablet pad support

When we released graphics tablet support in libinput earlier this year, only tablet tools were supported. So while you could use the pen normally, the buttons, rings and strips on the physical tablet itself (the "pad") weren't detected by libinput and did not work. I have now merged the patches for pad support into libinput.

The reason for the delay was simple: we wanted to get it right [1]. Pads have a couple of properties that tools don't have, and we always considered pads to be different from pens, so we initially focused on a more generic interface (the "buttonset" interface) to accommodate those. After some coding, we have now arrived at a tablet-pad-specific interface instead. This post is a high-level overview of the new tablet pad interface and how we intend it to be used.

The basic sign that a pad is present is when a device has the tablet pad capability. Unlike tools, pads don't have proximity events, they are always considered in proximity and it is up to the compositor to handle the focus accordingly. In most cases, this means tying it to the keyboard focus. Usually a pad is available as soon as a tablet is plugged in, but note that the Wacom ExpressKey Remote (EKR) is a separate, wireless device and may be connected after the physical pad. It is up to the compositor to link the EKR with the correct tablet (if there is more than one).

Pads have three sources of events: buttons, rings and strips. Rings and strips are touch-sensitive surfaces and provide absolute values - rings in degrees, strips in normalized [0.0, 1.0] coordinates. Similar to pointer axis sources we provide a source notification. If that source is "finger", then we send a terminating out-of-range event so that the caller can trigger things like kinetic scrolling.

Buttons on a pad are ... different. libinput usually re-uses the Linux kernel's include/input.h event codes [2] for buttons and keys. But for the pad we decided to use plain sequential button numbering, starting at index 0. So rather than a semantic code like BTN_LEFT, you'd simply get a button 0 event. The reasoning behind this is a caveat in the kernel evdev API: event codes have semantic meaning (e.g. BTN_LEFT) but buttons on a tablet pad don't have those meanings. There are some generic event ranges (e.g. BTN_0 through to BTN_9) and the Wacom tablets use those, but once you have more than 10 buttons you leak into other ranges. The ranges are simply too narrow, so we end up with seemingly different buttons even though all buttons are effectively the same. libinput's pad support undoes that split and combines the buttons into a simple sequential range, leaving any semantic mapping of buttons to the caller. Together with libwacom, which describes the location of the buttons, a caller can get a relatively good idea of what the layout looks like.

Mode switching is a commonly expected feature on tablets. One button is designated as the mode switch button and toggles all other buttons between the available modes. On the Intuos Pro series tablets, that button is usually the button inside the ring. Button mapping, and thus mode switching, is however a feature we leave up to the caller; if you're working on a compositor, you will have to implement mode switching there.

Other than that, pad support is relatively simple and straightforward and should not cause any big troubles.

[1] or at least less wrong than in the past
[2] They're actually linux/input-event-codes.h in recent kernels

One more attempt at SATA power management

Around a year ago I wrote some patches in an attempt to improve power management on Haswell and Broadwell systems by configuring Serial ATA power management appropriately. I got a couple of reports of them triggering SATA errors for some users, couldn't reproduce them myself and so didn't have a lot of confidence in them. Time passed.

I've been working on power management stuff again this week, so it seemed like a good opportunity to revisit these. I've made a few changes and pushed a couple of trees - one against master and one against 4.5.

First, these probably only have relevance to users of mobile Intel parts in the U or S range (/proc/cpuinfo will tell you - you're looking for a four-digit number that starts with 4 (Haswell), 5 (Broadwell) or 6 (Skylake) and ends with U or S), and won't do anything unless you have SATA drives (including PCI-based SATA). To test them, first disable anything like TLP that might alter your SATA link power management policy. Then check powertop - you should only be getting to PC3 at best. Build a kernel with these patches and boot it. /sys/class/scsi_host/*/link_power_management_policy should read "firmware". Check powertop and see whether you're getting into deeper PC states. Now run your system for a while and check the kernel log for any SATA errors that you didn't see before.

Let me know if you see SATA errors and are willing to help debug this, and leave a comment if you don't see any improvement in PC states.


Writing a plugin for GNOME To Do – Introduction

I’m starting a small series of posts describing a general how-to on writing plugins for GNOME To Do. The good news: GNOME To Do has a very small API and it’s very easy to build plugins.

Note: I’ll show examples in Python 3, since this might lower the barrier for contributors and it’s a language that I know (and LibPeas supports). If you don’t know what LibPeas is, click here.

A Brief Explanation

Before jumping into code, let’s talk about the internals. GNOME To Do allows developers to extend it in 3 ways: adding UI elements, adding new panels and implementing new data sources.

A data source is pretty much anything that can feed GNOME To Do with data. For example, Evolution-Data-Server is a data source (the default and omnipresent one). A panel is a view displayed by the main window. See below:

The extensible elements of GNOME To Do

Every plugin has a single entry point: GtdActivatable. Plugins must provide an implementation of this interface. I’ll write more about this interface below.

GtdActivatable

As explained, the entry point of the plugin is GtdActivatable, which will provide the data sources and UI widgets. The definition of this interface can be found here. The following methods should be implemented:

void       activate              (GtdActivatable *activatable);

void       deactivate            (GtdActivatable *activatable);

GList*     get_header_widgets    (GtdActivatable *activatable);

GtkWidget* get_preferences_panel (GtdActivatable *activatable);

GList*     get_panels            (GtdActivatable *activatable);

GList*     get_providers         (GtdActivatable *activatable);

A quick explanation of each one of them:

  • activate: called after the plugin is loaded and everything is set up.
  • deactivate: called after the plugin is unloaded and all the registered data sources and UI elements are removed.
  • get_header_widgets: retrieves a list of widgets, usually buttons, to be added to the headerbar. This can be the hook point for your functionality, for example.
  • get_preferences_panel: if your plugin needs further configuration, you can pass the preferences panel through this method.
  • get_panels: retrieves the list of panels of the plugin. A dedicated blog post will be written about it in the near future.
  • get_providers: retrieves the list of data sources your plugin may provide. GtdProvider is the interface a data source should implement. A dedicated blog post will be written about it in the near future. A minimal plugin skeleton implementing all of these methods is sketched below.
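To make this concrete, here is a minimal, do-nothing plugin skeleton in Python. This is only a sketch under a few assumptions: the Gtd typelib is installed and its namespace version is '1.0', the class is registered with LibPeas through the usual .plugin file, and the class name is made up for illustration. GtdActivatable may also require implementing its properties, so treat this as a starting point rather than a complete plugin:

import gi
gi.require_version('Gtd', '1.0')
from gi.repository import GObject, Gtd

# Hypothetical example class; LibPeas instantiates it when the plugin is loaded.
class ExamplePlugin(GObject.Object, Gtd.Activatable):

    def do_activate(self):
        # Called once the plugin is loaded and everything is set up.
        pass

    def do_deactivate(self):
        # Called when the plugin is unloaded; remove anything you registered.
        pass

    def do_get_header_widgets(self):
        return []      # no extra headerbar widgets

    def do_get_preferences_panel(self):
        return None    # no preferences UI

    def do_get_panels(self):
        return []      # no extra panels

    def do_get_providers(self):
        return []      # no extra data sources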

The next posts will be about the various ways to add functionalities to GNOME To Do. Also, the next posts will demonstrate plugins through examples of various kinds. Stay tuned!

April 17, 2016

Updates on Timezone extension

Hey, this is a quick post to show the improvements the Timezone extension for GNOME Shell has received since it was born a couple of weeks ago.

  • Support avatars from Gravatar.com and Libravatar.
  • Support GitHub profile. We fetch people’s name, city and avatar from there.
  • Ability to specify a location for the people.json file. We even support remote files (e.g.: http://example.com/people.json). This way a whole team can share this file and use a single copy of it!

Here’s a simple people.json file showing the new features:

[
  {
    "github": "jwendell",
    "tz": "America/Sao_Paulo"
  },
  {
    "name": "Niel",
    "gravatar": "niel@example.com",
    "city": "Cape Town",
    "tz": "Africa/Johannesburg"
  }
]

Screenshots:

Summary at the bottom. Clicking will open the preferences dialog

Preferences dialog

April 16, 2016

Reflections on Starting a Local FOSS Group

Last Wednesday was no less than the third time the local FOSS group in Aalborg met. Today I’m looking back at how it all started, so I thought I would share some thoughts that may help others who would like to spread free and open source software in their local area.

Create the first piece of basic infrastructure

…whether it means collecting e-mail addresses or creating a group on social media. In my case I resorted to creating a public group on Facebook called “Open Source Aalborg”.

Find someone who knows someone

You’ll need some way to get in touch with others who live in your area and are interested in this topic. In my case I happened to get in touch with my local IT union, PROSA, who helped arrange a free event called “IT X: Open Source” and reach out to many members locally (in particular students). Note that the extensive use of “Open Source” rather than “Free Software” was simply because the term is less ambiguous and more familiar (i.e. used more in the media) to people.

Reach out to them

IT X was a great springboard to do this. IT X was arranged as a talk show. The first talk would explain to the audience what open source is, since the audience might be familiar with the term at different levels or not at all. Secondly, we ran talks on how open source is useful. For me that meant giving a talk at IT X where I talked about how and why I spend my free time contributing to GNOME. The audience was primarily students and software developers, so I designed my talk to largely concern how open source can benefit your skills and experience with large-scale collaborative software development. At the end of my talk I promoted my local initiative “Open Source Aalborg”, and afterwards Hal9K also promoted their hackerspace, which is likewise located in Aalborg.

Follow up to maintain the interest

In the days after IT X, Open Source Aalborg expanded from 6-7 members to 40 members. I followed up by holding the first “Open Source Night” 14 days after the talk show. Looking back, I should probably have held the first event just 7 days after. When scheduling I tried to make it as convenient for people as possible. We would meet on a Wednesday at Hal9k from 5 o’clock (after work/university) to whenever people wanted. To make it even more convenient for attendees, we arranged pizza so attendees wouldn’t need to concern themselves with food either.

Things you can do on the first meetup

At the first meetup we ended up being around 10, mostly students. This is what we ended up doing:

  • Sightseeing in the local hackerspace.
  • Talked about each other’s individual interests and areas of expertise.
  • Discussed various news and upcoming conferences that we knew of.

For the following meetings I usually picked up what was previously discussed or coded on and used it to write a description that teases the possible topics we might discuss at the next meetup. It makes for a nice motivation I think, plus we keep each other up to date on how we are progressing.

Another fun thing we did: at the last meeting I arranged a video conference call with Johan Thelin from FOSS GBG. We talked with Johan about the history behind the Gothenburg FOSS group, how they run their meetups and about their upcoming conference foss-north. The video conference was definitely a success – we even talked about making a video conference meeting between two FOSS groups sometime. What I particularly like about doing this is that it gives a taste of the impression that this little local group is part of a huge worldwide community. This is a feeling which I think can really benefit the motivation among individuals in any local FOSS group out there.

Some other fun ideas for the future

  • Send and receive greetings with other FOSS groups.
  • Have video calls with members of GNOME or others experienced in open source; that could be insightful.
  • Arrange a follow-up talk event on open source in the fall where members can do lightning talks on the small projects they have worked on throughout the year.
  • Find local companies or initiatives related to open source and have them come around and present what they are all about.
  • Go on a trip to a FOSS conference together.

Are you in a local FOSS group? Trying to get one started? Let me know! I’d be more than happy to listen to your suggestions too.

My first xdg-app

A few days ago, I set out to get some experience with building an application as an xdg-app.  In this post, I’m collecting some of the lessons I learned.

Since I didn’t want to pick too easy a test case, I chose terminix, a promising terminal emulator for GNOME. Terminix uses GTK+ and vte, which means that most dependencies are already present in the GNOME runtime.

Terminix

However, terminix is written in D, and the GNOME sdk does not include D support.  So, the challenge here is to build a compiler and runtime for this language, and any other required language-specific utilities. The DMD D compiler is written in D (naturally), so some bootstrapping was required.

Build tools

xdg-app comes with low-level build support in the form of various xdg-app build commands. But you really want to use the newer xdg-app-builder tool. It is really nice.

xdg-app-builder downloads and builds the application and all its dependencies, according to a JSON manifest. That’s par for the course for modern build tools, of course. But xdg-app-builder also has smart caching: it keeps git checkouts of all sources (if they are in git), and only rebuilds them when they change. It also keeps the results of each module’s build in an ostree repository, so reusing a previous build is really fast.

All the caches are kept in .xdg-app-builder/ in the directory where you run the build. If you have many dependencies, this hidden directory can grow large, so you might want to keep an eye on it and clean it out every now and then (remember, it is just a cache).

You can take a look at the JSON file I came up with.

Build API

Similar to the GNOME Continuous build system, xdg-app-builder assumes that each module in your JSON supports the ‘build API’, which consists of configure & make & make install. The world is of course more diverse than that, and rolling your own build system is irresistible for some.

Here is a way to quickly add the required build api support to almost any module (I’ve stolen this setup from the pitivi xdg-app build scripts):

Place a foo-configure script next to your JSON recipe that looks like this (note that Makefile syntax requires tabs that were eaten when I pasted this content in here):

#!/bin/sh

cat <<EOF >Makefile
all:
        ...do whatever is needed to build foo

install:
        ...commands to install foo go here

EOF

In the JSON fragment for the foo module, you add this file as an extra source (we are taking advantage of the fact that xdg-app-builder allows multiple sources for a module):

"modules": [
    {
        "name": "foo",
        "sources": [
            {
                "type": "git",
                 "url": "http://foo.org/foo.git",
                 "branch": "master"
            },
            {
                "type": "file",
                "path": "foo-configure",
                "dest-filename": "configure"
            }
        ]
    }

I guess you could just as well have the Makefile as yet another source; this approach is following the traditional role of configure scripts to produce Makefiles.
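
If you do go that route, the Makefile would be shipped with the same mechanism, along these lines (you would presumably still need a minimal configure script, since the ‘build api’ insists on running one):

{
    "type": "file",
    "path": "foo-makefile",
    "dest-filename": "Makefile"
}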

Network access

As I mentioned already, the first step in my build was to build a D compiler written in D. Thankfully, the build script of the dmd compiler is prepared for handling this kind of bootstrapping. It does so by downloading a pre-built D compiler that is used to build the sources.

Downloading things during the build is not great for trusted and repeatable builds. And xdg-app’s build support is set up to produce such builds by running the build in a controlled, sandboxed environment, which doesn’t have network access.

So, in order to get the D compiler built, I had to weaken the sandboxing for this module and grant it network access. It took me a little while to find out that the build-args field in the build-options does this job:

"modules": [
    {
        "name": "dmd",
        "build-options":
            {
                "build-args": ["--share=network"]
            },
        ...

Shedding weight

After navigating around other hurdles, I eventually succeeded in having a build of my entire JSON recipe run through to the end. Yay! But I quickly discovered that the build directory was quite heavy: it came to over 200M, a bit much for a terminal.

xdg-app-builder creates the final build by combining the build results from all the modules in the JSON recipe. That means my Terminix build included a D compiler, static and shared libraries for the D runtime, build utilities, etc.

To fix this, I added a couple of cleanup commands to the JSON. These are run after all the modules have been built, and can remove things that are no longer needed.

"cleanup-commands": ["rm -rf /app/lib",
                     "rm -rf /app/src",
                      rm -rf /app/share/nautilus-python",
                      "rm /app/bin/dmd",
                      ...

Note that the paths start with /app, which is the prefix that xdg-app apps are put in (to avoid interference with /usr).

After these cleanups, my build weighed less than a megabyte, which is more acceptable.

Trying it out

The best way to distribute an xdg-app is via an OSTree repository. Since I don’t have a good place to put one, and Terminix is developed on github, I decided to turn my xdg-app into a bundle, using this command:

xdg-app build-bundle ~/xdg-app-repos/terminix \
                     terminix.x86_64.xdgapp \
                     com.gexperts.Terminix \
                     master

Since github has a concept of releases, I’ve just put the bundle there:

https://github.com/matthiasclasen/terminix/releases/tag/2016-04-15

Enjoy!

April 15, 2016

Modernizing AbiWord code

When you work on an 18-year-old code base like AbiWord, you encounter stuff from another age. That's the way it is in the lifecycle of software, where requirements and tooling evolve.

Nonetheless, when AbiWord started in 1998, it was meant as a cross-platform code base written in C++ that had to compile on both Windows and Linux. C++ compilers were not as standards-compliant as they are today, so a lot of things were excluded: no templates, and thus no standard C++ library (it was called the STL at the time). Over the years, things have evolved: Mac support was added, gcc 4 got released (with much better C++ support), and in 2003 we started using templates for the containers (not necessarily in that order, BTW). Still no standard library, though. That came later. I just flipped the switch to make C++11 mandatory; more on that later.

As I was looking for some bugs, I found that with all that hodgepodge of coding standards there really wasn't one, and this caused some serious ownership problems where we'd be using freed memory. Worse, this led to file corruption, where we would write garbage memory into files that are supposed to be valid XML. This is bad.

The core of the problem is the way we pass attributes / properties around. They are passed as a NULL-terminated array of pointers to strings. Even indices are keys, odd ones are string values. While keys are always considered static, values are not always. Sometimes they are taken out of a std::string or one of the custom string containers from the code base (more on that later), sometimes they are just strdup()'d and free()'d later (uh oh, memory leaks).

Maybe this is a good time to do a cleanup, modernize the code base and make sure we have safer code, rather than trying to figure out all the corner cases one by one. And shall I add that there are virtually no tests in AbiWord? So it is gonna be epic.

As I'm writing this I have 8 patches, a couple of them very big, amounting to the following stats (from git):

134 files changed, 5673 insertions(+), 7730 deletions(-)

These numbers just show how broad the changes are, and it seems to work. The bugs I was seeing with valgrind are gone, no more access to freed memory. That's a good start.
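
For reference, the checks I mean are plain valgrind memcheck runs, something like:

 valgrind --leak-check=full abiword test.abw

watching for "Invalid read" reports that touch freed memory.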

Some of the 2000+ deleted lines are redundant code that could have been refactored (there are still a few places I marked for that), but a lot of them have to do with what I'm fixing. Also, some changes are purely whitespace / indentation, usually around an actual change.

Now, instead of passing around const char ** pointers, we pass around a const PP_PropertyVector &, which is, currently, a typedef to std::vector<std::string>. To make things nicer, the main storage for these properties is now also a std::map<std::string, std::string> (possibly I will switch it to an unordered map), so that assignments are transparent to the std::string implementation. Before that it was one of the custom containers.
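
In rough shape, the types involved now look like this (only PP_PropertyVector is the name actually used in the patches; the map typedef name below is just illustrative):

 #include <map>
 #include <string>
 #include <vector>

 // Flat list of key/value pairs passed between functions:
 // { "key1", "value1", "key2", "value2", ... }
 typedef std::vector<std::string> PP_PropertyVector;

 // Main storage: a plain standard-library map from property name to value
 typedef std::map<std::string, std::string> PropertyMap;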

Patterns like this:

 const char *props[] = { NULL, NULL, NULL, NULL };
 int i = 0;
 std::string value = object->getValue();
 props[i++] = "key";
 char *s = strdup(value.c_str());
 props[i++] = s;
 thing->setProperties(props);
 free(s);

Turns to

 PP_PropertyVector props = {
   "key", object->getValue()
 };
 thing->setProperties(props);

Shorter, more readable, less error prone. This uses C++11 initializer lists, which explains some of the line removal.

Use C++ 11!

Something I can't recommend enough, if you have a C++ code base, is to switch to C++11. Amongst the new features, let me list the few that I find most important (a short illustrative snippet follows the list):

  • auto for automatic type deduction. It makes life easier when typing and also when changing code. I almost always use it when declaring an iterator from a container.
  • unique_ptr<> and shared_ptr<>: smart pointers inherited from Boost, but without the need for Boost. unique_ptr<> replaces the dreaded auto_ptr<>, which is now deprecated.
  • unordered_map<> and unordered_set<>: hash-based map and set in the standard library.
  • lambda functions. No need to explain; they were one of the big missing features of C++ in the age of JavaScript popularity.
  • move semantics: transferring the ownership of an object. Not easy to use in C++, but clearly beneficial where you previously always ended up copying. This is a key part of what makes unique_ptr<> usable in a container, where auto_ptr<> wasn't. Move semantics are the default behaviour in Rust, while C++ copies.
  • initializer lists allow construction of objects by passing a list of initial values. I use this one a lot in this patch set for the property vectors.
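
Here is that snippet – not AbiWord code, just a minimal standalone illustration of each feature (it should build with any C++11 compiler):

 #include <iostream>
 #include <memory>
 #include <string>
 #include <unordered_map>
 #include <utility>
 #include <vector>

 int main()
 {
     // initializer list: build a vector of key/value strings in one go
     std::vector<std::string> props = { "font-family", "DejaVu Sans" };

     // unordered_map: hash-based associative container from the standard library
     std::unordered_map<std::string, std::string> storage;

     // auto: no need to spell out the iterator type
     for (auto it = props.begin(); it != props.end(); it += 2) {
         storage[*it] = *(it + 1);
     }

     // unique_ptr: single ownership, freed automatically
     // (make_unique is C++14, so plain new is used to stay within C++11)
     std::unique_ptr<std::string> value(new std::string(storage["font-family"]));

     // lambda: a small inline function object
     auto print = [](const std::string& s) { std::cout << s << std::endl; };
     print(*value);

     // move semantics: transfer ownership instead of copying
     std::unique_ptr<std::string> other = std::move(value);
     print(*other);

     return 0;
 }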

Don't implement your own containers.

Don't implement your own vectors, maps, sets, associative containers, strings or lists. Use the standard C++ library instead. It is portable, it works, and it likely does a better job than your own. I have another set of patches to properly remove UT_Vector, UT_String, etc. from the AbiWord codebase. Some have been removed progressively, but it is still ongoing.

Also write tests.

This is something that is missing in AbiWord and that I have tried to tackle a few times.

One more thing.

I could have mechanised these code changes to some extent, but then I wouldn't have reviewed all that code, in which I found issues that I addressed. Eyeball Mark II is still good for that.

The patch (in progress)

xdg-app 0.5.2 is out

In sync with the new stable runtime release, there is a new xdg-app release, version 0.5.2.

This release has some major changes in how we build language/locale extensions, which makes it important to update if you’re building an application or runtime.

Of course, there are also some new features, fixes and updates, so regular users should update too!
