GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

February 28, 2015

Infographic: How Mallard helps cross-stream documentation workflows


Tapped the build number on my phone 7 times. Now an Android developer! :D

Nothing has been really tough so far! Just some little tweaks I had to make to the Sunshine app's build.gradle files to get it to run, all of which are listed in the course documentation.

I enabled USB debugging on my Android device and, to check whether my computer had detected it, ran the following command:

$ adb devices
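
If the device is detected, it is listed with its serial number and state, something like this (the serial below is made up):

    List of devices attached
    0123456789ABCDEF        device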


February 27, 2015

GTK+: Aligning / Justification in text widgets

There are 3 GTK+ text widgets: GtkEntry, GtkTextView, and GtkLabel. I noticed recently that they are a little inconsistent in how they offer alignment of their text. This is highly unlikely to change, because that would be disruptive to existing code, but maybe this is helpful to people trying to use the API as it is now.

I’ve put some test_gtk_text_widgets_align.c example code in github. It shows two GtkLabels (single-line and multi-line) two GtkTextViews (single line and multi-line) and a GtkEntry (single-line only). I tried this with GTK+ 3.15.9. I’ve mostly ignored deprecated API, such as GtkAlignment, to simplify things.

Default

This is how it looks by default. It looks fine, though the GtkLabel, and only the GtkLabel, defaults to center alignment – not a very useful default.

[Screenshot: test_gtk_text_widgets_default]

Justification

Now, if you want to justify the text to the right, you might call gtk_label_set_justify() on the GtkLabels and call gtk_text_view_set_justification() on the GtkTextViews, though you’ll need to call gtk_text_view_set_wrap_mode() for that to take effect. Then you’d see something like the screenshot below.
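
In code, that looks something like this (a minimal sketch based on the description above, not the exact test program from the repository):

    GtkWidget *label = gtk_label_new ("Some\nmulti-line\ntext");
    gtk_label_set_justify (GTK_LABEL (label), GTK_JUSTIFY_RIGHT);

    GtkWidget *text_view = gtk_text_view_new ();
    gtk_text_view_set_justification (GTK_TEXT_VIEW (text_view),
                                     GTK_JUSTIFY_RIGHT);
    /* The justification only takes effect once a wrap mode is set: */
    gtk_text_view_set_wrap_mode (GTK_TEXT_VIEW (text_view), GTK_WRAP_WORD);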

[Screenshot: test_gtk_text_widgets_justify]

You’d notice:

  • The justification had no effect on the single-line GtkLabel (as per the gtk_label_set_justify() documentation) but it did have an effect on the single-line GtkTextView.
  • The text in the GtkLabel remains aligned in the center (the GtkLabel default) even though it’s justified within that central part.
  • We didn’t call any justification method on the GtkEntry, because it has none.

Justification and Alignment

To fix that center-alignment of the right-justified text in the GtkLabel, you could call gtk_widget_set_halign(). Then you’d see this:

[Screenshot: test_gtk_text_widgets_halign_and_justify]
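
The call itself is a one-liner (again a sketch, reusing the label pointer from the earlier snippet):

    /* Align the whole label widget to the end (the right, in
     * left-to-right locales). */
    gtk_widget_set_halign (label, GTK_ALIGN_END);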

But, strangely:

  • The center-aligned right-justified text in the multi-line GtkLabel is unchanged. It remains center-aligned. I have no fix for this. But see below about gtk_misc_set_alignment().
  • The single-line GtkLabel now appears to be justified/aligned to the right, which is nice. But see below.
  • The GtkTextViews have changed their word wrapping for some reason. I have no idea why.
  • The GtkTextViews and the GtkEntry now use only just enough space at the right for their entire widgets, not just for the text they contain. This shows us that gtk_widget_set_halign() affects the whole widget (as per the halign documentation), not just the text.

That last point about gtk_widget_set_halign() is more obvious if we set a background color. Then we see that that’s why the single-line GtkLabel now looks justified – it’s really just the whole widget that has been aligned to the right:

[Screenshot: test_gtk_text_widgets_halign_and_justify_with_background]

Obviously, I don’t want this effect with the GtkTextViews, so in a real app I would only use gtk_widget_set_halign() to achieve quasi-justification for a single-line GtkLabel, never for a multi-line GtkLabel, and never for a GtkTextView.

gtk_misc_set_alignment (deprecated)

Interestingly, calling the deprecated gtk_misc_set_alignment() function on GtkLabels does not have quite the same effect, even though the gtk_misc_set_alignment() deprecation documentation tells you to use GtkWidget’s halign property instead. However, you won’t notice unless you set a background color (ignore the GtkTextViews here – they don’t derive from GtkMisc). Also, calling (deprecated) gtk_misc_set_alignment() on the multi-line GtkLabel actually aligns the text to the right, instead of leaving it in the center as gtk_widget_set_halign() does.

[Screenshot: test_gtk_text_widgets_set_alignment_and_justify_with_background]

Maybe I’m missing something but it doesn’t seem like gtk_misc_set_alignment() has a true replacement. Still, I try to avoid using deprecated API because I cannot expect any help if I do.

Conclusion

So, if I wanted to justify my widget’s text, without changing the size of the widget itself, I’d decide what to do like this:

  • For a GtkLabel: Use both gtk_widget_set_halign() and gtk_label_set_justify().
  • For a GtkTextView: Use gtk_text_view_set_justification() with gtk_text_view_set_wrap_mode().
  • For a GtkEntry: You can’t.

I could be wrong, so do please tell me how.

February 26, 2015

Vote Karen Sandler for Red Hat's Women In Open Source Award

I know this decision is tough, as all the candidates in the list deserve an award. However, I hope that you'll choose to vote for my friend and colleague, Karen Sandler, for the 2015 Red Hat Women in Open Source Community Award. Admittedly, most of Karen's work has been for software freedom, not Open Source (i.e., her work has been community and charity-oriented, not for-profit oriented). However, giving her an “Open Source” award is a great way to spread the message of software freedom to the for-profit corporate Open Source world.

I realize that there are some amazingly good candidates, and I admit I'd be posting a blog post to endorse someone else (no, I won't say who :) if Karen wasn't on the ballot for the Community Award. So, I wouldn't say you backed the wrong candidate if you vote for someone else. And, I'm eminently biased, since Karen and I have worked together on Conservancy since its inception. But, if you can see your way through to it, I hope you'll give Karen your vote.

(BTW, I'm not endorsing a candidate in the Academic Award race. I am just not familiar enough with the work of the candidates involved to make an endorsement. I even abstained from voting in that race myself because I didn't want to make an uninformed vote.)

Developing Android Apps: Installing Android Studio on Ubuntu 14.04 LTS

I followed this link: http://forums.udacity.com/questions/100238155/android-studio-and-ubuntu-1404-lts#ud853 with some modifications, since the process described there seems outdated.

  1. I had OpenJDK pre-installed so I continued using it.
  2. Installed Android Studio using this set of commands:
           
              $ sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
              $ sudo apt-get update
              $ sudo apt-get install ubuntu-make
              # Note: ubuntu-make is the new name for ubuntu-developer-tools-center
              $ umake android
  3. Installed Android Debug Bridge (adb)

              $ sudo apt-get install android-tools-adb

              
  4. Installed Sqlite3

              $ sudo apt-get install sqlite3

Then I launched the installed Android Studio and followed the steps for installing the SDK and accompanying software... This took a while.


We're going to build the "Sunshine" app in this course, and I'll only be covering the highlights (which are meant to remove obstructions/induce motivation, and are to be taken along with the class videos, links to which I've provided in my previous post).

Sunshine is basically this repository: https://github.com/udacity/Sunshine, with branches named and organized chapter by chapter.

In case you didn’t notice…

This happened.

Time and Date Menu

It was a big job, and Florian deserves major congratulations for getting it ready (just!) in time.

Credit also needs to go to Jon McCann. If you’ve read my previous posts on the notifications redesign, you’ll know that we’ve gone through a number of different design concepts. In the end, it was Jon who was instrumental in helping us to identify the best approach. He has an amazing clarity of vision, which proved crucial in helping us to give the design a solid conceptual foundation.

The various concepts we have explored as a part of the notifications redesign make me confident that the approach we have settled on is the right one. It was only with the final design that everything clicked into place, and we were left with a design which effectively avoided the problems found in each of the others.

In the rest of this post, I’ll outline the design’s key features.

“What’s happening?”

One of the key goals for the notifications redesign was to provide an effective way to review previous notifications. In this way, notifications can be used to find out about what has happened in the past, as well as what is happening right now.

With this ability, notifications can be used in an exploratory manner, to get an overview of recent events. This is particularly useful as a way to inform decisions about what to do next. As such, notifications can be a tool for directing our own personal activity.

This recognition informs the new calendar and notifications design. It indicates that notifications have an affinity with other activity-related information – at the moment we ask “what’s happening?”, we are also often implicitly asking, “what should I be doing?”

The new calendar design aims to answer that question in the most effective way possible. It provides a single place where you can find out what has happened, what is happening, and what is about to happen. In concrete terms, this means showing notifications, event and birthday reminders, world clocks and weather information. Notifications take centre stage, while the other information is available for you to fortuitously stumble upon.

All this information is contained in a single view, which means that it is immediately accessible and gives an effective overview.

It’s a time machine

Since they tell us about what has happened in the past, and help us decide what to do in the future, notifications are closely related to time. In accordance with this framing, the new design makes notifications accessible through the time and day indicator in the top bar [1]. All the other information found in the new date and time menu also relates to time – event reminders, birthdays, world times, and the calendar.

The world times section of the date and time menu works through integration with the GNOME Clocks application.

Taking action

Notification Banner

Just like the current GNOME 3 notifications design, the new design incorporates pop-up banners that include notification actions. These enable you to quickly respond to a notification in a convenient manner. Built-in chat replies have been retained.

The position and appearance of banners has changed, though – they now appear towards the top of the screen. This gives them a close relationship with the date and time menu. It also helps to ensure that pop-up banners don’t get in the way, which was an issue when banners were at the bottom of the screen.

Keeping it simple

A major goal for the notifications redesign has been to simplify as much as possible. There’s quite a bit of complexity to notifications right now, including different notification types (such as resident and transient notifications), special cases (like music and hot plug notifications), notification properties (including images, action buttons), as well as stacking of notifications within different sources.

This complexity comes at a cost, increasing both the number of bugs and the maintenance burden. With the new design, we are clearing the decks and trying to stick to a single, simple API for virtually all notifications. This will lead to a much lower bug count, as well as easier maintenance.
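
As an illustration of how simple that single API can be for applications, here is a minimal sketch (my own example; the post doesn't prescribe a specific API) using GLib's GNotification from within a registered GApplication:

    /* 'app' is assumed to be a registered GApplication. */
    GNotification *notification = g_notification_new ("Download complete");
    g_notification_set_body (notification, "photos.zip has been saved");
    g_application_send_notification (G_APPLICATION (app),
                                     "download-done", notification);
    g_object_unref (notification);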

Plans for the future

The main thing that has still to land for 3.16 is status icon support. We’re still finalising our plans for this, but it will definitely happen, and I’m sure that it will be better than what was in 3.14.

Looking forward to 3.18, there are a number of elements to the design that didn’t make it for 3.16, including music controls (probably based on MPRIS), Weather integration and birthday reminders. These additional elements will make the new design increasingly useful and cohesive, so there is a lot to look forward to. If you want to work on any of these features, just get in touch.

A mockup of the date and time menu, featuring birthdays and weather information.


How you can help

The new notifications and calendar design landed late in the development cycle, which makes testing vital. If you want to help, the best thing you can do is use GNOME Shell master, and report any bugs that you find.

This is also a good moment to pay attention to the actual notifications themselves. There’s a lot of poor notification usage out there right now [2], as well as obvious examples of things that should have notifications and don’t (completed downloads, anyone?).

If you are responsible for a module or application, take a moment to read over the notifications guidelines, and take a second to check if there are any untapped opportunities for effective notification usage in your software.

The new notifications design will be released next month, as a part of GNOME 3.16. If you want more details about the design or future plans, check out the design page.

[1] This also has some practical advantages, such as communicating that notifications are provided by the system, clearly delineating each section of the top bar, and providing an effective click-target.

[2] Things to look out for: notifications not being dismissed when they are replaced by a new one, application notifications that don’t use the application’s icon, default actions that don’t raise the sender application, notification actions that duplicate the default action (“View” or “Open” buttons are a warning sign of this), notifications being shown by applications while you are using them.

Another fake flash story

I recently purchased a 64GB mini SD card to slot into my laptop and/or tablet, keeping media separate from my home directory, which is pretty full of kernel sources.

This Samsung card looked fast enough and, at 25€ including shipping, seemed good enough value.


Hmm, no mention of the SD card size?

The packaging looked rather bare, with no mention of the card's size. I opened up the packaging and looked over the card.

Made in Taiwan?

What made it weirder is that it says "made in Taiwan", rather than "Made in Korea" or "Made in China/PRC". Samsung apparently makes some cards in Taiwan, I've learnt, but I didn't know that before getting suspicious.

After modifying gnome-multiwriter's fake flash checker, I tested the card, and sure enough, it's an 8GB card, with its firmware modified to show up as 67GB (67GB!). The device (identified through the serial number) is apparently well-known in swindler realms.

Buyer beware, do not buy from "carte sd" on Amazon.fr, and always check for fake flash memory using F3 or h2testw, until udisks gets support for this.
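
With F3, for example, a check looks roughly like this (assuming the card is mounted at /media/sdcard; adjust the mount point to match your system):

    $ f3write /media/sdcard    # fill the card with test files
    $ f3read /media/sdcard     # read them back and report corruption

A fake card shows massive corruption as soon as you write past its real capacity.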

Amazon were prompt in reimbursing me, but the Comité national anti-contrefaçon and Samsung were completely uninterested in pursuing this further.

In short:

  • Test the storage hardware you receive
  • Don't buy hardware from Damien Racaud from Chaumont, the person behind the "carte sd" seller account

February 25, 2015

2015-02-25 Wednesday

  • Up early, quick mail chew; set off for Cambridge; into the office to see Tracie; read a great report. Train on to Edinburgh, worked on budgets. Extraordinarily frustrating experience with intermittent connectivity and Evolution on the train for some hours.
  • Enjoyed some of the talks at the Open Source Awards, and a great meal mid-stream.
  • Extraordinarily honoured to receive from Karen Sandler, on behalf of Collabora Productivity, the UK Open Source Awards 2015 - Best Organisation; gave a small acceptance spiel:
    • It is an honour: in a Cloud obsessed world to have a Commodity Client Software company represented. In a world obsessed by Tablets: to encourage Free Software that makes your PC/Mac keyboard really useful. Naturally, we do have a Tablet & phone version making good progress now (for the paper-haters).
    • LibreOffice 80+ million users: more than the UK's population. A brief correction - Collabora is only the 2nd largest contributor to the code - the 1st is volunteers in whose debt we all are. Everything we produce is Free Software.
    • Collabora - has a mission we believe in: To make Open Source rock (ono). We're betting our Productivity subsidiary on ODF and LibreOffice.
    • We're here to kill the old reasons not to deploy Free Software: with long term maintenance for three to five years; rich support options - backing our partner/resellers with a fast code-fix capability; and finally killing complaints - we can implement any feature, fix any bug, and integrate with any Line Of Business app for you.
    • In the productivity space - innovation is long overdue; Free Software can provide that. Thanks for coming & your support

The Best Thing I've Done for My Productivity Lately

But seriously. The best thing I’ve done for my productivity lately (besides blocking Facebook entirely on my work machine) is disabling my Facebook newsfeed (this is a Chrome extension, I’m sure there are various others for other browsers). Facebook is still a time-suck, but not the ENDLESS VORTEX OF DISTRACTION AND OOH A BUZZFEED ARTICLE that it once was. Pretty cool stuff!

February 24, 2015

ConsoleKit in GNOME 3.16 and beyond

Copy-pasting from https://wiki.gnome.org/Projects/ConsoleKit. We announced this as well on distributor-list, which we expect any distributor of GNOME to be subscribed to (please do so!). Discussion was held on desktop-devel-list.

ConsoleKit is a framework for registering and enumerating login and user sessions. It is currently deprecated and unmaintained, though the project was recently forked into a backward-compatible ConsoleKit2 project, which is getting limited maintenance.

Alternative options

The functionality of ConsoleKit has been superseded by logind, which is a systemd component. logind provides nicer APIs and better integration with the system. It supports multiple seats per machine, and has a mechanism for provisioning devices to unprivileged programs. Although systemd is not available for all systems, there have been a number of initiatives to fill the gap left by ConsoleKit (a small example of the logind API follows the list below), including:

  • LoginKit (a logind-compatible API on top of ConsoleKit2);
  • systemd-shim (limited support for some of the systemd APIs);
  • systembsd (a reimplementation of the systemd APIs, portable to BSD distributions).
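
For a flavour of those APIs, here is a minimal sketch (my own illustration, not from the wiki page) that uses logind's sd-login.h to enumerate the current login sessions; build with -lsystemd:

    #include <stdio.h>
    #include <stdlib.h>
    #include <systemd/sd-login.h>

    int main (void)
    {
        char **sessions = NULL;
        /* sd_get_sessions() returns the number of sessions and allocates
         * an array of session IDs that the caller must free. */
        int n = sd_get_sessions (&sessions);
        if (n < 0) {
            fprintf (stderr, "sd_get_sessions failed: %d\n", n);
            return 1;
        }
        for (int i = 0; i < n; i++) {
            printf ("session: %s\n", sessions[i]);
            free (sessions[i]);
        }
        free (sessions);
        return 0;
    }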

GNOME 3.16

Some GNOME components still support ConsoleKit in a best-effort, last-ditch-fallback sense, though the ConsoleKit codepaths aren’t as widely tested. Some components now require logind to function properly. Distributions that wish to ship GNOME 3.16 without logind need to patch ConsoleKit support back into those components.

GNOME 3.18 onwards

For GNOME 3.18 we expect anyone unable to use logind to make use of LoginKit, systemd-shim or systembsd. More modules will likely remove their ConsoleKit codepaths.

2015-02-24 Tuesday

  • Mail chew, built ESC stats; mail; lunch. Customer call. Reviewed the LibreOffice 4.4 feature set to write a LXF column, rather encouraged.
  • Booked train tickets to the great Open Source Awards tomorrow in Edinburgh.

Minglish - New input method for Marathi Language

Why new input method?

For the Marathi language we have the phonetic, Inscript and ITRANS input methods, but each of them was designed for a particular set of users. Minglish is nothing but a combination of Marathi + English.
Users who are already familiar with English often want the same key sequences in their own language, and Minglish solves this problem.

One of the problems with the Marathi language is typing complex or joint words and half characters, which turn into full characters when combined with other characters.
To solve this, the user needs to hold AltGr and press the required key to get half characters and complex characters.

More information on the key bindings can be found at https://github.com/gnuman/m17n-inglish-mims/blob/master/minglish/minglish.mim

Minglish has been accepted as a feature for Fedora 22; if you have any suggestions, please feel free to reach out.
If you want a similar input method for your own language, we would be happy to help you out.


February 23, 2015

Reflecting on Feedback

Reflection of David King in Kat's laptop screen

While at last month's Cambridge Hackfest, members of the GNOME Documentation Project team talked with Cosimo Cecchi of Endless Mobile about the user help in their product. As it turns out, they are shipping a modified version of Yelp, the GNOME help browser, along with modified versions of our own Mallard-based user help.

Knowing that they were actually shipping our help, we wanted to get a closer look at what they were doing, and wanted to get some feedback as to how things were working for them. Cosimo was glad to oblige us.

  • With regards to the help content, he noted that they include more basic help topics than what is in standard GNOME help. The users of their product are often very early, beginning users. They may not know what a computer mouse is, and may not know how to use it.

    We talked about it, but feel that this level of experience is different from that of a typical GNOME user. We're okay with not going into that level of detail in our user help.

  • In terms of modifying the appearance of our help, their technical writers found it difficult to modify the CSS that we use. Cosimo noted that the CSS for our help is not stored in a single file, nor even a single directory - it's partly embedded in Yelp's XSLT.

    While not always ideal, there are reasons for this. For example, if a person's visual impairment requires that they use GNOME's High Contrast GTK theme, Yelp will pick up the theme change and will use a corresponding color scheme when it renders the help for that user. Similarly, if a user sets their default system language to a Right-to-Left-based language (such as Arabic), Yelp will also pick up that change, and will display the help in the appropriate Right-to-Left manner automatically.

    These are both useful features, but it is good to get feedback on this. Creating a custom documentation "look" is important for a downstream distributor, so there's room for us to improve here.

  • The technical writing team at Endless Mobile customized the HTML output to feature broadly-grouped sections as html-buttons along the left-hand side of the help. I wish I had gotten a screenshot of this, because we were impressed with how they grouped and formatted the help. This may be an approach that we look to use in the future.

I talked with Cosimo about incorporating some of their updates into our help, and he was very receptive to it. While we'll mostly be focusing on content-related updates for our 3.16 release, we'll consider how we can improve our help based on their feedback in the future.


GNOME 3.16 sightings

As is my habit, I’ve taken some screenshots of new things that showed up in my smoketesting of the GNOME 3.15.90 release. Since we are entering feature freeze with the .90 release, these pictures give some impression of what’s in store for GNOME 3.16.

The GNOME shell theme has seen the first major refresh in a while. As part of this refresh, the theme has been rewritten in sass, and is now sharing much more code with the Adwaita GTK+ theme. Window decorations are now also sharing code between client-side and server-side.

[Screenshot: new shell theme]

A long-anticipated redesign of notifications has landed just in time for 3.15.90. This is a major change in the user interaction. Notifications now appear at the top of the screen. The message tray is gone; old notifications can now be found in the calendar popup.

[Screenshots: new notifications]

System integration has been improved, e.g. in the area of privacy. We now have a privacy page in gnome-initial-setup, which offers to let you opt out of geolocation and automatic bug reporting:

[Screenshot: privacy page]

Outside of the initial setup, the same settings are also available in the control-center privacy panel.

The Nautilus UI has received a lot of love. The ‘gear’ menu has been replaced by a popover, the list appearance is improved, and file deletion can now be undone from a notification.

[Screenshots: Nautilus improvements]

Other applications have received a fresh look as well, for example Evince and Eye of GNOME:

[Screenshots: Evince and Eye of GNOME]

There will also be a number of new applications; here are a few:

[Screenshot: new applications]

You can try GNOME 3.15.90 today, for example in Fedora 22. Or you can wait for GNOME 3.16, which will arrive on March 25.

Reliable BIOS updates in Fedora

Some years ago I bought myself a new laptop, deleted the Windows partition and installed Fedora on the system, only to later realize that the system had a bug that required a BIOS update to fix, and that the only tool for doing such updates was available for Windows only. And while some tools and methods have been available from a subset of vendors, BIOS updates on Linux have always been something of a hit-and-miss situation. Well, luckily it seems that we will finally get a proper solution to this problem.
Peter Jones, who is Red Hat’s representative to the UEFI working group and who is working on making sure we have everything needed to support this on Linux, approached me some time ago to let me know about the latest incoming update to the UEFI standard, which provides a mechanism for doing BIOS updates. This means that any system that supports UEFI 2.5 will, in theory, be one where we can initiate the BIOS update from Linux. Systems supporting this version of the UEFI specs are expected to become available over the course of this year, and if you are lucky your hardware vendor might even provide a BIOS update bringing UEFI 2.5 support to your existing hardware, although you would of course need to do that one BIOS update the old way.

So with Peter’s help we got hold of some prototype hardware from our friends at Intel which already has UEFI 2.5 support. This hardware is currently in the hands of Richard Hughes. Richard will be working on incorporating the use of this functionality into GNOME Software, so that you can do any needed BIOS updates through GNOME Software along with all your other software update needs.

Peter and Richard will as part of this be working to define a specification/guideline for hardware vendors for how they can make their BIOS updates available in a manner we can consume and automatically check for updates. We will try to align ourselves with the requirements from Microsoft in this area to allow the vendors to either use the exact same package for both Windows and Linux or at least only need small changes to it. We can hopefully get this specification up on freedesktop.org for wider consumption once it’s done.

I am also already speaking with a couple of hardware vendors to see if we can pilot this functionality with them, both to encourage them to support UEFI 2.5 as quickly as possible and to work with them to figure out the finer details of how to make the updates available in an easily consumable fashion.

Our hope here is that you will eventually be able to get almost any hardware and know that if you ever need a BIOS update, you can just fire up Software and it will tell you which BIOS updates, if any, are available for your hardware, and then let you download and install them. For people running Fedora servers we have had some initial discussions about doing BIOS updates through Cockpit, in addition of course to the command-line tools that Peter is writing for this.

I mentioned in an earlier blog post that one of our goals with the Fedora Workstation is to drain the swamp in terms of fixing the real issues that make using a Linux desktop challenging; well, this is another piece of that puzzle, and I am really glad we had Peter working with the UEFI standards group to ensure the final specification is useful for Linux users too.

Anyway, as soon as I have some data on concrete hardware that will support this, I will make sure to let you know.

GNOME Bugzilla: Your Bugs and your Product Overview.

[GNOME logo. Bugzilla logo by Dave Shea.]

  • The UNCONFIRMED bug status (which can be disabled per product) will be removed as it is mostly not used and confuses reporters. Tickets with UNCONFIRMED status will be merged into NEW status as announced on desktop-devel-list and at the top of every GNOME Bugzilla page.
    If you are a GNOME project maintainer and want to keep the UNCONFIRMED status for your GNOME Bugzilla product you can opt out until the end of February.
  • I encourage you to give the product pages and your user page a try!
    They have become more useful:
    [Screenshot: Browse Product page]

    • Target Milestones (if existing), Priorities (if actually used and not just kept at the “normal” default) and Severities are now listed first, to give a quick overview for those projects that might use Bugzilla to do some project planning (so far I have not seen that much, though). Static information like components or versions now comes after.
    • Versions are now sorted newest first. You should be more interested in bugs reported in recent versions.
    • The needless NEEDINFO-by-date queries that were previously shown have been moved to the side bar and merged into one query. The query results are sorted by last change.
    • In the side bar, “Bugs without a response” is a link again (though it is not exactly what it says, it’s rather a query for reports with only a single comment, hence also a mismatch between the displayed number and the search result number when you click it)
    • Moved “New Patches” into its own new “Patches” section under “New and unreviewed”. All patch related information for a project is now in a single place. Click it and review a patch!
    • The “GNOME-love bugs” query shows recently changed reports first – recently changed reports are more likely not outdated and are something new contributors could work on.
    • The “activity” link next to “Git repository” could give contributors a quick idea how maintained the project is (only works if the project is hosted in GNOME Git).
    [Screenshot: Describe User page]

    • Your patches are now listed right after “What bugs are assigned to me” and “On which bugs do I need to provide input” and above the long list of “Open issues” that you once upon a time filed. Patches now also display the summary of the corresponding bug report. (Still have to remove the useless patch ID.)
  • With the above changes I’d like the “Browse Product” and “Describe User” pages to give answers to “What is important?” and “What should I work on / review?”. I’d appreciate feedback on whether that’s considered helpful. For bugzilla.gnome.org itself this works pretty well, as you can see on its product page.
  • Not yet available: I want the frontpage to look like this (thanks to jimmac for the custom icons; ignore the wrong color of the top bar):
    [Mockup: potential future frontpage]


    This was already live for a few minutes, but I reverted the change as the server started cycling through cached older versions (no idea what happened). Plus, I have the feeling that the skin handling is still problematic (my fearless guinea pig Shaun still had Bugzilla’s “Dust” skin enabled in his user settings, so things looked pretty broken). I need to make sure that everybody uses the GNOME skin before doing further custom skin changes.
  • 18 months ago I blogged a series of Bugzilla-related posts. Though they focused on Wikimedia’s Bugzilla, there might be some helpful information or tweaks you are not aware of.

February 22, 2015

Willing to talk about Czech history? I'm willing to hear what you have to say!

Last Friday, when I was leaving Šelepka, I was approached by a small group of Czech people who, among other things, gave me a quite nice (though unrequested) lesson in Czech history. One of the guys, who seemed to be a history teacher, told us about Pod Kaštany, a concentration camp where over 600 people were executed or tortured to death ... and it was right in the place where we were. Although I have to admit that stopping people in the streets to talk about concentration camps may not be the best way to start a conversation, I was more than happy to learn a bit about what happened there.

An interesting thing is that there is a monument there that, even though I pass in front of it almost every day, I've never stopped to try to understand what is written on it. Quite sad, right? I do agree, and that's the reason I'm trying to start a "project" (if I can call it that ...). Every Wednesday I'll try to go to Alterna to meet Czech people willing to share stories about Brno (or the Czech Republic in general), people who are willing to sit, drink a beer and talk about the history of their own country. What can I offer these people? Not much, actually. I can be a partner for a beer, a good listener and, if agreed beforehand, I can share a bit of Brazilian history (I'm kinda into social movements, women's rights, gay rights, drug decriminalization ... and I can talk a bit about the interesting stuff we had [or not] in all of these topics over the last centuries in Brazil). :-)

So, interested? Drop me a message and we can arrange something!

I wrote some more apps for Ubuntu Phone

In a previous post I talked about the experiences of writing my first five applications for Ubuntu Phone. Since then, I've written three more.

As before, all these apps are GPL 3 licensed and available on Launchpad. What's new is that you can now browse them online, thanks to a great unofficial web app store made by Brian Douglass. This solves one of my previous gripes about not being able to find new applications.

One thing I have changed is that I've started to use the standard page header. This makes it easy to add more pages (e.g. help, high scores). I initially wasn't a fan of the header due to the amount of space it takes up, but for simple apps it works well and it makes them predictable to use. I also went back and updated Dotty to use this style.
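
For those who haven't tried it, the standard header is more or less what you get for free with a titled page; a minimal sketch (hypothetical, not taken from any of these apps) looks like this:

    import QtQuick 2.0
    import Ubuntu.Components 1.1

    MainView {
        width: units.gu(40)
        height: units.gu(60)

        Page {
            // The standard page header displays this title.
            title: i18n.tr("Dice Roller")
        }
    }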

Let me introduce...


Dice Roller: a simple utility to simulate dice rolling. Not much more to say. 431 lines of QML.


I was playing with uTorch and watching a spy movie and thinking "hey, wouldn't it be cool to flash the camera and send Morse code messages? I wonder if the phone responds fast enough to do that." Apparently it does. 584 lines of QML.


So I thought, now I have this nice dice code from Dice Roller, it would be cool to make a full dice game with it. I've played a fair bit of Yahtzee in my time and searching Wikipedia I found there was a similar public domain game called Yatzy. 999 lines of QML.

February 20, 2015

RC4 vs. BEAST: which is worse?

RFC 7465 has been published, and in a perfect world it would spell doom for the use of RC4 in TLS. But, spoiler alert, the theme of this blog is that there are tons of problems with TLS that your browser either cannot or willfully will not protect you against — major browser vendors love nothing more than sacrificing your security in the name of compatibility with lousy servers — so it’s too soon for optimism.

This guy who sounds like he knows what he’s talking about, and who I’ve blindly decided to trust, says that PCI-compliant sites must disable CBC-based ciphersuites so that they’re not vulnerable to the BEAST attack against TLS 1.0. But CBC is the only block cipher mode that provides a reasonable level of security in TLS 1.0, so these servers are limited to negotiating only stream ciphers. And RC4 is the only stream cipher in TLS, so that’s the only thing these poor servers are left with. But nobody is actually vulnerable to BEAST anymore — web browsers have been able to prevent the BEAST attack for several years — so this makes no sense.

So what is a PCI-compliant site? In theory, it’s any site that processes credit card data. For instance, check out the SSL Labs report for www.bankofamerica.com. (In case you’re not yet thoroughly convinced of the truth of the second sentence in this post, take note of the eight bold WEAK warnings and also the bold DANGER. Even major banks don’t care.) Scroll down to the handshake simulations and note how AES is only sometimes used with TLS 1.2, and RC4 is always picked with TLS 1.0. In practice, I’ve checked SSL Labs results for sites that do use AES with TLS 1.0 and do take credit card data, like www.amazon.com, so I’m not sure if guy-who-sounds-like-he-knows-what-he’s-talking-about has the full story; maybe audits come less frequently than I would expect.
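
If you want to poke at a server yourself rather than relying on SSL Labs, one quick check (hypothetical host; requires the openssl command-line tool) is to offer only RC4 ciphersuites and see whether the handshake succeeds:

    $ openssl s_client -connect example.com:443 -cipher RC4 < /dev/null

If the connection completes, the server was willing to negotiate RC4.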

Hopefully browser vendors will push forward and disable RC4 anyway, but that doesn’t seem sufficiently probable, and these poor sites are hardly going to disable RC4 if it means they will fail their next security audit. So what better way to spend a Friday afternoon than write a letter to NIST?

Hi,

The CVSS score for CVE-2011-3389 (BEAST) [1] relative to the score for CVE-2013-2566 [2] may discourage efforts to implement RFC 7465 [3], which prohibits use of RC4-based ciphersuites with TLS. Delays in the implementation of this RFC will harm the overall security of the TLS ecosystem.

The issue is described succinctly at [4]: PCI-compliant servers may not enable CBC-based ciphersuites because CVE-2011-3389 has a base score of 4.3, leaving RC4-based ciphersuites as the only possible options for the server to use with TLS 1.0. CVE-2013-2566, the RC4 vulnerability, has a lower CVSS score. However, CVE-2013-2566 is a much more serious issue in practice. CVE-2011-3389 has been long-since mitigated on the client side in major browsers using the 1/n-1 split technique [5], allowing CBC-based ciphersuites to be used safely. In contrast, no client-side mitigation for CVE-2013-2566 is available short of disabling RC4. Note also that a more serious attack against RC4 will be published next month [6].

In summary, a properly-configured TLS server *should not* attempt to mitigate CVE-2011-3389, as this discourages clients from mitigating CVE-2013-2566, and clients already mitigate CVE-2011-3389. Please reconsider the relative ratings for these vulnerabilities to allow PCI-compliant servers to re-enable CBC-based ciphersuites, so that browser vendors can more comfortably disable support for RC4 as required by RFC 7465 [4] [7] [8].

Thank you for your consideration,

Michael Catanzaro

[1] https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-3389
[2] https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-2566
[3] http://www.rfc-editor.org/rfc/rfc7465.txt
[4] https://code.google.com/p/chromium/issues/detail?id=375342#c17
[5] https://bugzilla.mozilla.org/show_bug.cgi?id=665814#c59
[6] https://www.blackhat.com/asia-15/briefings.html#bar-mitzva-attack-breaking-ssl-with-13-year-old-rc4-weakness
[7] https://bugzilla.mozilla.org/show_bug.cgi?id=999544
[8] https://bugs.webkit.org/show_bug.cgi?id=140014

Now, will this actually work? Will I even get a response? I have no clue. Let’s find out!

2014 in review

I haven’t blogged much in recent months, so this will be a pretty short and boring post, it’s meant as a pulse check anyway. All in all, 2014 has been a pretty intense year:

  • Started my own company (providing branding & management consulting services)
    • Got various branding, design, marketing and research contracts. Also, management consulting assignments.
    • Had to solve a multi-month stalemate with a Research Ethics committee. Twisting Freud’s words a little: I can heartily recommend the Ethics Committee to anyone.
    • Some clients needed me to do peripheral “IT work”, so I provided sizable infrastructure & data management improvements, saved mission-critical systems from certain death, rescued kittens, etc.
  • Joined the board of directors of the GNOME Foundation to help with a non-trivial set of tasks (fighting off Groupon trying to steamroll the GNOME trademark, biz dev, ED search, and lots of legal/financial/issue handling/tough decisions)
  • Attended GUADEC 2014
  • Went to defend a (simple) case in court (not GNOME :) and won. A big relief after months of angst.
  • Attended the GSoC Mentors Summit 10-years ultimate reunion limited collectors edition, to represent GNOME and Pitivi.
  • Moved to downtown Montréal in the middle of December.
    • Great for business!
    • Not great for the sleep cycle (living in a construction site means you get hammering and drilling starting at 6-7 AM). Also, got a bit of a shining moment when the fire alarm went off at -27°C and tons of water from the 8th floor started pouring down and out from the lifts on the floors below. Reminds me of what happened to Bronzemurder (it can be said that this is a metaphor for software development in general).
    • Had to split off a bunch of public utilities and personal infrastructure (including my offsite backup system and the ATSC+VoIP setup I made for 2-3 houses in the family – that blog post is in French but it seems to have gone viral, for some reason).
    • I can now happily host GNOME (and other FLOSS) contributors and friends when they’re in town!
  • Fixed the economy (nope, I’m just kidding on that one)

My accountants/tax folks are going to hate me.

With all that going on, I had to withdraw somewhat from hands-on involvement in Pitivi. Nonetheless, I’m hoping to collect information and write a blog post update soonish.

Here’s hoping for a great 2015! I will surely see y’all at LGM and GUADEC 2015.

February 19, 2015

It has been 0 days since the last significant security failure. It always will be.

So blah blah Superfish blah blah trivial MITM everything's broken.

Lenovo deserve criticism. The level of incompetence involved here is so staggering that it wouldn't be a gross injustice for the company to go under as a result[1]. But let's not pretend that this is some sort of isolated incident. As an industry, we don't care about user security. We will gladly ship products with known security failings and no plans to update them. We will produce devices that are locked down such that it's impossible for anybody else to fix our failures. We will hide behind vague denials, we will obfuscate the impact of flaws and we will deflect criticisms with announcements of new and shinier products that will make everything better.

It'd be wonderful to say that this is limited to the proprietary software industry. I would love to be able to argue that we respect users more in the free software world. But there are too many cases that demonstrate otherwise, even where we should have the opportunity to prove the benefits of open development. An obvious example is the smartphone market. Hardware vendors will frequently fail to provide timely security updates, and will cease to update devices entirely after a very short period of time. Fortunately there's a huge community of people willing to produce updated firmware. Your phone's manufacturer is never going to fix the latest OpenSSL flaw? As long as your phone can be unlocked, there's a reasonable chance that there's an updated version on the internet.

But this is let down by a kind of callous disregard for any deeper level of security. Almost every single third-party Android image is either unsigned or signed with the "test keys", a set of keys distributed with the Android source code. These keys are publicly available, and as such anybody can sign anything with them. If you configure your phone to allow you to install these images, anybody with physical access to your phone can replace your operating system. You've gained some level of security at the application level by giving up any real ability to trust your operating system.

This is symptomatic of our entire ecosystem. We're happy to tell people to disable security features in order to install third-party software. We're happy to tell people to download and build source code without providing any meaningful way to verify that it hasn't been tampered with. Install methods for popular utilities often still start with "curl | sudo bash". This isn't good enough.

We can laugh at proprietary vendors engaging in dreadful security practices. We can feel smug about giving users the tools to choose their own level of security. But until we're actually making it straightforward for users to choose freedom without giving up security, we're not providing something meaningfully better - we're just providing the same shit sandwich on different bread.

[1] I don't see any way that they will, but it wouldn't upset me


Building GTK 3 with MSVC 2013

As part of my work at NICE I had the opportunity to research a bit how to build the GTK stack using MSVC. The reason for this is that even though we are currently using the msys2 binaries and are quite happy with them, it becomes a pain to debug any issue you might have in GLib or GTK from MSVC.

The hexchat people took the instructions from Fan and made it easy to build GLib and GTK 2 using a PowerShell script. After learning and fighting PowerShell (which BTW is actually quite nice) I made a fork of hexchat‘s repository and added GTK 3 to the build system. It is actually quite impressive how you manage to build everything with this script in a Windows 7 VM in less than 6 minutes.

I am sharing this with you because I am pretty sure that those who have had to fight GTK on Windows didn’t have it very easy. Also I think there are a lot of things to improve here:

  • Make it a bit more like jhbuild, where you choose what you want to build without dealing with the dependencies if you do not really want to.
  • Stop getting custom sources and use the upstream ones. This is one of the things that I worked on, adding tar.xz support to the PowerShell script.
  • Try to get the setup closer to the upstream MSVC projects. Right now, for each project we keep a copy of the MSVC projects just to modify where we build and install the binaries. This is bad; we should just use upstream’s setup, which is good enough.
  • Remove the dependency on mozilla-build. I’d rather use msys2 instead.
  • Make it build with GObject introspection support.
  • Have something like jhbuild modules so we could support different versions of the sources.
  • Make it build from git: this would be great for continuous integration, but unfortunately at the moment this is not possible due to the files generated during the dist process.

The current solution we managed to get working is actually good enough for how we use GLib and GTK in our product, but if someone wants to improve this I think it could be really good for the future of GTK and GLib.


Wikimedia’s migration to Phabricator: FOSDEM talk.

Earlier this month, Quim Gil and I gave a talk at FOSDEM:

Wikimedia adopts Phabricator, deprecates seven infrastructure tools:
First hand experiences from a big free software project on a complex migration

The talk description and the presentation slides are available. At some point there will likely also be a video (probably under 2015 and H1308).

The talk tried to cover three aspects: the decision-making process in a large distributed project, the technical and planning aspects of a complex migration that nobody had tried before, and some of the functionality of the new tool (Phabricator) itself.

February 18, 2015

Wed 2015/Feb/18

  • Integer overflow in librsvg

    Another bug that showed up through fuzz-testing in librsvg was due to an overflow during integer multiplication.

    SVG supports using a convolution matrix for its pixel-based filters. Within the feConvolveMatrix element, one can use the order attribute to specify the size of the convolution matrix. This is usually a small value, like 3 or 5. But what did fuzz-testing generate?

    <feConvolveMatrix order="65536">

    That would be an evil, slow convolution matrix in itself, but in librsvg it caused trouble not because of its size, but because C sucks.

    The code had something like this:

    struct _RsvgFilterPrimitiveConvolveMatrix {
        ...
        double *KernelMatrix;
        ...
        gint orderx, ordery;
        ...
    };
    	      

    The values for the convolution matrix are stored in KernelMatrix, which is just a flattened rectangular array of orderx × ordery elements.

    The code tries to be careful in ensuring that the array with the convolution matrix is of the correct size. In the code below, filter->orderx and filter->ordery have both been set to the dimensions of the array, in this case, both 65536:

    guint listlen = 0;
    
    ...
    
    if ((value = rsvg_property_bag_lookup (atts, "kernelMatrix")))
        filter->KernelMatrix = rsvg_css_parse_number_list (value, &listlen);
    
    ...
    
    if ((gint) listlen != filter->orderx * filter->ordery)
        filter->orderx = filter->ordery = 0;
    	    

    Here, the code first parses the kernelMatrix number list and stores its length in listlen. Later, it compares listlen to orderx * ordery to see if the KernelMatrix array has the correct length. Both filter->orderx and filter->ordery are of type int. The code then iterates through the values in filter->KernelMatrix when doing the convolution, and doesn't touch anything if orderx or ordery are zero. Effectively, when those values are zero it means that the array is not to be touched at all — maybe because the SVG is invalid, as in this case.

    But in the bug, orderx and ordery are not being sanitized to zero; they remain at 65536, and the KernelMatrix gets accessed incorrectly as a result. Let's see what happens when you multiply 65536 by itself with ints.

    (gdb) p (int) 65536 * (int) 65536
    $1 = 0
    	    

    Well, of course — the result doesn't fit in 32-bit ints. Let's use 64-bit ints instead:

    (gdb) p (long long) 65536 * 65536
    $2 = 4294967296
    	    

    Which is what one expects.

    What is happening with C? We'll go back to the faulty code and get a disassembly (I recompiled this without optimizations so the code is easy):

    $ objdump --disassemble --source .libs/librsvg_2_la-rsvg-filter.o
    ...
        if ((gint) listlen != filter->orderx * filter->ordery)
        4018:       8b 45 cc                mov    -0x34(%rbp),%eax    
        401b:       89 c2                   mov    %eax,%edx           %edx = listlen
        401d:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4021:       8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx     %ecx = filter->orderx
        4027:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        402b:       8b 80 ac 00 00 00       mov    0xac(%rax),%eax     %eax = filter->ordery
        4031:       0f af c1                imul   %ecx,%eax
        4034:       39 c2                   cmp    %eax,%edx
        4036:       74 22                   je     405a <rsvg_filter_primitive_convolve_matrix_set_atts+0x4c6>
            filter->orderx = filter->ordery = 0;
        4038:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        403c:       c7 80 ac 00 00 00 00    movl   $0x0,0xac(%rax)
        4043:       00 00 00 
        4046:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        404a:       8b 90 ac 00 00 00       mov    0xac(%rax),%edx
        4050:       48 8b 45 d8             mov    -0x28(%rbp),%rax
        4054:       89 90 a8 00 00 00       mov    %edx,0xa8(%rax)
    	    

    The highlighted lines do the multiplication of filter->orderx * filter->ordery and the comparison against listlen. The imul operation overflows and gives us 0 as a result, which is of course wrong.

    Let's look at the overflow in slow motion. We'll set a breakpoint in the offending line, disassemble, and look at each instruction.

    Breakpoint 3, rsvg_filter_primitive_convolve_matrix_set_atts (self=0x69dc50, ctx=0x7b80d0, atts=0x83f980) at rsvg-filter.c:1276
    1276        if ((gint) listlen != filter->orderx * filter->ordery)
    (gdb) set disassemble-next-line 1
    (gdb) stepi
    
    ...
    
    (gdb) stepi
    0x00007ffff7baf055      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
    => 0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
       0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x10000  65536
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    ...
    eflags         0x206    [ PF IF ]
    	    

    Okay! So, right there, the code is about to do the multiplication. Both eax and ecx, which are 32-bit registers, have 65536 in them — you can see the 64-bit "big" registers that contain them in rax and rcx.

    Type "stepi" and the multiplication gets executed:

    (gdb) stepi
    0x00007ffff7baf058      1276        if ((gint) listlen != filter->orderx * filter->ordery)
       0x00007ffff7baf03c <rsvg_filter_primitive_convolve_matrix_set_atts+1156>:    8b 45 cc        mov    -0x34(%rbp),%eax
       0x00007ffff7baf03f <rsvg_filter_primitive_convolve_matrix_set_atts+1159>:    89 c2   mov    %eax,%edx
       0x00007ffff7baf041 <rsvg_filter_primitive_convolve_matrix_set_atts+1161>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf045 <rsvg_filter_primitive_convolve_matrix_set_atts+1165>:    8b 88 a8 00 00 00       mov    0xa8(%rax),%ecx
       0x00007ffff7baf04b <rsvg_filter_primitive_convolve_matrix_set_atts+1171>:    48 8b 45 d8     mov    -0x28(%rbp),%rax
       0x00007ffff7baf04f <rsvg_filter_primitive_convolve_matrix_set_atts+1175>:    8b 80 ac 00 00 00       mov    0xac(%rax),%eax
       0x00007ffff7baf055 <rsvg_filter_primitive_convolve_matrix_set_atts+1181>:    0f af c1        imul   %ecx,%eax
    => 0x00007ffff7baf058 <rsvg_filter_primitive_convolve_matrix_set_atts+1184>:    39 c2   cmp    %eax,%edx
       0x00007ffff7baf05a <rsvg_filter_primitive_convolve_matrix_set_atts+1186>:    74 22   je     0x7ffff7baf07e <rsvg_filter_primitive_convolve_matrix_set_atts+1222>
    (gdb) info registers
    rax            0x0      0
    rbx            0x69dc50 6937680
    rcx            0x10000  65536
    rdx            0x0      0
    eflags         0xa07    [ CF PF IF OF ]
    	    

    Kaboom. The register eax (inside rax) is now 0, which is the (wrong) result of the multiplication. But look at the flags! There is a big fat OF flag, the overflow flag! The processor knows! And it tries to tell us... with a single bit... that the C language doesn't bother to check!

    Handover

    (The solution in the code, at least for now, is simple enough — use gint64 for the actual operations so the values fit. It should probably set a reasonable limit for the size of convolution matrices, too.)
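
    For illustration, here is a minimal sketch of that gint64 approach, reusing the field names from the code above; the widening casts are the whole fix, since the product is then computed in 64 bits and cannot wrap:

        /* do the multiplication in 64 bits so it cannot overflow */
        if ((gint64) listlen != (gint64) filter->orderx * (gint64) filter->ordery)
            filter->orderx = filter->ordery = 0;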

    So, could anything do better?

    Scheme uses exact arithmetic if possible, so (* MAXLONG MAXLONG) doesn't overflow, but gives you a bignum without you doing anything special. Subsequent code may go into the slow case for bignums when it happens to use that value, but at least you won't get garbage.

    I think Python does the same, at least for integer values (Scheme goes further and uses exact arithmetic for all rational numbers, not just integers).

    C# lets you use checked operations, which will throw an exception if something overflows. This is not the default — the default is "everything gets clipped to the operand size", like in C. I'm not sure if this is a mistake or not. The rest of the language has very nice safety properties, and it lets you "go fast" if you know what you are doing. Operations that overflow by default, with opt-in safety, seem contrary to this philosophy. On the other hand, the language will protect you if you try to do something stupid like accessing an array element with a negative index (... that you got from an overflowed operation), so maybe it's not that bad in the end.

Reviewing moved files with git

This might be a well-known trick already, but just in case it’s not…

Reviewing a patch can be a bit painful when a file has been changed and moved or renamed in one go (and there can be perfectly valid reasons for doing this). A nice thing about git is that you can reference files in an arbitrary tree while using git diff, so reviewing such changes becomes easier if you do something like this:

$ git am 0001-the-thing-I-need-to-review.patch
$ git diff HEAD^:old/path/to/file.c new/path/to/file.c

This just references file.c in its old path, which is available in the commit before HEAD, and compares it to the file at the new path in the patch you just merged.

Of course, you can also use this to diff a file at some arbitrary point in the past, or in some arbitrary branch, with the same file at the current HEAD or any other point.
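
For instance, to compare a file on some branch against the version at the current HEAD (the branch and path names here are just placeholders):

$ git diff some-branch:path/to/file.c HEAD:path/to/file.c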

Hopefully this is helpful to someone out there!

Update: As Alex Elsayed points out in the comments, git diff -M/-C can be used to similar effect. The example above, for instance, could be written as:

$ git am 0001-the-thing-I-need-to-review.patch
$ git show -C

Builder Update

Hi Everyone!

For those in the Los Angeles region, I'll be speaking at Scale13x on Saturday about Builder. Come see some demos.

I've been hard at work since getting back from the Cambridge Hackfest and FOSDEM. Thank you to everyone who went out of their way to find me at FOSDEM and send me warm wishes. That means a lot to me.

I just published an update on the mailing list about where we are with LibIDE. Things are coming together quickly, and I'm excited about the quality of the library.

For those of you that are writing libraries in GObject/C, you must check out the combination of g_autoptr(), G_DECLARE_DERIVABLE_TYPE() and G_DECLARE_FINAL_TYPE(). These are major improvements going into GLib/GObject 2.44. LibIDE is built using these features and it has really lowered the number of lines of code. Even better, g_autoptr() has almost completely removed my need for goto failure; scenarios. It still happens when interacting with older code, but for new libraries, this is pure magic.
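
As a quick illustration, here is a minimal sketch of the kind of cleanup g_autoptr() buys you (the function and its path handling are made up for the example; it assumes GLib/GIO 2.44):

#include <gio/gio.h>

static gboolean
load_file (const gchar *path)
{
  g_autoptr(GFile) file = g_file_new_for_path (path);
  g_autoptr(GError) error = NULL;
  g_autofree gchar *contents = NULL;

  if (!g_file_load_contents (file, NULL, &contents, NULL, NULL, &error))
    {
      g_warning ("%s", error->message);
      return FALSE; /* file and error are released automatically */
    }

  return TRUE; /* contents is freed too: no goto failure; required */
}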

I'm going to keep sprinting to add features and merge the wip/libide branch of Builder up until 3.16. Since we are such an early project, I think getting code into the release is more important than adhering to code freeze. My sincere apologies to translators.

One neat side effect of building LibIDE is that it has made it much easier to unit test things. Most of the development happens by creating little programs that exercise a particular feature. This means that you'll be able to take advantage of Builder functionality in scripts and on the command line.

IDE Scripting landed this week in the wip/libide branch. That means you'll be able to create things like search engines, device providers, and build system extensions all from the scripting engine. Once document management is pushed into LibIDE, you'll be able to hook the file saving and loading process.

Anyway, I have so much to write about... and never enough time.

February 17, 2015

First fully sandboxed Linux desktop app

It's not a secret that I’ve been working on sandboxed desktop applications recently. In fact, I recently gave a talk at devconf.cz about it. However, up until now I’ve mainly been focusing on the bundling and deployment aspects of the problem. I’ve been running applications in their own environment, but with pretty open access to the system.

Now that the basics are working it’s time to start looking at how to create a real sandbox. This is going to require a lot of changes to the Linux stack. For instance, we have to use Wayland instead of X11, because X11 is impossible to secure. We also need to use kdbus to allow desktop integration that is properly filtered at the kernel level.

Recently Wayland has made some pretty big strides though, and we now have working Wayland sessions in Fedora 21. This means we can start testing real sandboxing for simple applications. To get something running I chose to focus on a game, because they require very little interaction with the system. Here is a video I made of Neverball, running in a minimal sandbox:


In this example we’re running a regular build of neverball in an environment which:

  • Is independent of the host distribution
  • Has no access to any system or user files other than the ones from the runtime and application itself
  • Has no access to any hardware devices, other than DRI (for GL rendering)
  • Has no network access
  • Can’t see any other processes in the system
  • Can only get input via Wayland
  • Can only show graphics via Wayland
  • Can only output audio via PulseAudio
  • … plus more sandboxing details

Yet the application is still simple to install and integrates nicely with the desktop. If you want to test it yourself, just follow the instructions on the project page and install org.neverball.Neverball.

Of course, there is still a lot to do here. For instance, PulseAudio doesn’t protect clients from each other, and for more complex applications we need to add new APIs to safely grant access to things like user files and devices. The sandbox details page has a more detailed list of what has to be done.

The road is long, but at least we have now started our journey!

Using OpenGL with GTK+

let’s say you are on a bleeding edge distribution, or have access to bleeding edge GTK+.

let’s say you want to use OpenGL inside your GTK+ application.

let’s say you don’t know where to start, except for the API reference of the GtkGLArea widget, and all you can find are examples and tutorials on how to use OpenGL with GLUT, or SDL, or worse some random toolkit that nobody heard about.

this clearly won’t do, so I decided to write down how to use OpenGL with GTK+ using the newly added API available with GTK+ 3.16.

disclaimer: I am not going to write an OpenGL tutorial; there are other resources for it, especially for modern API, like Anton’s OpenGL 4 tutorials.

I’ll spend just a minimal amount of time explaining the details of some of the API when it intersects with GTK+’s own. this blog will also form the template for the documentation in the API reference.

Things to know before we start

the OpenGL support inside GTK+ requires core GL profiles, and thus it won’t work with the fixed pipeline API that was common before OpenGL 3.2 introduced core profiles. this means that you won’t be able to use API like glRotatef(), or glBegin()/glEnd() pairs, or any of that stuff.

the dependency on non-legacy profiles has various advantages, mostly in performance and requirements from a toolkit perspective; it does add a minimum requirement on your users’ hardware and drivers — though modern OpenGL is supported by Mesa, Mac OS, and Windows. before you ask: no, we won’t add legacy profiles support to GTK. it’s basically impossible to support both core and legacy profiles at the same time: you have to choose either one, which also means duplicating all the code paths. yes, some old hardware does not support OpenGL 3 and later, but there are software fallbacks in place. to be fair, everyone using GL will tell you to stop using legacy profiles, and get on with the times.

Using the GtkGLArea widget

let’s start with an example application that embeds a GtkGLArea widget and uses it to render a simple triangle. I’ll be using the example code that comes with the GTK+ API reference as a template, so you can also look at that section of the documentation. the code is also available in my GitHub repository.

we start by building the main application’s UI from a GtkBuilder template file and its corresponding GObject class, called GlareaAppWindow:

glarea-app-window.h
#ifndef __GLAREA_APP_WINDOW_H__
#define __GLAREA_APP_WINDOW_H__

#include <gtk/gtk.h>
#include "glarea-app.h"

G_BEGIN_DECLS

#define GLAREA_TYPE_APP_WINDOW (glarea_app_window_get_type ())

G_DECLARE_FINAL_TYPE (GlareaAppWindow, glarea_app_window, GLAREA, APP_WINDOW, GtkApplicationWindow)

GtkWidget *glarea_app_window_new (GlareaApp *app);

G_END_DECLS

#endif /* __GLAREA_APP_WINDOW_H__ */

this class is used by the GlareaApp class, which in turn holds the application state:

glarea-app.h
#ifndef __GLAREA_APP_H__
#define __GLAREA_APP_H__

#include <gtk/gtk.h>

G_BEGIN_DECLS

#define GLAREA_ERROR (glarea_error_quark ())

typedef enum {
  GLAREA_ERROR_SHADER_COMPILATION,
  GLAREA_ERROR_SHADER_LINK
} GlareaError;

GQuark glarea_error_quark (void);

#define GLAREA_TYPE_APP (glarea_app_get_type ())

G_DECLARE_FINAL_TYPE (GlareaApp, glarea_app, GLAREA, APP, GtkApplication)

GtkApplication *glarea_app_new (void);

G_END_DECLS

#endif /* __GLAREA_APP_H__ */

the GtkBuilder template file contains a bunch of widgets, but the most interesting section (at least for us) is this one:

glarea-app-window.ui [Lines 33-43]
        <child>
          <object class="GtkGLArea" id="gl_drawing_area">
            <signal name="realize" handler="gl_init" object="GlareaAppWindow" swapped="yes"/>
            <signal name="unrealize" handler="gl_fini" object="GlareaAppWindow" swapped="yes"/>
            <signal name="render" handler="gl_draw" object="GlareaAppWindow" swapped="yes"/>
            <property name="visible">True</property>
            <property name="can_focus">False</property>
            <property name="hexpand">True</property>
            <property name="vexpand">True</property>
          </object>
        </child>

which contains the definition for adding a GtkGLArea and connecting to its signals.

once we connect all elements of the template to the GlareaAppWindow class:

glarea-app-window.c [Lines 354-370]
static void
glarea_app_window_class_init (GlareaAppWindowClass *klass)
{
  GtkWidgetClass *widget_class = GTK_WIDGET_CLASS (klass);

  gtk_widget_class_set_template_from_resource (widget_class, "/io/bassi/glarea/glarea-app-window.ui");

  gtk_widget_class_bind_template_child (widget_class, GlareaAppWindow, gl_drawing_area);
  gtk_widget_class_bind_template_child (widget_class, GlareaAppWindow, x_adjustment);
  gtk_widget_class_bind_template_child (widget_class, GlareaAppWindow, y_adjustment);
  gtk_widget_class_bind_template_child (widget_class, GlareaAppWindow, z_adjustment);

  gtk_widget_class_bind_template_callback (widget_class, adjustment_changed);
  gtk_widget_class_bind_template_callback (widget_class, gl_init);
  gtk_widget_class_bind_template_callback (widget_class, gl_draw);
  gtk_widget_class_bind_template_callback (widget_class, gl_fini);
}

and we compile and run the whole thing, we should get something like this:

the empty GL drawing area

if there were any errors while initializing the GL context, you would see them inside the GtkGLArea widget itself; you can control this behaviour, just like you can control the creation of the GdkGLContext yourself.
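
for instance, a minimal sketch of a custom GtkGLArea::create-context handler might look like this; treat it as an outline rather than the canonical way to do it:

static GdkGLContext *
gl_create_context (GtkGLArea *area)
{
  GError *error = NULL;
  GdkGLContext *context;

  /* create a GdkGLContext tied to the widget's native window */
  context = gdk_window_create_gl_context (gtk_widget_get_window (GTK_WIDGET (area)), &error);
  if (error != NULL)
    {
      /* hand the error to the GtkGLArea, which will display it */
      gtk_gl_area_set_error (area, error);
      g_clear_error (&error);
      return NULL;
    }

  return context;
}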

as you saw in the code above, we use the GtkWidget signals to set up, draw, and tear down the OpenGL state. in the old days of the fixed pipeline API, we could have simply connected to the GtkGLArea::render signal, called some OpenGL API, and have something appear on the screen. those days are long gone. OpenGL requires more code to get going with the programmable pipeline. while this means that you have access to a much leaner (and more powerful) API, some of the convenience went out of the window.

in order to get things going, we need to start by setting up the OpenGL state; we use the GtkWidget::realize signal, as that allows our code to be called after the GtkGLArea widget has created a GdkGLContext, so that we can use it:

glarea-app-window.c [Lines 206-227]
static void
gl_init (GlareaAppWindow *self)
{
  /* we need to ensure that the GdkGLContext is set before calling GL API */
  gtk_gl_area_make_current (GTK_GL_AREA (self->gl_drawing_area));

  /* initialize the shaders and retrieve the program data */
  GError *error = NULL;
  if (!init_shaders (&self->program,
                     &self->mvp_location,
                     &self->position_index,
                     &self->color_index,
                     &error))
    {
      gtk_gl_area_set_error (GTK_GL_AREA (self->gl_drawing_area), error);
      g_error_free (error);
      return;
    }

  /* initialize the vertex buffers */
  init_buffers (self->position_index, self->color_index, &self->vao);
}

in the same way, we use GtkWidget::unrealize to free the resources we created inside the gl_init callback:

glarea-app-window.c [Lines 229-238]
static void
gl_fini (GlareaAppWindow *self)
{
  /* we need to ensure that the GdkGLContext is set before calling GL API */
  gtk_gl_area_make_current (GTK_GL_AREA (self->gl_drawing_area));

  /* destroy all the resources we created */
  glDeleteVertexArrays (1, &self->vao);
  glDeleteProgram (self->program);
}

at this point, the code to draw the contents of the GtkGLArea is:

glarea-app-window.c [Lines 263-279]
static gboolean
gl_draw (GlareaAppWindow *self)
{
  /* clear the viewport; the viewport is automatically resized when
   * the GtkGLArea gets an allocation
   */
  glClearColor (0.5, 0.5, 0.5, 1.0);
  glClear (GL_COLOR_BUFFER_BIT);

  /* draw our object */
  draw_triangle (self);

  /* flush the contents of the pipeline */
  glFlush ();

  return FALSE;
}

and voilà:

Houston, we have a triangle

obviously, it’s a bit more complicated than that.

let’s start with the code that initializes the resources inside gl_init(); what do init_buffers() and init_shaders() do? the former creates the vertex buffers on the graphics pipeline, and populates them with the per-vertex data that we want to use later on, namely: the position of each vertex, and its color:

glarea-app-window.c [Lines 34-44]
/* the vertex data is constant */
struct vertex_info {
  float position[3];
  float color[3];
};

static const struct vertex_info vertex_data[] = {
  { {  0.0f,  0.500f, 0.0f }, { 1.f, 0.f, 0.f } },
  { {  0.5f, -0.366f, 0.0f }, { 0.f, 1.f, 0.f } },
  { { -0.5f, -0.366f, 0.0f }, { 0.f, 0.f, 1.f } },
};

it does that by creating two buffers:

  • a Vertex Array Object, which holds all the subsequent vertex buffers
  • a Vertex Buffer Object, which holds the vertex data
glarea-app-window.c [Lines 53-72]
  /* we need to create a VAO to store the other buffers */
  glGenVertexArrays (1, &vao);
  glBindVertexArray (vao);

  /* this is the VBO that holds the vertex data */
  glGenBuffers (1, &buffer);
  glBindBuffer (GL_ARRAY_BUFFER, buffer);
  glBufferData (GL_ARRAY_BUFFER, sizeof (vertex_data), vertex_data, GL_STATIC_DRAW);

  /* enable and set the position attribute */
  glEnableVertexAttribArray (position_index);
  glVertexAttribPointer (position_index, 3, GL_FLOAT, GL_FALSE,
                         sizeof (struct vertex_info),
                         (GLvoid *) (G_STRUCT_OFFSET (struct vertex_info, position)));

  /* enable and set the color attribute */
  glEnableVertexAttribArray (color_index);
  glVertexAttribPointer (color_index, 3, GL_FLOAT, GL_FALSE,
                         sizeof (struct vertex_info),
                         (GLvoid *) (G_STRUCT_OFFSET (struct vertex_info, color)));

the init_shaders() function is a bit more complex, as it needs to

  • compile a vertex shader
  • compile a fragment shader
  • link both the vertex and the fragment shaders together into a program
  • extract the location of the attributes and uniforms
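
a minimal sketch of the compilation step looks like this; it’s plain OpenGL, with the error path trimmed down (the full code would report failures through a GLAREA_ERROR_SHADER_COMPILATION error containing the info log):

static GLuint
create_shader (GLenum shader_type, const char *source)
{
  GLuint shader = glCreateShader (shader_type);

  glShaderSource (shader, 1, &source, NULL);
  glCompileShader (shader);

  GLint status = 0;
  glGetShaderiv (shader, GL_COMPILE_STATUS, &status);
  if (status == GL_FALSE)
    {
      /* on failure, retrieve the log with glGetShaderInfoLog()
       * and report it through a GError on the widget
       */
      glDeleteShader (shader);
      return 0;
    }

  return shader;
}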

the vertex shader is executed once for each vertex, and establishes the location of each vertex:

glarea-vertex.glsl
#version 150

in vec3 position;
in vec3 color;

uniform mat4 mvp;

smooth out vec4 vertexColor;

void main() {
  gl_Position = mvp * vec4(position, 1.0);
  vertexColor = vec4(color, 1.0);
}

it also has access to the vertex data that we stored inside the vertex buffer object, which we pass to the fragment shader:

glarea-fragment.glsl
#version 150

smooth in vec4 vertexColor;

out vec4 outputColor;

void main() {
  outputColor = vertexColor;
}

the fragment shader is executed once for each fragment, or pixel-sized space between vertices.

once both the vertex buffers and the program are uploaded into the graphics pipeline, the GPU will render the result of the program operating over the vertex and fragment data — in our case, a triangle with colors interpolating between each vertex:

glarea-app-window.c [Lines 240-261]
static void
draw_triangle (GlareaAppWindow *self)
{
  if (self->program == 0 || self->vao == 0)
    return;

  /* load our program */
  glUseProgram (self->program);

  /* update the "mvp" matrix we use in the shader */
  glUniformMatrix4fv (self->mvp_location, 1, GL_FALSE, &(self->mvp[0]));

  /* use the buffers in the VAO */
  glBindVertexArray (self->vao);

  /* draw the three vertices as a triangle */
  glDrawArrays (GL_TRIANGLES, 0, 3);

  /* we finished using the buffers and program */
  glBindVertexArray (0);
  glUseProgram (0);
}

now that we have a static triangle, we should connect the UI controls and rotate it around each axis. in order to do that, we compute the transformation matrix using the values from the three GtkScale widgets as the rotation angles around each axis. first of all, we connect to the GtkAdjustment::value-changed signal, update the rotation angles, and use them to generate the rotation matrix:

glarea-app-window.c [Lines 328-352]
static void
adjustment_changed (GlareaAppWindow *self,
                    GtkAdjustment   *adj)
{
  double value = gtk_adjustment_get_value (adj);

  /* update the rotation angles */
  if (adj == self->x_adjustment)
    self->rotation_angles[X_AXIS] = value;

  if (adj == self->y_adjustment)
    self->rotation_angles[Y_AXIS] = value;

  if (adj == self->z_adjustment)
    self->rotation_angles[Z_AXIS] = value;

  /* recompute the mvp matrix */
  compute_mvp (self->mvp,
               self->rotation_angles[X_AXIS],
               self->rotation_angles[Y_AXIS],
               self->rotation_angles[Z_AXIS]);

  /* queue a redraw on the GtkGLArea */
  gtk_widget_queue_draw (self->gl_drawing_area);
}

then we queue a redraw on the GtkGLArea widget, and that’s it; the draw_triangle() code will take the matrix, place it inside the vertex shader, and we’ll use it to transform the location of each vertex.

add a few more triangles and you get Quake

there is obviously a lot more that you can do, but this should cover the basics.

Porting from older libraries

back in the GTK+ 2.x days, there were two external libraries used to render OpenGL pipelines into GTK+ widgets:

  • GtkGLExt
  • GtkGLArea

the GDK drawing model was simpler, in those days, so these libraries just took a native windowing system surface, bound it to a GL context, and expected everything to work. it goes without saying that this is no longer the case with GTK+ 3 and modern graphics architectures.

both libraries also had the unfortunate idea of abusing the GDK and GTK namespaces, which means that, if ported to integrate with GTK+ 3, they would collide with GTK’s own symbols. in practice, these two libraries are forever tied to GTK+ 2.x, and as they are already unmaintained, you should not be using them to write new code.

Porting from GtkGLExt

GtkGLExt is the library GTK+ 2.x applications used to integrate OpenGL rendering inside GTK+ widgets. it is currently unmaintained, and there is no GTK+ 3.x port. not only is GtkGLExt targeting outdated API inside GTK+, it’s also fairly tied to the old OpenGL 2.1 API and rendering model. this means that if you are using it, you’re using one legacy API on top of another legacy API.

if you were using GtkGLExt you will likely be able to remove most of the code dealing with initialization and the creation of the GL context. you won’t be able to use any random widget with a GdkGLContext; you’ll be limited to using GtkGLArea. while there isn’t anything special about the GtkGLArea widget, it will handle context creation for you, as well as creating the offscreen framebuffer and the various ancillary buffers that you can render on.

Porting from GtkGLArea

GtkGLArea is another GTK+ 2.x library that was used to integrate OpenGL rendering with GTK+. it has seen a GTK+ 3.x port, as well as a namespace change that avoids the collision with the GTK namespace, but the internal implementation is also pretty much tied to OpenGL 2.1 API and rendering model.

unlike GtkGLExt, GtkGLArea only provides you with the API to create a GL context and a widget to render into. it is not tied to the GDK drawing model, so you’re essentially bypassing GDK’s internals, which means that changes inside GTK+ may break your existing code.

Resources

February 16, 2015

Intel Boot Guard, Coreboot and user freedom

PC World wrote an article on how the use of Intel Boot Guard by PC manufacturers is making it impossible for end-users to install replacement firmware such as Coreboot on their hardware. It's easy to interpret this as Intel acting to restrict competition in the firmware market, but the reality is actually a little more subtle than that.

UEFI Secure Boot as a specification is still unbroken, which makes attacking the underlying firmware much more attractive. We've seen several presentations at security conferences lately that have demonstrated vulnerabilities that permit modification of the firmware itself. Once you can insert arbitrary code in the firmware, Secure Boot doesn't do a great deal to protect you - the firmware could be modified to boot unsigned code, or even to modify your signed bootloader such that it backdoors the kernel on the fly.

But that's not all. Someone with physical access to your system could reflash your system. Even if you're paranoid enough that you X-ray your machine after every border crossing and verify that no additional components have been inserted, modified firmware could still be grabbing your disk encryption passphrase and stashing it somewhere for later examination.

Intel Boot Guard is intended to protect against this scenario. When your CPU starts up, it reads some code out of flash and executes it. With Intel Boot Guard, the CPU verifies a signature on that code before executing it[1]. The hash of the public half of the signing key is flashed into fuses on the CPU. It is the system vendor that owns this key and chooses to flash it into the CPU, not Intel.

This has genuine security benefits. It's no longer possible for an attacker to simply modify or replace the firmware - they have to find some other way to trick it into executing arbitrary code, and over time these will be closed off. But in the process, the system vendor has prevented the user from being able to make an informed choice to replace their system firmware.

The usual argument here is that in an increasingly hostile environment, opt-in security isn't sufficient - it's the role of the vendor to ensure that users are as protected as possible by default, and in this case all that's sacrificed is the ability for a few hobbyists to replace their system firmware. But this is a false dichotomy - UEFI Secure Boot demonstrated that it was entirely possible to produce a security solution that provided security benefits and still gave the user ultimate control over the code that their machine would execute.

To an extent the market will provide solutions to this. Vendors such as Purism will sell modern hardware without enabling Boot Guard. However, many people will buy hardware without consideration of this feature and only later become aware of what they've given up. It should never be necessary for someone to spend more money to purchase new hardware in order to obtain the freedom to run their choice of software. A future where users are obliged to run proprietary code because they can't afford another laptop is a dystopian one.

Intel should be congratulated for taking steps to make it more difficult for attackers to compromise system firmware, but criticised for doing so in such a way that vendors are forced to choose between security and freedom. The ability to control the software that your system runs is fundamental to Free Software, and we must reject solutions that provide security at the expense of that ability. As an industry we should endeavour to identify solutions that provide both freedom and security and work with vendors to make those solutions available, and as a movement we should be doing a better job of articulating why this freedom is a fundamental part of users being able to place trust in their property.

[1] It's slightly more complicated than that in reality, but the specifics really aren't that interesting.


NetworkManager for Administrators Part 1

(Photo via scobleizer, CC BY 2.0)

NetworkManager is a system service that manages network interfaces and connections based on user or automatic configuration. It supports Ethernet, Bridge, Bond, VLAN, team, InfiniBand, Wi-Fi, mobile broadband (WWAN), PPPoE and other devices, and supports a variety of different VPN services.  You can manage it a couple different ways, from config files to a rich command-line client, a curses-like client for non-GUI systems, graphical clients for the major desktop environments, and even web-based management consoles like Cockpit.

There’s an old perception that NetworkManager is only useful on laptops for controlling Wi-Fi, but nothing could be further from the truth.  No laptop I know of has InfiniBand ports.  We recently released NetworkManager 1.0 with a whole load of improvements for workstations, servers, containers, and tiny systems from embedded to RaspberryPi.  In the spirit of making double-plus sure that everyone knows how capable and useful NetworkManager is, let’s take a magical journey into Administrator-land and start at the very bottom…

Daemon Configuration Files

Basic configuration is stored in /etc/NetworkManager/NetworkManager.conf in a standard key/value ini-style format.  The sections and values are well-described by ‘man NetworkManager.conf’.  A standard default configuration looks like this:

[main]
plugins=ifcfg-rh

You can override default configuration through either command-line switches or by dropping “configuration snippets” into /etc/NetworkManager/conf.d.  These snippets use the same configuration options from ‘man NetworkManager.conf’ but are much easier to distribute among larger numbers of machines through packages or tools like Puppet, or even just to install features through your favorite package manager.  For example, in Fedora, there is a NetworkManager-config-connectivity-fedora RPM package that installs a snippet that enables connectivity checking to Fedora Project servers.  If you don’t care about connectivity checking, you simply ‘rpm -e NetworkManager-config-connectivity-fedora’ instead of tracking down and deleting /etc/NetworkManager/conf.d/20-connectivity-fedora.conf.

Just for kicks, let’s take a walk through the various configuration options, what they do, and why you might care about them in a server, datacenter, or minimal environment…

Configuration Snippets

First, each configuration “snippet” in /etc/NetworkManager/conf.d can override values set in earlier snippets, or even the default configuration (but not command-line options).  So the same option specified in 50-foobar.conf will override that option specified in 10-barfoo.conf.  Many options also support the “+” modifier, which allows their value to be added to earlier ones instead of replacing.  So “plugins+=something-else” will add “something-else” to the list, instead of overwriting any earlier values.  You’ll see why this is quite useful in a minute…
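
For example, a drop-in like this (the file name is only an illustration) appends the ibft plugin to whatever earlier configuration specified, instead of replacing the whole list:

# /etc/NetworkManager/conf.d/30-example-plugins.conf
[main]
plugins+=ibft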

Dive Deep

[main]
plugins=ifcfg-rh | ifupdown | ifnet | ifcfg-suse | ibft (default empty)

This option enables or disables certain settings plugins, which are small loadable libraries that read and write distribution-specific network configuration.  For example, Fedora/RHEL would specify ‘plugins=ifcfg-rh’ for reading and writing the ifcfg file format, while Debian/Ubuntu would use ‘plugins=ifupdown’ for reading /etc/network/interfaces, and Gentoo would use ‘plugins=ifnet’.  If you know your distro’s config format like the back of your hand, NetworkManager doesn’t make you change it.

There is one default plugin though, ‘keyfile’, which NetworkManager uses to read and write configurations that the distro-specific plugins can’t handle.  These files go into /etc/NetworkManager/system-connections and are standard .ini-style key/value files.  If you’re interested in the key and value definitions, you can check out ‘man nm-settings’ and ‘man nm-settings-keyfile’, or even look at some examples.

[main]
monitor-connection-files=yes | no (default no)

By popular demand, NetworkManager no longer watches configuration files for changes.  Instead, you make all the changes you want, and then explicitly tell NetworkManager when you’re done with “nmcli con reload” or “nmcli con load <filename>”.  This prevents reading partial configuration and allows you to double-check that everything is correct before making the configuration update.  Note that changes made through the D-Bus interface (instead of the filesystem) always happen immediately.

However, if you want the old behavior back, you can set this option to “yes”.

[main]
auth-polkit=yes | no (default yes)

If built with support for it, NetworkManager uses PolicyKit for fine-grained authorization of network actions.  This will be the subject of another article in this series, but the TLDR is that PolicyKit easily allows user A the permission to use WiFi while denying user B WiFi but allowing WWAN.  These things can be done with Unix groups, but that quickly gets unwieldy and isn’t fine-grained enough for some organizations.  In any case, PolicyKit is often unnecessary on small, single-user systems or in datacenters with controlled access.  So even if your distribution builds NetworkManager with PolicyKit enabled, you can turn it off for simpler root-only operation.

[main]
dhcp=dhclient | dhcpcd | internal (default determined at build time, dhclient preferred if enabled)

With NetworkManager 1.0 we’ve added a new internal DHCP client (based off systemd code which was based off ConnMan code) which is smaller, faster, and lighter than dhclient or dhcpcd.  It doesn’t do DHCPv6 yet, but we’re working on that.  We think you’ll like it, and it’s certainly much less of a resource hog than a dhclient process for every interface. To use it, set this option to “internal” and restart NetworkManager.
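
As a configuration snippet, that’s simply:

[main]
dhcp=internal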

If NetworkManager was built with support for dhclient or dhcpcd, you can use either of these clients by setting this option to the client’s name.  Note that if you enable both dhclient and dhcpcd, dhclient will be preferred for maximum compatibility.

[main]
no-auto-default= (default empty)

By default, NetworkManager will create an in-memory DHCP connection for every Ethernet interface on your system, which ensures that you have connectivity when bringing a new system up or booting a live DVD.  But that’s not ideal on large systems with many NICs, or on systems where you’d like to control initial network bring-up yourself.  In that case, you should set this option to “*” to disable the auto-Ethernet behavior for all interfaces, indicating that you’d like to create explicit configuration instead.  You can also use MAC addresses or interface names here too!  On Fedora we’ve created a package called NetworkManager-config-server that sets this option to “*” by default.
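
So on a large system you might drop in:

[main]
no-auto-default=*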

[main]
ignore-carrier= (default empty)

Trip over a cable?  Want to make sure a critical interface stays configured if the switch port goes down?  This option is for you!  Setting it to “*” (all interfaces) or using MAC addresses or interface names here will tell NetworkManager to ignore carrier events after the interface is configured.  For DHCP connections a carrier is obviously required for initial configuration, while static connections can start regardless of carrier status.  After that, feel free to unplug the cable every time Apple sells an iPhone!
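
For example, to keep one critical interface configured regardless of carrier (eth0 is just an illustrative name):

[main]
ignore-carrier=interface-name:eth0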

[main]
configure-and-quit=yes | no (default no)

New with 1.0 is the “configure and quit” mode where NetworkManager configures interfaces (including, if desired, blocking startup until networking is active) and then quits, spawning small helpers to maintain DHCP leases and IPv6 address lifetimes if required.  In a datacenter or cloud where cycles are money, this can save you some cash and deliver a more stable setup with known behavior.

[main]
dns=dnsmasq | unbound | none | default (default empty, equivalent to “default”)

Want to control DNS yourself?  NetworkManager makes it easy!  Don’t want to?  NetworkManager makes that easy too! When you set this option to ‘dnsmasq’ NetworkManager will configure dnsmasq as a local caching nameserver, including split DNS for VPN tunnels.  If you set it to ‘none’ then NetworkManager won’t touch /etc/resolv.conf and you can use dispatcher scripts that NetworkManager calls at various points to set up DNS any way you choose.

Leaving the option empty or setting it to “default” asks NetworkManager to own resolv.conf, updating system DNS with any information from your explicit network settings or those received from automatic means like DHCP.
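
For example, to enable the local caching nameserver setup described above:

[main]
dns=dnsmasq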

In the upcoming NetworkManager 1.2, DNS information is written to /var/lib/NetworkManager/resolv.conf and, if NM is allowed to manage /etc/resolv.conf, that file will be a symlink to the one in /var, similar to systemd-resolved.  This makes it easier for external tools to incorporate the DNS information that NetworkManager combines from multiple sources like DHCP, PPP, IPv6, VPNs, and more.

[keyfile]
unmanaged-devices= (default empty)

Want to keep NetworkManager’s hands off a specific device?  That’s what this option is for, where you can use “interface-name:eth0” or “mac:00:22:68:1c:59:b1” to prevent automatic management of a device.  While there are some situations that require this, by default NetworkManager doesn’t touch virtual interfaces that it didn’t create, like bridges, bonds, VLANs, teams, macvlan, tun, tap, etc.  So while it’s unusual to need this option, we realize that NetworkManager can be used in concert with other tools, so it’s here if you do.

[connectivity]
uri=  (default empty = disabled)
interval=(default 0 = disabled)
response=  (default “NetworkManager is online”)

Connectivity checking helps users log into captive portals and hotspots, while also providing information about whether or not the Internet is reachable.  When NetworkManager connects a network interface, it sends an HTTP request to the given URI and waits for the specified response.  If you’re connected to the Internet and the connectivity server isn’t down, the response should match and NetworkManager will change state from CONNECTED_SITE to CONNECTED.  It will also check connectivity every ‘interval’ seconds so that clients can report status to the user.

If you’re instead connected to a WiFi hotspot or some kind of captive portal like a hotel network, your DNS will be hijacked and the request will be redirected to an authentication server.  The response will be unexpected and NetworkManager will know that you’re behind a captive portal.  Clients like GNOME Shell will then indicate that you must authenticate before you can access the real Internet, and could provide an embedded web browser for this purpose.

Upstream connectivity checking is disabled by default, but some distribution variants (like Fedora Workstation) are now enabling it for desktops, laptops, and workstations.  On a server or embedded system, or where traffic costs a lot of money, you probably don’t want this feature enabled.  To turn it off you can either remove your distro-provided connectivity package (which just drops a file in /etc/NetworkManager/conf.d) or you can remove the options from NetworkManager.conf.
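
Putting the three options together, a connectivity snippet might look like this (the URI is a placeholder; point it at your distribution’s check server or your own):

[connectivity]
uri=http://check.example.com/online.txt
interval=300
response=NetworkManager is online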

Special NetworkManager data files

In the normal course of network management sometimes non-configuration data needs to persist.  NetworkManager does this in the /var/lib/NetworkManager directory, which contains a few different files of interest:

seen-bssids

This file contains the BSSIDs (MAC addresses) of WiFi access points that NetworkManager has connected to for each configured WiFi network.  NetworkManager doesn’t do this to spy on you (and the file is readable only by root), but instead to automatically connect to WiFi networks that do not broadcast their SSID.  You almost never need to touch this file, but if you are concerned about privacy feel free to delete this file periodically.

timestamps

Each time you connect to a network, whether wired, WiFi, etc, NetworkManager updates the timestamp in this file.  This allows NetworkManager to determine which network you last used, which can be used to automatically connect you to more preferred networks.  NetworkManager also uses the timestamp as an indicator that you have successfully connected to the network before, which it uses when deciding whether or not to ask for your WiFi password when you get randomly disconnected or the driver fails.

NetworkManager.state

This file stores persistent user-determined state for Airplane mode for each technology like WiFi, WWAN, and WiMAX.  Normally this is controlled by hardware buttons, but some systems don’t have hardware buttons or the drivers don’t work, plus that state is not persistent across boots.  So NetworkManager stores a user-defined state for each radio type and will ensure the radio stays in that state across reboots too.

DHCP lease and configuration files

When you obtain a DHCP lease, that lease may last longer than your connection to that network.  To ensure that you receive a nominally stable IP address the next time you connect, or to ensure that your TCP sessions are not broken if there is a network hiccup, NetworkManager stores the DHCP lease and attempts to acquire the same lease again.  These files are stored per-connection to ensure that a lease acquired on your home WiFi or ethernet network is not used for work or Starbucks.  Temporary DHCP configuration files are also stored here, which are constructed based on your preferences and on generic DHCP configuration files in /etc for each supported DHCP client.  If you want to wipe the DHCP slate clean, feel free to remove any of the lease or configuration files.

And that’s it for this time, stay tuned for the next part in this series!

D-Bus API design guidelines

D-Bus has just gained some upstream guidelines on how to write APIs which best use its features and follow its conventions. Hopefully they draw together all the best practices from the various articles which have been published about writing D-Bus APIs in the past. If anybody has suggestions, criticism or feedback about them, or ideas for extra guidelines to include, please get in touch!

February 15, 2015

I shot the Tracker

In free software some fashions never change, and some are particularly hard to overcome. Today I’ll talk about the “Tracker makes $ANYTHING slow” adage, with gnome-music lately in the spotlight. I’m glad that I could personally clear this up with some individuals at the hackfests/conferences I’ve been around lately.

But convincing people is a never-ending labor, there’s still confused people around the internets, and disdainful looks don’t work as well over there. The next best thing I could do is fix things myself to make Tracker look less like the bad guy. So, from the “can’t someone else do it” department, here are some commits to improve the situation. The astute reader might notice that there is nothing about Tracker in these changes.

There’s of course more to it, AFAICT other minor performance hits are caused by:

  • grilo emitting one signal per media item found, which is somewhat bad on huge lists
  • icon view performance generally sucking, which makes scrolling not as smooth in the Albums view while covers are loading
  • After all that, well sure, Tracker queries can be marginally optimized.

This will eventually hit master and packages. Until then, do me a favor and point anyone still saying that Tracker made gnome-music slow to this post.

Developer experience hackfest

Kind of on topic with this, a few weeks ago I attended the Developer Experience hackfest. Besides trying to peg round pieces into square holes, and after some talk about how steep a barrier SPARQL is as a prerequisite for accessing Tracker data, I started work there on a simpler query API that abstracts away these gritty details. The code is just shaping up, but I expect it to cover the most common use cases. I must thank Red Hat and Collabora for enabling me to go there, all the people there, and particularly Philip for being such a great host.

Oh, and I also attended Fosdem and Devconf, and even gave a talk at the latter about the input plans going on in GNOME. Busy days!

Submit proposals for GSoC 2015!

Guadec2013: Team of GSoC and GNOME Woman
Photo by Ana Rey, CC-by-SA 2.0

The time has come for round 11 of GSoC! If you are interested in mentoring an intern, then now is the time to act: add your project ideas to the untriaged ideas list on the GNOME wiki by 2015-02-19. If you’re unsure about mentoring, it’s worth having a browse through the information for mentors or getting in touch with the GNOME GSoC admins in #soc on irc.gnome.org.

If you are a student who wants to take on one of the GNOME projects, now is the time to get in touch with the project that you are interested in and help come up with an idea that you want to work on.

More information can be found at https://mail.gnome.org/archives/desktop-devel-list/2015-February/msg00109.html

GNOME.Asia Summit 2015 call for papers


http://2015.gnome.asia/cfp

PRESENTING AT THE GNOME.ASIA SUMMIT

GNOME.Asia Summit 2015 invites proposals for presentations at the conference. GNOME.Asia Summit is Asia's GNOME user and developer conference, spreading the knowledge of GNOME across Asia. The conference will be held at Universitas Indonesia, Depok, West Java, Indonesia on May 8-9, 2015. The conference follows the release of GNOME 3.14, helping to bring new desktop paradigms that facilitate user interaction in the computing world.  It will be a great place to celebrate and explore the many new features and enhancements of the GNOME 3.14 release and to help make GNOME as successful as possible. We welcome proposals from newcomers and experienced speakers alike.
Possible topics include, but are not limited to:
How to Promote/Contribute to GNOME in Asia
  • GNOME Marketing
  • Promotion of Free / Open Source Software
  • How to run a Local GNOME Users Group
  • Asia success stories / Local GNOME Projects
  • GNOME and Education
  • GNOME Outreach Program for Women
  • Google Summer of Code
Hacking GNOME
  • Latest developments in GNOME
  • GNOME 3 & GNOME 3 Usability
  • GNOME Human Interface Engineering (Icons and Graphic Design)
  • QA and testing in GNOME
  • GNOME Accessibility
  • GNOME Coding How-to
  • Writing applications for GNOME 3
  • Integration of web life into the desktop
Adapting GNOME to new types of devices
  • Developing GNOME on mobile devices (smart phones, tablets)
  • Developing GNOME on embedded systems or open source hardware
  • On-going projects and success stories
  • Finding Free and Open Source friendly hardware manufacturers
Localization and Internationalization
  • Translations
  • Input methods
  • Fonts
Other topics could include any topic related to Free and Open Source Software not listed above:
  • Small Board,
  • Open Hardware,
  • Open Data,
  • Big Data,
  • Cloud Computing,
  • Mobile Technology
Lightning talks! A five minute presentation to demonstrate your work or promote an interesting topic. These talks will be grouped together in a single session.
A standard session at GNOME.Asia 2015 will be scheduled as 45 mins (35 mins talk + 10 mins Q&A).  Please take into consideration any time you will need for preparation. The session could be a technical talk, panel discussion, or BOF.
If you’d like to share your knowledge and experience at GNOME.Asia 2015, please fill in the form at http://2015.gnome.asia/cfp before March 15th, 2015.  Please provide a short abstract about your proposal (under 150 words). Include your name, biographical information, a photo suitable for the web, a title, and a description of your presentation. The reviewing team will evaluate the entries based on the submitted abstracts and available time in the schedule. You will be contacted before March 17th, 2015 on whether your submission has been accepted or not.
All interested contributors are highly encouraged to send in their talks.  Please help us spread the invitation to other potential participants. Even if you do not plan to be a speaker, please consider joining GNOME.Asia 2015. This is going to be a great event!

February 14, 2015

Mimi Geier, a great math teacher

The world lost a great math teacher this week. Mimi Geier not only loved math, she loved teaching math and delighted in watching kids discover solutions. If I had a picture to share here, it would be of Ms. Geier with a grin on her face, holding out a piece of chalk so that a student could teach.

My first day at BFIS, Ms. Geier asked me if I was in first or seventh period math. I wanted to ask which one was the advanced math class, but I didn’t. Instead I said I didn’t know. She told me to come to both and we’d figure it out.

I got worried during the first math class. I could solve any quadratic equation in the world with the quadratic formula but Ms. Geier didn’t think too much of that method. She wanted us to factor, to pull the problem apart and understand the pieces that solved it.

Walking up the stairs after lunch, a girl who later became my friend told me, “You don’t want to be in the seventh period math class.” So it was with trepidation that I entered seventh period. Is this where they sent the kids that had never learned to factor? To my surprise I found a much different class. It was a small classroom of relaxed students and a very different Ms. Geier. This was not the homeroom teacher Ms. Geier. This was not the Ms. Geier who could take forever to make a simple point. This was not the Ms. Geier who was always misplacing that paper that she’d just had. This Ms. Geier grinned a lot. She loved it when we came up with a hard problem. She delighted in solving problems with us. She was thrilled when we figured it out. Ecstatic when we could teach each other. This was Ms. Geier the math teacher. I got to stay in seventh period, advanced math.

One day, we were all having trouble with some calculus. We could solve all the problems but we were struggling with the why. We got the formulas but not how they worked. The next day, a kid in my class whose dad was an engineer at IBM came in and said, “I got it! My dad explained it to me.” Ms. Geier, who had probably spent hours figuring out how to teach it to us, just grinned, held out the chalk and said “Show us!”

Several years after that first day of school, Ms. Geier was out of town for a few weeks. Her substitute pulled me aside during break. Sitting at Ms. Geier’s desk, he asked me for help with a math problem and said Ms. Geier had told him that if he had any problems with the math, he should ask me. Me, the kid who was afraid to ask which class was advanced, now trusted to help the math teacher!

Unknown to me, Ms. Geier also intervened on our behalf in other areas. We were having trouble with our science teacher. Several of us were banned from asking questions. One of my classmates was banned from asking questions because her questions were too stupid (she’s now a food scientist) and I was banned because my questions were too ridiculous (too much science fiction?). In all fairness, she did explore my ridiculous questions outside of class, even consulting her college professor. Things eventually got better. Several years later she told me that Ms. Geier had helped her figure out how to cope with us.

Ms. Geier taught me many things. Among them were that it’s ok to love math just because it’s math, that it’s ok to be the expert and let somebody else teach you – not just ok but exciting, that it’s ok to be the expert and not know all the answers, that sometimes people learn best from peers, that solving problems together is fun, and much more. I owe a lot of who I’ve become in my career to Mimi.

I, and many generations of math students, will miss Mimi Geier.

Despicable SPI

So apparently, this happened, and then this. Long story short, the elementary OS guys had been offered the use of SPI as the legal entity to represent the project, something they didn't need at all. Since they declined, Joshua Drake, apparently a director at SPI, decided to threaten them with bad press all over if they didn't agree to join SPI. Which he then did: he started several threads on reddit and wrote a blog post trying to undermine the project (the post is now deleted), followed by this aberration of an apology (which is total BS and shows how much of an ass he is).

I seriously don't get why this guy has not been fired from the SPI organization immediately, this sort of bullying behaviour should not be allowed and, at least in my book, an apology means nothing. Someone like that does not belong to an organization that is supposed to help free software thrive and protect its communities.

I don't get how SPI expects the community to trust them at all after this.

I am really angry at this and I would like to express the elementary OS guys all my support.

February 13, 2015

Running a Docs Workshop at DevConf

With Bara Ancincova of Red Hat docs fame, we ran a docs workshop at this year’s DevConf in Brno. It was mostly a follow-up to a Fedora documentation hackfest held at Flock 2014 in Prague. Quite a few people showed up Saturday afternoon, which was a nice surprise, given that docs sessions are usually not among the most attended at a technical conference.

Some of the topics we touched included:

  • DevAssistant. Slavek Kabrda joined us to give a short presentation about what DevAssistant has to offer when it comes to kick-starting your projects. Based on the ideas we first explored at Flock, Slavek and Jaromir Hradilek started to hack on a new assistant that will mostly serve people who are getting started with DocBook, the Publican toolchain, and Fedora documentation. However, there is nothing preventing us from making a step further and including support for other documentation projects, formats, and toolchains, if there is interest (and people willing to help out).
  • The plan is for DevAssistant  to be able to create a complete writing environment that would let you set up a basic structure for your new documentation project, with templates and different content types.
  • DevAssistant works well in both CLI and GUI mode. While developers and some documentation writers might prefer to work on the command line, newcomers often prefer a GUI option. Both options, however, provide an integrated solution that lets you work within a single app (and write in a text editor of choice). This means that DevAssistant aims for both the developer and documentation writer audiences, which often overlap anyway.
  • Building documentation. We talked about Jenkins, which is typically used in a software development environment.  Since Jenkins can run pretty much any commands available on the server, it can also easily be used to take care of your continuous documentation builds. This makes docs QA’ing and reviewing so much easier.
  • Pavel Tisnovsky is working on Jenkinscat, a dashboard for Jenkins to let you easily manage documentation builds.
  • Publishing documentation. This is a long-standing issue in Fedora. We explored the idea of using Jenkins and Jenkinscat for Fedora.
  • Testing your documentation. In this segment, Jaromir Hradilek talked about Emender, an emerging test automation framework for documentation. Its goal is to allow you to run a number of tests against your (semantic) documentation. Ultimately, this can save tons of time on the docs QA front, especially when you are maintaining a huge and ever-increasing number of documents for different projects or products.

A number of great ideas were put forth for future documentation events which I hope we could organize later this year.

Transforming Control flow to Data flow

This is a continuation of my earlier introductory post on the Spatial Computing project I have been working on.

I spent my last few days transforming the traditional control flow of programs into a data flow graph with a producer-consumer relationship between the instructions. As mentioned in my previous post, this data flow, when executed on the data flow architecture we have been trying to build (first for ASICs, and later for general-purpose computers), would exploit the highest level of parallelism available, precisely because of that producer-consumer relationship between the instructions.

Some programming constructs are hard to transform from sequential to data flow: loops, pointers, and pointers to functions are among those that need extensive care. This post is more about the results of my project, so I will show the transformation for a single loop, just for the sake of simplicity:

Sample Code

 int sum_single_loop (int a)
 {
   int sum = 0;

   for (int i = 0; i < a; i++)
     sum = sum + i;

   return sum;
 }

I am making use of the LLVM IR format, so I convert code in any imperative language that has an LLVM front-end available into LLVM IR. This lets me deal with a lot of languages. We have clang, which can emit LLVM IR for any given C code, so I use clang to emit the IR and then run the LLVM passes I have written to transform the control flow graph into a data flow graph. The passes use the CFG (Control Flow Graph) as a reference: they take the necessary information from the CFG and create a DFG out of it while keeping the correctness of the program intact. After all, correctness is the most important thing; we don't want to sacrifice it for performance.
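
For reference, the kind of invocation involved looks roughly like this (the exact flags vary between LLVM versions):

$ clang -S -emit-llvm sum_single_loop.c -o sum_single_loop.ll
$ opt -dot-cfg sum_single_loop.ll -o /dev/null

The second command emits a cfg.<function>.dot file per function, which Graphviz can render as the graph shown below.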

Here is the corresponding CFG for the code above (generated using clang and opt from LLVM), and below it the transformed DFG (produced by my LLVM passes).

The graph above consists of 5 basic blocks, with their labels at the top.

You should be able to understand all the LLVM instructions if you are familiar with assembly-level programming, except perhaps the PHI instruction. A PHI instruction takes a set of pairs (two in this case) and assigns the first element of one of these pairs to the variable on its left side; the second element of each pair names the basic block that the value will come from. Since there are two candidate values here, the variable is assigned as soon as any one of them arrives. The constant 0 in the PHI instruction is there to trigger the program from outside.

Here is the transformed DFG, built from the CFG above. You can see I have used Steer nodes to eliminate the branch instructions. A steer node, as the name suggests, sends the value coming in at its top out to its bottom left if the select pin (coming in on its right) is true, and out to its bottom right if it is false; in other words, it works as a demultiplexer. You can also see that some of the steer nodes have only one output, on the bottom left, and none on the bottom right: the value on the right is not needed and hence is sunk.

Also, note that the sum.0 and i.0 instructions in the DFG above still act as triggers when executing this data flow graph, due to the PHI instructions explained above.

The LLVM passes I have written are in a very dirty state as of now. I still have to deal with Load/Store instructions so that I can handle pointers, since pointers and the memory operations that use them are fundamental if I have to cover any programming language fully. I also need to add the mechanism of waves, which we have solved theoretically, to keep the data flow cycles in sync with each other and maintain program correctness.

16F1454 RA4 input only

To save someone else a wasted evening, RA4 on the Microchip PIC 16F1454 is an input-only pin, not I/O as stated in the datasheet. In other news, I’ve prototyped the ColorHug ALS on a breadboard (which, it turns out, was a good idea!) and the PCB is now even smaller. 12x19mm is about as small as I can go…
