GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

May 26, 2016

Google Inbox Notifications

Do you use Google Inbox? Do you miss getting notifications on your desktop when new mail arrives? In Gmail you can opt in to desktop notifications, and the tab title reflects how many unread messages you have.


I made a Firefox add-on that brings that functionality to Google Inbox. It gives you a notification when new mail arrives and updates the page's title with the unread mail count. You can get it here!


Autonomous Plant Watering

Here in our house we keep our plants on the threshold of death. They are in constant limbo as we remember to water them every few months. It is really quite disgraceful.

A pathetic house plant: our sad sad plant.

One day I looked at our plants and thought we needed to do something. Go and water them? The plant doesn’t just need water. It is suffering from the systemic problem of us never remembering to water it. No. Watering the plant would be too easy, we need a technological solution that will hydrate the plant and not require us to change our comfortable habit of neglect.

I googled “automated house plant watering” and the first link that came up was an instructable. It promised to be a cheap and easy project, so I went ahead and got all the materials: an aquarium air pump, some tubing and valves. I then followed the instructions and assembled the parts as described. The result was underwhelming. I got some gurgling at the plant end of the tube, not a steady or measurable flow. I don’t really understand the physics that makes that system work, but it has a lot to do with water pressure: the deeper your reservoir, the more efficiently the water gets pumped. As the water gets consumed, the pump gives less output. So to get optimal plant-watering we would need to make sure the tank is always full. What’s the point of automating it if you have to fill the tank after every watering? This won’t do. I plan to put this on a timer. I need to know that if I have it on for 1 minute it will always pump a comparable amount of water to the plant.

So I got to thinking, how can an air pump pump water? Specifically, how can it pump water with a constant pressure? I came up with this schematic:

Brilliant schematic: air is pumped into a sealed jar with an outlet tube that relieves the jar’s pressure by pushing water out through another tube!

I grabbed a mason jar and drilled two 3/16″ holes in the lid. This let me insert the two air tubes, which are slightly thicker (5mm). They fit snugly and formed a seal.

Jar lid with two tubes running through

Next I attached one of the tubes to the pump and placed the end of the second one in a glass. Turned on the pump and! Yes! It works! Science! I was getting a steady flow of water, and the jar emptied at a constant rate. This setup will do. I’m so pleased with this!

I splurged and got a slightly more expensive pump with 4 outputs so I can water 4 plants individually.

Aquarium air pump with 4 outlets

This setup has a few advantages over other pump setups:

  • It is cheap. So far the bill of parts is around $12.50.
  • It offers predictable water throughput.
  • You can connect any sealable container. Don’t want to refill the water after 32oz of watering? Get a gallon jug.
  • If the reservoir runs dry the motor won’t catch fire. That apparently is a thing with water pumps.
  • Since the water only goes through a simple tube and not an expensive motor, you can pump a nutrient solution, if you want to pamper your plants. We don’t.

I made a stupid pump. Why is that cool? Because with a WiFi plug it becomes smart! It is now a Connected Device™. I plugged it into a Bayit WiFi socket and set it to turn on for 20 seconds each Monday afternoon. That will feed our plants about half a cup a week. If we like the results we may extend it to a full cup!

Bayit WiFi Socket: the Internet is in control!

A Word on WiFi Sockets

They suck. They take the simplest operation of closing a circuit and abstract it in a shitty smartphone app that only works half the time. Well, at least that has been my impression with this Bayit gadget. For my next project I am going to use a Kankun smart plug. Apparently it runs OpenWRT and is very hackable.

Bill of Materials

Air Pump    $6.99
Air tube    $3.42
Mason jar   $2.09
WiFi plug   $24.99

May 25, 2016

GStreamer Spring Hackfest 2016

After missing the last few GStreamer hackfests I finally managed to attend this time. It was held in Thessaloniki, Greece’s second-largest city. The city is by the seaside, and the entire hackfest and related activities were either directly by the sea or just a couple of blocks away.

Collabora was very well represented, with Nicolas, Mathieu and Lubosz also attending.

Nicolas concentrated his efforts on making kmssink and v4l2dec work together to provide zero-copy decoding and display on an Exynos 4 board without a compositor or other form of display manager. Expect a blog post soon explaining how to make this all fit together.

Lubosz showed off his VR kit. He implemented a viewer for planar point clouds acquired from a Kinect. He’s working on a set of GStreamer plugins to play back spherical videos. He’s also promised to blog about all this soon!

Mathieu started the hackfest by investigating the intricacies of Albanian customs, then arrived on the second day in Thessaloniki and hacked on hotdoc, his new fancy documentation generation tool. He’ll also be posting a blog about it, however in the meantime you can read more about it here.

As for myself, I took the opportunity to fix a couple of GStreamer bugs that really annoyed me. First, I looked into bug #766422: why glvideomixer and compositor didn’t work with RTSP sources. Then I tried to add a ->set_caps() virtual function to GstAggregator, but it turns out I first needed to delay all serialized events to the output thread to get predictable outcomes, and that was trickier than expected. Finally, I got distracted by a bee and decided to start porting the contents of docs.gstreamer.com to Markdown and updating it to the GStreamer 1.0 API so we can finally retire the old GStreamer.com website.

I’d also like to thank Sebastian and Vivia for organising the hackfest and for making us all feel welcome!

GStreamer Hackfest Venue

Blog backlog, Post 3, DisplayLink-based USB3 graphics support for Fedora

Last year, after DisplayLink released the first version of the supporting tools for their USB3 chipsets, I tried it out on my Dell S2340T.

As I wanted a clean way to test new versions, I took Eric Nothen's RPMs and kept them updated as newer versions were released, automating the creation of 32- and 64-bit x86 versions.

The RPM contains 3 parts: evdi, a GPLv2 kernel module that creates a virtual display; the LGPL library to access it; and a proprietary service which comes with "firmware" files.

Eric's initial RPMs used the precompiled libevdi.so, and proprietary bits, compiling only the kernel module with dkms when needed. I changed this, compiling the library from the upstream repository, using the minimal amount of pre-compiled binaries.

This package supports quite a few OEM devices, but it does not work correctly with Wayland, so you'll need to disable Wayland support in /etc/gdm/custom.conf if you want it to work at the login screen and without having to restart the displaylink.service systemd service after logging in.
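For reference, this is the usual tweak; a stock install already has the [daemon] section in /etc/gdm/custom.conf, so this is just a minimal sketch of the one key to add:

# /etc/gdm/custom.conf
[daemon]
# Fall back to an Xorg session so the DisplayLink output works at the login screen
WaylandEnable=false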


 Plugged in via DisplayPort and USB (but I can only see one at a time)


The sources for the RPM are on GitHub. Simply clone and run make in the repository to create 32-bit and 64-bit RPMs. The proprietary parts are redistributable, so if somebody wants to host and maintain those RPMs, I'd be glad to pass this on.

May 24, 2016

LVFS Technical White Paper

I spent a good chunk of today writing a technical whitepaper titled Introducing the Linux Vendor Firmware Service, and I’d really appreciate any comments, either from people who have seen all the progress from the start or from people who don’t know anything about it at all.

Typos or more general comments are all welcome, and once I’ve got something a bit more polished I’ll be sending this to some important suits at a few well-known companies. Thanks for any help!

I/O bursts with QEMU 2.6

QEMU 2.6 was released a few days ago. One new feature that I have been working on is the new way to configure I/O limits in disk drives to allow bursts and increase the responsiveness of the virtual machine. In this post I’ll try to explain how it works.

The basic settings

First I will summarize the basic settings that were already available in earlier versions of QEMU.

Two aspects of the disk I/O can be limited: the number of bytes per second and the number of operations per second (IOPS). For each one of them the user can set a global limit or separate limits for read and write operations. This gives us a total of six different parameters.

I/O limits can be set using the throttling.* parameters of -drive, or using the QMP block_set_io_throttle command. These are the names of the parameters for both cases:

-drive                   block_set_io_throttle
throttling.iops-total    iops
throttling.iops-read     iops_rd
throttling.iops-write    iops_wr
throttling.bps-total     bps
throttling.bps-read      bps_rd
throttling.bps-write     bps_wr

It is possible to set limits for both IOPS and bps at the same time, and for each case we can decide whether to have separate read and write limits or not, but if iops-total is set then neither iops-read nor iops-write can be set. The same applies to bps-total and bps-read/write.

The default value of these parameters is 0, and it means unlimited.

In its most basic usage, the user can add a drive to QEMU with a limit of, say, 100 IOPS with the following -drive line:

-drive file=hd0.qcow2,throttling.iops-total=100

We can do the same using QMP. In this case all these parameters are mandatory, so we must set to 0 the ones that we don’t want to limit:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0
     }
   }

I/O bursts

While the settings that we have just seen are enough to prevent the virtual machine from performing too much I/O, it can be useful to allow the user to exceed those limits occasionally. This way we can have a more responsive VM that is able to cope better with peaks of activity while keeping the average limits lower the rest of the time.

Starting from QEMU 2.6, it is possible to allow the user to do bursts of I/O for a configurable amount of time. A burst is an amount of I/O that can exceed the basic limit, and there are two parameters that control them: their length and the maximum amount of I/O they allow. These two can be configured separately for each one of the six basic parameters described in the previous section, but here we’ll use ‘iops-total’ as an example.

The I/O limit during bursts is set using ‘iops-total-max’, and the maximum length (in seconds) is set with ‘iops-total-max-length’. So if we want to configure a drive with a basic limit of 100 IOPS and allow bursts of 2000 IOPS for 60 seconds, we would do it like this (the line is split for clarity):

   -drive file=hd0.qcow2,
          throttling.iops-total=100,
          throttling.iops-total-max=2000,
          throttling.iops-total-max-length=60

Or with QMP:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0,
        "iops_max": 2000,
        "iops_max_length": 60,
     }
   }

With this, the user can perform I/O on hd0.qcow2 at a rate of 2000 IOPS for 1 minute before it’s throttled down to 100 IOPS.

The user will be able to do bursts again if there’s a sufficiently long period of time with unused I/O (see below for details).

The default value for ‘iops-total-max’ is 0 and it means that bursts are not allowed. ‘iops-total-max-length’ can only be set if ‘iops-total-max’ is set as well, and its default value is 1 second.

Controlling the size of I/O operations

When applying IOPS limits, all I/O operations are treated equally regardless of their size. This means that a user could take advantage of it to circumvent the limits by submitting one huge I/O request instead of several smaller ones.

QEMU provides a setting called throttling.iops-size to prevent this from happening. This setting specifies the size (in bytes) of an I/O request for accounting purposes. Larger requests will be counted proportionally to this size.

For example, if iops-size is set to 4096 then an 8KB request will be counted as two, and a 6KB request will be counted as one and a half. This only applies to requests larger than iops-size: smaller requests will always be counted as one, no matter their size.

The default value of iops-size is 0 and it means that the size of the requests is never taken into account when applying IOPS limits.
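As a quick sketch, combining a basic IOPS limit with request-size accounting on the command line looks like this (again splitting the line for clarity):

   -drive file=hd0.qcow2,
          throttling.iops-total=100,
          throttling.iops-size=4096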

Applying I/O limits to groups of disks

In all the examples so far we have seen how to apply limits to the I/O performed on individual drives, but QEMU allows grouping drives so they all share the same limits.

This feature is available since QEMU 2.4. Please refer to the post I wrote when it was published for more details.
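As a rough sketch from memory (treat the exact parameter name as an assumption and check that post or the QEMU documentation), you give each drive the same group name and they then share a single set of limits:

   -drive file=hd0.qcow2,throttling.iops-total=100,throttling.group=limits0
   -drive file=hd1.qcow2,throttling.iops-total=100,throttling.group=limits0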

The Leaky Bucket algorithm

I/O limits in QEMU are implemented using the leaky bucket algorithm (specifically the “Leaky bucket as a meter” variant).

This algorithm uses the analogy of a bucket that leaks water constantly. The water that gets into the bucket represents the I/O that has been performed, and no more I/O is allowed once the bucket is full.

To see the way this corresponds to the throttling parameters in QEMU, consider the following values:

  iops-total=100
  iops-total-max=2000
  iops-total-max-length=60
  • Water leaks from the bucket at a rate of 100 IOPS.
  • Water can be added to the bucket at a rate of 2000 IOPS.
  • The size of the bucket is 2000 x 60 = 120000.
  • If iops-total-max is unset then the bucket size is 100.


The bucket is initially empty, therefore water can be added until it’s full at a rate of 2000 IOPS (the burst rate). Once the bucket is full we can only add as much water as it leaks, therefore the I/O rate is reduced to 100 IOPS. If we add less water than it leaks then the bucket will start to empty, allowing for bursts again.

Note that since water is leaking from the bucket even during bursts, it will take a bit more than 60 seconds at 2000 IOPS to fill it up. After those 60 seconds the bucket will have leaked 60 x 100 = 6000, allowing for 3 more seconds of I/O at 2000 IOPS.

Also, due to the way the algorithm works, longer bursts can be done at a lower I/O rate, e.g. 1000 IOPS during 120 seconds.

Acknowledgments

As usual, my work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the QEMU development team.


Enjoy QEMU 2.6!

Week 1 of May-August Outreachy

Week 1 of the May-August Outreachy internships starts today! I'm very excited to work with Diana, Ciarrai and Renata on usability testing in GNOME.

The Outreachy internship requires that interns maintain a blog, writing at least every other week. This shouldn't be a problem for the usability project. For the first few weeks, I'll essentially give Diana, Ciarrai and Renata a research topic to look into and write about on their blogs. I've structured the topics so that we build up to creating our usability tests.

I'll also make a copy of my emails as blog posts on my Open Source Software & Usability blog, so we can use the comments as a sort of discussion forum. If you follow my blog, feel free to participate and comment! Expect a new post every Sunday or Monday.

For week 1, we're going to start with two topics:

1. What is usability?

Some questions that might help you think about this week's topic: What does "usability" mean to you? Can you find a definition of "usability"? Some researchers in this area make a distinction between "big U" Usability and "small u" usability - why is that, and how are they different? How does "usability" differ from "User Experience" (UX)? What are we trying to get out of "usability"?

(You don't have to answer all of those questions in your post, but it may help to pick one or two and write about those.)

I've included a few links to references that you might consider in discussing this topic. There are lots of places with information about usability, so feel free to find other resources if you prefer.


As you learn about usability, also think about what is "good" usability to you, and what is "poor" usability. Is usability something that's universal? In your experience, which open source software programs have good usability? Which have poor usability?

These are great questions to start off. I find it's a good idea to create this "baseline" for first impressions about usability. We will expand on this as we get further along in the internship.

2. How to test usability?

There are many ways to examine the usability of a program, more than the few ideas presented in this week's articles. How do you test the usability of a program? What other ideas do you suggest? Would you use different methods for a small program vs a larger program, or the same?

I've included a few links to references that you might consider in discussing this topic. What other references can you find?


Feel free to use the comments section here as a kind of discussion forum.

May 23, 2016

64-bit Debian on a Bay Trail tablet


After successfully building 32-bit kernels using the Fedora method, I decided to try 64-bit Linux on my ASUS Transformer Book T100TA. The Debian multi-arch installer successfully deals with the 32-bit UEFI boot installation, and even better, certain pre-packaged Ubuntu kernels can simply be installed. Here’s my experience with the upgrade.

I started with the DebianOn ASUS T100TA wiki page. Particularly crucial is the grub command line switch for the cstates issue.
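I won't duplicate the wiki here, but as an illustration, the usual Bay Trail advice is to limit the CPU C-states on the kernel command line; the exact switch below is an assumption on my part, so double-check it against the wiki page before rebooting:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1"
# then regenerate the grub configuration
sudo update-grub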

After a bit of trial and error with the install ISOs (I couldn't locate the correct wifi firmware on the Jessie non-free firmware netinst ISO, and the Alpha 5 Stretch multi-arch DVD ISO would start but never complete the install to the eMMC drive), I settled on the Jessie multi-arch DVD ISO with the 3.16 kernel. This gave me a running Cinnamon desktop; the ISO didn't contain enough packages to switch to GNOME.

I located and installed the wifi firmware according to the wiki page instructions and connected using wpa_supplicant. The upgrade to Stretch and GNOME was a challenge because the wifi connection would drop every few dozen megabytes. It finished after several hours, and I had a running GNOME 3.20 desktop (something I never achieved on the Fedora install). I then…

  • switched to NetworkManager and re-installed the wifi firmware (the latter seemed to fix the issue of the connection cutting out).
  • installed non-free intel-sound firmware and the t100_B.state file, then applied Vinod Koul’s settings for working audio (be sure to keep the volume down for testing).
  • enabled and started ModemManager and installed the mobile-broadband-provider-info package to tether my phone.

The ASUS T100 Ubuntu Google+ community is a volunteer effort geared toward establishing which kernel patches, firmware, and configuration settings are required to get the hardware in the various T100* models working in Linux. Although the stock Debian Stretch kernel (4.5.0) boots with working wifi, the Ubuntu kernel packages from the G+ page (4.4.8.2 and 4.4.9.1) boot with working wifi, sound, mobile broadband, and Bluetooth. There’s still work being done on the camera driver and I haven’t tried anything with the touch screen, but at the moment my GNOME tablet experience is reasonably complete.
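Installing those pre-packaged kernels is the usual dpkg routine; a sketch, assuming you have downloaded the matching image and headers .deb files from the G+ page into the current directory:

$ sudo dpkg -i linux-image-*.deb linux-headers-*.deb
$ sudo update-grub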

Corrections and suggestions welcome.

 

Endless and Codethink team up for GNOME on ARM

A couple of months ago Alberto Ruiz issued a Call to Arms here on planet GNOME. This was met with an influx of eager contributions, including a wide variety of server-grade ARM hardware, rack space and sponsorship to help make GNOME on ARM a reality.

Codethink and Endless are excited to announce their collaboration in this initiative and it’s my pleasure to share the details with you today.

Codethink has donated 8 cartridges in our Moonshot server, dedicated to building GNOME things for ARM architectures. These cartridges are AppliedMicro™ X-Gene™ with 8 ARMv8 64-bit cores at 2.4GHz, 64GB of DDR3 PC3L-12800 (1600MHz) memory and 120GB of M.2 solid-state storage.

Endless has also enlisted our services for the development and deployment of a Flatpak (formerly known as xdg-app) build farm to run on these machines. The goal of this project is to build and distribute both stable and bleeding edge versions of GNOME application bundles and SDKs on a continuous basis.

And we are almost there!

After one spontaneous hackfest and a long list of patches, I am happy to report that runtimes, SDKs and apps are building and running on both AArch64 and 32-bit ARMv7-A architectures. As a side effect of this effort, Flatpak SDKs and applications can now also be built for 32-bit Intel platforms (this may have already been possible, but not from an x86_64 build host).

The builds are already automated at this time and will shortly be finding their way to sdk.gnome.org.

In the interest of keeping everything repeatable, I have been maintaining a set of scripts which set up nightly builds on a build machine and can be configured to build various stable/unstable branches of the SDK and app repositories. These are capable of building our 4 supported target architectures: x86_64, i386, aarch64 and arm.

Currently they are only well tested with vanilla installations of Ubuntu 16.04 and are also known to work on Debian Stretch, but it should be trivial to support some modern RPM-based distros as well.

Stay tuned for further updates on GNOME’s newfound build farm, brought to you by Endless and Codethink!

GNOME.Asia Summit 2016

This year’s summit was held at Manav Rachna International University (MRIU), which is located in the Faridabad district near Delhi. It’s a quiet, beautiful and very, very hot place, and it gave me a lot of wonderful memories.

I arrived at noon on the 21st of April, so I was able to take part in the workshop “GStreamer in your GNOME: Hands-on multimedia hacking”, where I learned how to play audio and video from the command line and from code, step by step.

On the first day of the conference, the local team was very, very enthusiastic: if you forgot to register, they would find you with the registration book and ask you to fill in your name, email and phone number.


After the long official speeches, Cosimo and Pravin gave the keynotes: the next billion GNOME users, and Indian Languages in GNOME 3.20.


On the second day, I listened to ‘Contribute to GNOME’ from Daki, ‘GNOME in China’ from ZhengNing, and ‘5 years of GNOME 3’ from Tobi. After each talk the dean of MRIU awarded a certificate to thank the speaker.


In the lightning talks I introduced our local user group and GNOME.Asia.


At night, the local students very patiently taught us how to play cricket, which is extremely popular locally.


On the last day the local team took us to the Taj Mahal.


Thanks to the GNOME Foundation for sponsoring, and to everyone who helped make this awesome event happen.

Flickr group:  https://www.flickr.com/groups/gnomeasia2016/


Two OSCON Conversations, And A Trip Report Between Them

Conversation the last night of OSCON, reconstructed from memory:
"So, Neil Young --"
"The singer-songwriter?"
"Yeah."
"Man, what a white guy name."
"Are you impugning Neil Young? That man sells organic eggs at the farmer's market in my hometown!"
"Are you sure they're organic? You sure he doesn't just get them from Sysco?"
"Sysco doesn't even sell organic eggs."
"Uh, actually Sysco does sell organic eggs."
"Yeah right. I bet it's, like, orgänic, with an umlaut, so it can get around the FDA rules on calling things 'organic'."
"Yeah, and that's also how they get around the rules on Appellation Contrôlée."
"Anyway, Neil Young has a son who really likes model trains, and so he does model train stuff, and he actually has multiple patents related to model trains."
"Patents? Is he part of the Open Invention Network? Is this a defensive patent portfolio against Big Train?"
"You mean Supertrain?"

Picture of Sumana in front of a steam train in Melbourne, Australia

In honor of late-night wackiness, I have not actually fact-checked whether Neil Young has any model train patents, or whether he or Sysco sell organic eggs. Caveat lector et caveat emptor.

My last visit to OSCON was in 2011, when I had worked for the Wikimedia Foundation for under a year, and wanted to build and strengthen relationships with the MediaWiki and PHP communities. I remember not feeling very successful, and thinking that this was a conference where executives and engineers (who in many cases are not terribly emotionally passionate about open source) meet to hire, get hired, and sell each other things.

These days, it turns out, OSCON is basically aimed at me! Because I am an executive trying to sell my services (e.g. get hired on an as-needed basis) to engineers and executives who make or depend on open source -- including the passionate ideologues, the burned-out used-to-be-passionate folks, the pragmatists, and all manner of other types. Changeset Consulting was novel, relevant, and interesting to approximately everyone I spoke with. Also, in the intervening five years, I've grown in skill, stature, reputation, and network. So I had something interesting to say, and O'Reilly accepted a talk proposal of mine for the first time. I saw scores of acquaintances, friends, peers, and colleagues in the halls and session rooms this year; I know and am known, which helps me feel at home.

I'm grateful to Audra Montenegro at O'Reilly Media for her speaker support. She worked with O'Reilly to cover my flight plus two hotel nights in Austin, making it possible for me to attend OSCON. She and other O'Reilly folks paid and worked with White Coat Captioning to provide CART (live captions) for my talk, at my request. And I was concerned that talking about inessential weirdnesses and inclusivity in open source upped my risk of harassment, so she arranged for some extra security for me. I'm also grateful to Andy Oram, my session chair, for his careful pre-conference prep (including checking on my pronouns -- she/her, thanks!), and for running a written Q&A session at my request.

I shall carry with me several memories of this OSCON, such as:

  • Breaking the no-electronics rule of the quiet room/relaxation lounge (since I was the only one using it) to finish revising my talk and blog about good code review
  • Staying with some lovely relatives in the suburbs for a few days and drinking spinach juice with them each morning (my uncle is in charge of making it, and sometimes he adds grape or orange juice, and sometimes something else, and sometimes it's just spinach. It's a surprise. "Every day's a new day," he said to me)
  • Helping my high schooler cousin revise a skit, and deploying my stand-up comedy wisdom, e.g., use over-the-top worshipful admiration as a kinder substitute for straight-up mockery
  • Being the only person in the pool at the Software Freedom Conservancy party (foolish choice on my part; it was pretty cold)
  • Meditating on a loved one during an exercise in Cate Huston's talk on technical interviewing, and feeling the lovingkindness throughout the whole room
  • Listening to an enthralling performance by rapper Sammus
  • Discussing pull request workflow, Contributor Licensing Agreements, the engagement funnel, the future of OpenHatch and of Apache Zookeeper, the failings of Google Summer of Code, whether GitHub should allow no-license repositories, how Facebook/Amazon/Google's idiosyncrasies come out in their open source work, performing femininity in the workplace, productivity and routines, effectively signalling to one's employees that they should not work on weekends, what maintainership is, JavaScript and data binding, and BLESS
  • Becoming increasingly convinced that continuous integration/continuous delivery is hitting an inflection point that source control hit several years ago, i.e., soon it will be a must-have for software-making communities and not having it will be embarrassing
  • Chatting with OSCON acquaintances in an Austin hotel lobby and being interrupted by a drunk white woman who called me "Mindy Lahiri" (a fictional character from The Mindy Project)
  • Opting out of the millimeter-wave scanner at the Austin airport and witnessing a person wearing an EFF shirt not opt out!

But here's a conversation that I particularly find stays with me. I was on the expo floor, talking with an acquaintance, and a stranger joined our conversation. I'll anonymize this recollection and call him Cyclopes.

He heard about the consulting services I offer, which focus on short-term project management and release management for open source projects and for companies that depend on them -- maintainership-as-a-service, in short.

Cyclopes: "Can you come to my company and tell us that the way we're doing deployments is all wrong, and tell them we should do it my way?"
Sumana: "Well, if your company hires me, and I assess how you're doing deployments and I think it's wrong, and I agree with the approach you want, then yes."
Cyclopes: "Great!" [explains his preferred deployment workflow, with justification, and says that it's like bashing his head against a brick wall for him to try to convince the rest of his company to do it]
Sumana: [lightheartedly] "So it sounds like what you really want is more a sockpuppet or an actor! Which might be cheaper!"
Cyclopes: "Hmmmm! You know, that is kind of what I want!"
[we three jest about this]
Sumana: "But, in all seriousness, like, you could go aboveboard with this. You know, you have options -- fraud and deception, or actually trying to persuade the other people in your org .... or, this is a wild guess, have you kind of burned bridges by being really abrasive, and now they won't listen to you?"
Cyclopes: "Yeaaaaaaaaah that might be what's happened."
Sumana: "OK! That totally happens and you weren't the first and you won't be the last."
Cyclopes: "Like, I've sent some pretty flame-y emails."
[acquaintance nods in sad agreement; we are all sinners here]
Sumana: "Oh man, the emails I've sent. I am so embarrassed when I look back. But you can come back from this. You really can. If you demonstrate to those people in your org that you really want to repair those relationships with them, they will respond. Like, if you say to them, 'I know I burned bridges before, I'm sorry about it, can we talk about this problem,' and you actually try to listen to them about what they need and what their context is, what they're worried about and what problems they're trying to solve, they will work with you, so you can figure out something that works for everybody. There's a book about this, about negotiation, that's a really short, quick read, it helped me a lot: Getting to Yes by Fisher and Ury."
Cyclopes: "Let me write this down." [notes book title and author on the back of my business card]
Sumana: "There you go, some free consulting for you. It's totally possible."
Cyclopes: "I think I could use that, I'm ripe for that kind of personal transformation in my life."
Sumana: "Great! Please, seriously, let me know how it works out."

Memory is unreliable, but I cannot forget the speed of my diagnosis, nor that this guy literally said that he was ripe for the kind of personal transformation I was prescribing. It's no model train patent but I think I'm happy anyway.

Year of the Linux Desktop

As some of you already know, the xdg-app project is dead. The Swedish conspiracy members tell me it’s a good thing and that you should turn your attention to project Flatpak.

Flatpak aims to solve a painful problem of the Linux distribution: the fact that the OS is intertwined with the applications. It is a pain to decouple the two so that you can do the following (a quick user-side sketch follows the list):

  • Keep a particular version of an app around, regardless of OS updates. Or vice versa, be able to run an up-to-date application on an older OS.
  • Allow application authors to distribute binaries they built themselves. Binaries they can support and accept useful bug reports for. Binaries they can keep updated.
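Roughly, that decoupling looks like this from the user side; the remote and application names below are placeholders, and the exact command set may differ between Flatpak releases:

# install an app from a configured remote, run it, and update it later,
# all without touching the OS (names below are made up for illustration)
flatpak install example-remote org.example.App
flatpak run org.example.App
flatpak update org.example.App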

But enough of the useful info, you can read all about the project on the new website. Instead, here come the irrelevant tidbits that I find interesting to share myself. The new website has been built with Middleman, because that’s what I’m familiar with and what has worked for me in other projects.

It’s nice to have a static site that is maintainable and easy to update over time. Using something like Middleman allows you to do things like embed an SVG inside a simple markdown page and animate it with CSS.

=partial "graph.svg"
:css
  @keyframes spin {
    0% { transform: rotateZ(0deg); }
    100% { transform: rotateZ(359deg); }
  }
  #cog {
    animation: spin 6s infinite normal linear forwards;
  }

See it in action.

The resulting page has the SVG embedded to allow text copy & pasting and page linking, while keeping the SVG as a separate asset allows easy edits in Inkscape.

What I found really refreshing is seeing so much outside involvement on the website despite never really publicising it. Even while developing the site as my personal project I would get kind pull requests and bug reports on GitHub. Thanks to all the kind souls out there. While not forgetting about future-proofing our infrastructure, we should probably keep the barrier to entry in mind and make use of well-established infrastructure like GitHub.

Also, there is no Swedish conspiracy. Oh and Flatpak packages are almost ready to go for Fedora.

External Plugins in GNOME Software (6)

This is my last post about the gnome-software plugin structure. If you want more, join the mailing list and ask a question. If you’re not sure how something works then I’ve done a poor job on the docs, and I’m happy to explain as much as required.

GNOME Software used to provide a per-process plugin cache, automatically de-duplicating applications and trying to be smarter than the plugins themselves. This involved merging applications created by different plugins and really didn’t work very well. For 3.20 and later we moved to a per-plugin cache which allows the plugin to control getting and adding applications to the cache and invalidating it when it made sense. This seems to work a lot better and is an order of magnitude less complicated. Plugins can trivially be ported to using the cache using something like this:

 
   /* create new object */
   id = gs_plugin_flatpak_build_id (inst, xref);
-  app = gs_app_new (id);
+  app = gs_plugin_cache_lookup (plugin, id);
+  if (app == NULL) {
+     app = gs_app_new (id);
+     gs_plugin_cache_add (plugin, id, app);
+  }

Using the cache has two main benefits for plugins. The first is that we avoid creating duplicate GsApp objects for the same logical thing. This means we can query the installed list, start installing an application, then query it again before the install has finished. The GsApp returned from the second add_installed() request will be the same GObject, and thus all the signals connecting up to the UI will still be correct. This means we don’t have to care about migrating the UI widgets as the object changes and things like progress bars just magically work.

The other benefit is more obvious. If we know the application state from a previous request we don’t have to query a daemon or do another blocking library call to get it. This does of course imply that the plugin is properly invalidating the cache using gs_plugin_cache_invalidate() which it should do whenever a change is detected. Whether a plugin uses the cache for this reason is up to the plugin, but if it does it is up to the plugin to make sure the cache doesn’t get out of sync.

And one last thing: if you’re thinking of building an out-of-tree plugin for production use, ask yourself if it actually belongs upstream. Upstream plugins get ported as the API evolves, and I’m already happily carrying Ubuntu- and Fedora-specific plugins that either self-disable at runtime or are protected using an --enable-foo configure argument.

GStreamer and Meson: A New Hope

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate XCode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).
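For the curious, the day-to-day developer workflow is pleasantly short. A minimal sketch from the top of a repository that ships Meson build files (configuration options vary per project):

# configure into a separate build directory, then build with Ninja
mkdir build && cd build
meson ..
ninja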

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.

May 22, 2016

crossroad 0.6 released: cross-building GIMP as an example

Hello all!

TL;DR: if you don't care at all about how crossroad works, and you just want to cross-build GIMP in a few commands, see the bottom of the article.

Over the years, I have written a tool to cross-compile software under a GNU/Linux OS for other targets: crossroad. Well, to this day the only supported targets are Windows 32/64-bit. I realized I never advertised this tool much, which, even though it was originally built to help me build and hack a specific project (GIMP), ended up as a generic cross-compilation system for Linux.

“The crossroads” where Robert Johnson supposedly sold his soul to the Devil, by Joe Mazzola (CC by-sa 2.0)

So today, I will explain crossroad by giving a concrete example:

Cross-compiling GIMP on GNU/Linux for Windows systems

The basics

The cool stuff about crossroad is that it allows you to cross-build as easily as you do a native build. For instance, if your build system were the autotools, you would run the famous triptych ./configure && make && make install, right? Well with crossroad, you only wrap the configure step; the rest is the same!
cmake is also fully supported, even though my examples below won’t show it (check out the manual for more).

All crossroad does is prepare a suitable environment for you, with the right environment variables, but also wrappers for various tools (for instance, you get the x86_64-w64-mingw32-pkg-config tool inside a crossroad environment, which is no more than a wrapper that detects packages inside the cross-built prefix only).
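For example, once inside a w64 environment (and after installing a package, as shown later), something like the following should report only the copy living in the cross-built prefix rather than your native one; glib-2.0 is just an example module name:

$ x86_64-w64-mingw32-pkg-config --modversion glib-2.0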

Enough chit-chat. Let’s install crossroad!

pip3 install crossroad

Running crossroad the first time

I will now go through a full cross-build of GIMP for Windows 64-bit, from a brand new crossroad install. It may be a little long and boring, but I want to show you a full process. So you can just run:

$ crossroad --list-targets
crossroad, version 0.6
Available targets:

Uninstalled targets:
w32 Windows 32-bit
w64 Windows 64-bit

See details about any target with `crossroad --help <TARGET>`.

Ok so at this point, you may have no cross-compilation compiler installed. Let’s see what I need for Windows 64-bit.

$ crossroad --help w64
w64: Setups a cross-compilation environment for Microsoft Windows operating systems (64-bit).

Not available. Some requirements are missing:
- x86_64-w64-mingw32-gcc [package "gcc-mingw-w64-x86-64"] (missing)
- x86_64-w64-mingw32-ld [package "binutils-mingw-w64-x86-64"] (missing)

The package names were actually based off Mint or Mageia, I think. If you use Fedora, for instance, you will want to install mingw64-gcc and mingw64-binutils. Just adapt to your case.

Let’s check the output of crossroad again:

$ crossroad --help w64
w64: Setups a cross-compilation environment for Microsoft Windows operating systems (64-bit).

Installed language list:
- C
Uninstalled language list:
- Ada Common package name providing the feature: gnat-mingw-w64-x86-64
- C++ Common package name providing the feature: g++-mingw-w64-x86-64
- OCaml Common package name providing the feature: mingw-ocaml
- Objective C Common package name providing the feature: gobjc++-mingw-w64-x86-64
- fortran Common package name providing the feature: gfortran-mingw-w64-x86-64

We could stop here, but I actually know some programs make use of the C++ compiler, so let’s also install `mingw64-gcc-c++`. And now we are good to go!
I create a Windows 64-bit cross-build environment, that I will call “gimp-build”:

$ crossroad w64 gimp-build
Creating project "gimp-build" for target w64...
You are now at the crossroads...
Your environment has been set to cross-compile the project 'gimp-build' on Windows 64-bit (w64).
Use `crossroad help` to list available commands and `man crossroad` to get a full documentation of crossroad capabilities.
To exit this cross-compilation environment, simply `exit` the current shell session.

Dependencies

babl

Now let’s cross-build babl! Very easy one to start with because it does not have any dependencies:

$ cd /path/to/src/of/babl/
$ crossroad configure && make && make install

Done! You’ll notice that you don’t even need to specify a prefix; crossroad takes care of it.

GExiv2

$ cd /path/to/src/of/gexiv2
$ crossroad configure

The configure ends up with a pretty clear error:

checking for GLIB... no
configure: error: GLib is required. Please install it.

Well, basically you have to install glib. And then you think: the hell is starting! Do I have to build every dependency by hand? Fortunately no! crossroad relies on the openSUSE cross-platform builds and can automatically download many dependencies for you. The reason I don’t do it for all dependencies (babl, GEGL, libmypaint and GExiv2 in this tutorial) is that packages are either too old or not available. Install glib with:

$ crossroad install glib2-devel

That’s all! Neat, right? Well, you also have to install Exiv2 (trying to run the configure again, you’d see another error; I spare you the wasted time). You can run a separate command, or you could have installed them both at the same time:

$ crossroad install glib2-devel libexiv2-devel

Try again:

$ crossroad configure

In the configuration summary, you’ll see that introspection and the Python bindings won’t be built. But since they are not needed for GIMP, we don’t care.

$ make && make install

And now GExiv2 has been cross-built too!

gegl

Now for gegl:

$ cd /path/to/src/of/gegl
$ crossroad configure

Once again, you’ll get a few dependency errors. I spare you the trial-and-error process. Here are all the other mandatory libs to install:

$ crossroad install libjson-glib json-glib-devel libjpeg8-devel libpng-devel

And for a fuller list of optional dependencies:

$ crossroad install cairo-devel libtiff-devel librsvg-2-2 librsvg-devel pango-devel libwebp5 libwebp-devel libjasper-devel gdk-pixbuf-devel

Well there are actually more dependencies you could install or build yourself, but I don’t think they are really needed for GIMP. I’ll just leave it there. Now I build:

$ make

But then… a build error (I’ll admit: I left a dependency error on purpose!)!

In file included from /usr/include/SDL/SDL_main.h:26:0,
from /usr/include/SDL/SDL.h:30,
from /home/jehan/dev/src/gegl/operations/external/sdl-display.c:35:
/usr/include/SDL/SDL_stdinc.h:74:20: fatal error: iconv.h: No such file or directory
compilation terminated.
Makefile:1277: recipe for target 'sdl_display_la-sdl-display.lo' failed
make[4]: *** [sdl_display_la-sdl-display.lo] Error 1
make[4]: Leaving directory '/home/jehan/dev/crossbuild/w64/gegl/operations/external'[…]

Well, that’s a good example to debug. Look closely at the paths: /usr/include/[…] is definitely for the native platform! So something is wrong.
Also I don’t even remember having installed SDL through crossroad anyway. Well if you were to check out the config.log, you would understand why the configure script found it:

configure:22069: checking for sdl-config
configure:22087: found /usr/bin/sdl-config
configure:22100: result: /usr/bin/sdl-config

Sometimes projects use their own *-config script instead of relying on pkg-config (SDL seems to also have a pkg-config file, not sure why we’d prefer using sdl-config, but well…). And while pkg-config allows you to really separate the build and target environments, I can’t remove the current $PATH because a lot of native tools are needed in a cross-compilation. All this to say: you may encounter issues with software distributing custom *-config scripts. Be warned!

Anyway let’s install SDL, don’t forget to re-run configure (otherwise the native lib will still be used), then build again.

$ crossroad install libSDL-devel && crossroad configure && make && make install

Easy said, easy done.

libmypaint

Only a single dependency (json-c), then classical configure, make, make install:

$ crossroad install libjson-c2 libjson-c-devel && crossroad configure && make && make install

GIMP

And now for the main course! Once again, I spare you some of the dependency hunting. Here are the mandatory deps:

$ crossroad install atk-devel gtk2-devel libbz2-1 libbz2-devel liblzma-devel liblcms2-2 liblcms2-devel

I’ll skip completely Python dependencies. This is the only major part of GIMP I’m still having issues with in my cross-builds so I will run configure with --disable-python.

As for optional dependencies:

$ crossroad install libgs8 libgs-devel libmng1 libmng-devel libpoppler-glib8 libpoppler-glib-devel poppler-data xpm-nox-devel headers iso-codes-devel libwmf-devel


Note 1: libwmf-devel has the same issue as SDL: the configure script would see it as installed if you have it on your main system, because of the libwmf-config script in your $PATH.

Note 2: you can use crossroad search to discover package names. Even better, sometimes you know a file name that configure is looking for (after a dependency error, check out the configure.ac file or the config.log file for this info). For instance, I can’t find a libghostscript package, but I see that GIMP‘s configure is looking for the file ghostscript/iapi.h to detect Ghostscript:

$ crossroad search --search-files iapi.h
"iapi.h" not found in any package name.
The following packages have files matching the search "iapi.h":
- headers
- libgs-devel

Then using crossroad list-files libgs-devel, I can confirm that this is the right package (crossroad info libgs-devel also confirms this is the package for Ghostscript development files).


Back to building GIMP:

$ crossroad configure --disable-python
$ make && make install

Note that I didn’t build in some options, like ASCII art, the help browser, or OpenEXR, because I could not find prebuilt dependencies, but it should not be hard for you to build the dependencies yourself as we did for babl/GEGL/…

Running under Windows!

And now you can, for instance, make a zip to put on a Windows machine, like this:

$ crossroad --compress=gimp-build.zip w64 gimp-build

This will just compress the whole target build, where GIMP for Windows is under bin/gimp-2.9.exe. Now this is *not* an installer or any kind of final package. The target directory is full of files useful for building, yet completely useless for the final installation. crossroad is *not* meant for making installers… to this day at least (that said, I would not be against it if anyone wanted to implement something of the sort. Why not?! Send me a patch!).
Since I usually run Windows in a VirtualBox VM, rather than zipping a whole archive, I would simply share a directory and symlink the target directory like this:

$ cd /path/to/dir/shared/with/windows/vm/
$ crossroad --symlink w64 gimp

Note: in your Windows environment, you must update the PATH environment variable to add your build’s bin/ directory, otherwise Windows won’t find your various built DLLs. You’ll find information on how to do so all over the web.

I use crossroad as a developer, to test and sometimes directly patch GIMP for Windows, without ever leaving my Linux OS. Well, I still test under a Windows VM, which runs in Linux. But I develop and compile in my GNU/Linux shell.
Hopefully this tool can be useful to other people who want to test their programs under Windows while hacking on GNU/Linux.

About having a good build system

If you follow good programming practice and use a full-featured build system such as the autotools or CMake, chances are that you can cross-build your program with barely any fixes to the code. This is for instance what happened when I ported GExiv2 to the autotools after we started using it in GIMP. The GExiv2 maintainer said in a blog post:

This in turn allows gexiv2 to build on Windows, something we’d not targeted at all when we began development but is obviously a hard requirement for GIMP.

Just having a good build system turned their Linux-only library into a multi-platform one. Pretty cool, no? And now libmypaint has had the same fate: I autotooled it, making it just as easy to cross-compile!

But if you have hand-made makefiles (like GExiv2 did), a SCons build (like libmypaint did), or another build system without proper support for cross-building, this won’t be as easy. Think twice when you set up your software. The tool you use for builds is definitely not something to take lightly!

The one-step crossbuild script

This was all for you to see how crossroad works. If you are only interested in building GIMP specifically, I compiled my whole tutorial below into a single build script that you are welcome to copy-paste and run with crossroad like this:

crossroad install glib2-devel libexiv2-devel libjson-glib json-glib-devel \
  libjpeg8-devel libpng-devel cairo-devel libtiff-devel librsvg-2-2 \
  librsvg-devel pango-devel libwebp5 libwebp-devel libjasper-devel \
  gdk-pixbuf-devel libSDL-devel libjson-c2 libjson-c-devel atk-devel \
  gtk2-devel libbz2-1 libbz2-devel liblzma-devel liblcms2-2 \
  liblcms2-devel libgs8 libgs-devel libmng1 libmng-devel \
  libpoppler-glib8 libpoppler-glib-devel poppler-data \
  xpm-nox-devel headers iso-codes-devel libwmf-devel
git clone git://git.gnome.org/babl
cd babl && crossroad configure && make && make install && cd ..
git clone git://git.gnome.org/gegl
cd gegl && crossroad configure && make && make install && cd ..
git clone git://git.gnome.org/gexiv2
cd gexiv2 && crossroad configure && make && make install && cd ..
git clone https://github.com/mypaint/libmypaint.git
cd libmypaint && crossroad configure && make && make install && cd ..
git clone git://git.gnome.org/gimp
cd gimp && crossroad configure --disable-python && make && make install

If you copy these into a file, say build-gimp.sh, you can run it at once with:

$ crossroad w64 gimp-build --run build-gimp.sh -n

I hope you enjoyed this tutorial on how to build GIMP with crossroad!

Note: if you like what I do, consider sponsoring our animation film (CC by-sa), made with GIMP: ZeMarmot! We fund it with Patreon (USD) and Tipeee (EUR).

Writing GStreamer plugins and elements in Rust

This weekend we had the GStreamer Spring Hackfest 2016 in Thessaloniki, my new home town. As usual it was great meeting everybody in person again, having many useful and interesting discussions and seeing what everybody was working on. It seems like everybody was quite productive during these days!

Apart from the usual discussions, cleaning up some of our Bugzilla backlog and getting various patches reviewed, I was working with Luis de Bethencourt on writing a GStreamer plugin with a few elements in Rust. Our goal was to be able to read and write a file, i.e. implement something like the “cp” command around gst-launch-1.0 using just the new Rust elements, while trying to have as little code written in C as possible and ending up with a more-or-less general Rust API for writing more source and sink elements. That’s all finished, including support for seeking, and I also wrote a small HTTP source.

For the impatient, the code can be found here: https://github.com/sdroege/rsplugin

Why Rust?

Now you might wonder why you would want to go through all the trouble of creating a bridge between GStreamer in C and Rust for writing elements. Other people have written much better texts about the advantages of Rust, which you might want to refer to if you’re interested: The introduction of the Rust documentation, or this free O’Reilly book.

But for myself the main reasons are that

  1. C is a rather antique and inconvenient language compared to more modern languages, and Rust provides a lot of features from higher-level languages while not introducing all the overhead that comes with them elsewhere, and
  2. even more important are the safety guarantees of the language, including the strong type system and the borrow checker, which make whole categories of bugs much less likely to happen. That saves you time during development, but it also saves your users from having their applications crash on them in the best case, or their precious data lost or stolen in the worst.

Rust is not the panacea for everything, and not even the perfect programming language for every problem, but I believe it has a lot of potential in various areas, including multimedia software where you have to handle lots of complex formats coming from untrusted sources and still need to provide high performance.

I’m not going to write much about the details of the language; for that, just refer to the website and its very well written documentation. But although it is a very young language not developed by a Fortune 500 company (it is developed by Mozilla and many volunteers), it is already used in production at places like Dropbox and in Firefox (their MP4 demuxer, and soon the URL parser). It is also used by Mozilla and Samsung for their experimental, next-generation browser engine Servo.

The Code

Now let’s talk a bit about how it all looks like. Apart from Rust’s standard library (for all the basics and file IO), what we also use are the url crate (Rust’s term for libraries) for parsing and constructing URLs, and the HTTP server/client crate called hyper.

On the C side we have all the boilerplate code for initializing a GStreamer plugin (plugin.c), which directly calls into Rust code (lib.rs), which in turn calls back into C (plugin.c) to register the actual GStreamer elements. The GStreamer elements themselves then have an implementation written in C (rssource.c and rssink.c): a normal GObject subclass of GstBaseSrc or GstBaseSink that, instead of doing the actual work in C, just calls into Rust code again. For that to work, some metadata is passed to the GObject class registration, including a function pointer to a Rust function that creates a new instance of the “machinery” of the element. That machinery then implements the Source or Sink traits (similar to interfaces) in Rust (rssource.rs and rssink.rs):

pub trait Source: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn is_seekable(&self) -> bool;
    fn get_size(&self) -> u64;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    // Error type assumed to be GstFlowReturn, matching render() in the Sink trait below.
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, GstFlowReturn>;
    fn do_seek(&mut self, start: u64, stop: u64) -> bool;
}

pub trait Sink: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    fn render(&mut self, data: &[u8]) -> GstFlowReturn;
}

And these traits (plus a constructor) are in the end all that has to be implemented in Rust for the elements (rsfilesrc.rs, rsfilesink.rs and rshttpsrc.rs).
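
To give an idea of what such an implementation looks like, here is a minimal sketch of a file source implementing the Source trait above. It is not the actual rsfilesrc.rs code: the FileSource name, the naive file:// URI handling and the GstFlowReturn::Error variant are assumptions made for brevity.

use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

pub struct FileSource {
    uri: Option<String>,
    file: Option<File>,
}

impl FileSource {
    pub fn new() -> FileSource {
        FileSource { uri: None, file: None }
    }

    // Very naive file:// handling, good enough for a sketch.
    fn path(&self) -> Option<&str> {
        self.uri.as_ref().and_then(|u| u.strip_prefix("file://"))
    }
}

impl Source for FileSource {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool {
        match uri_str {
            None => {
                self.uri = None;
                true
            }
            Some(s) if s.starts_with("file://") => {
                self.uri = Some(s.to_string());
                true
            }
            Some(_) => false,
        }
    }

    fn get_uri(&self) -> Option<String> {
        self.uri.clone()
    }

    fn is_seekable(&self) -> bool {
        true
    }

    fn get_size(&self) -> u64 {
        self.path()
            .and_then(|p| std::fs::metadata(p).ok())
            .map(|m| m.len())
            .unwrap_or(0)
    }

    fn start(&mut self) -> bool {
        let path = match self.path() {
            Some(p) => p.to_string(),
            None => return false,
        };
        self.file = File::open(&path).ok();
        self.file.is_some()
    }

    fn stop(&mut self) -> bool {
        self.file = None;
        true
    }

    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, GstFlowReturn> {
        let file = match self.file.as_mut() {
            Some(f) => f,
            None => return Err(GstFlowReturn::Error),
        };
        if file.seek(SeekFrom::Start(offset)).is_err() {
            return Err(GstFlowReturn::Error);
        }
        file.read(data).map_err(|_| GstFlowReturn::Error)
    }

    fn do_seek(&mut self, _start: u64, _stop: u64) -> bool {
        // Nothing to do up front: fill() already seeks to the requested offset.
        true
    }
}

The constructor mentioned earlier would then just be a small function returning a boxed FileSource for the C glue to call through the registered function pointer.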

If you look at the code, it’s all still a bit rough around the edges and missing many features (like actual error reporting back to GStreamer instead of printing to stderr), but it already works, and the actual implementations of the elements in Rust are rather simple and fun. Even the interfacing with C code is quite convenient at the Rust level.

How to test it?

First of all you need to get Rust and Cargo; check the Rust website or your Linux distribution for details. This was all tested with the stable 1.8 release. You also need GStreamer plus its development files; any recent 1.x version should work.

# clone GIT repository
git clone https://github.com/sdroege/rsplugin
# build it
cd rsplugin
cargo build
# tell GStreamer that there are new plugins in this path
export GST_PLUGIN_PATH=`pwd`
# this dumps the Cargo.toml file to stdout, doing all file IO from Rust
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! fakesink dump=1
# this dumps the Rust website to stdout, using the Rust HTTP library hyper
gst-launch-1.0 rshttpsrc uri=https://www.rust-lang.org ! fakesink dump=1
# this basically implements the "cp" command and copies Cargo.toml to a new file called test
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! rsfilesink uri=file://`pwd`/test
# this plays Big Buck Bunny via HTTP using rshttpsrc (it has a higher rank than any
# other GStreamer HTTP source currently and is as such used for HTTP URIs)
gst-play-1.0 http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_480p_h264.mov

What next?

The three implemented elements are not too interesting and were mostly an experiment to see how far we can get in a weekend. But the HTTP source for example, once more features are implemented, could become useful in the long term.

Also, in my opinion, it would make sense to consider using Rust for some categories of elements like parsers, demuxers and muxers, as traditionally these elements are rather complicated and have the biggest exposure to arbitrary data coming from untrusted sources.

And maybe in the very long term, GStreamer or parts of it can be rewritten in Rust. But that’s a lot of effort, so let’s go step by step to see if it’s actually worthwhile and build some useful things on the way there already.

For myself, the next step is going to be to implement something like GStreamer’s caps system in Rust (of which I already have the start of an implementation), which will also be necessary for any elements that handle specific data and not just an arbitrary stream of bytes, and it could probably be also useful for other applications independent of GStreamer.

Issues

The main problem with the current code is that all IO is synchronous. That is, if opening the file, creating a connection, or reading data from the network takes a bit longer, it will block until a timeout happens or the operation finishes one way or another.

Rust currently has no support for non-blocking IO in its standard library, and also no support for asynchronous IO. The latter is being discussed in this RFC though, but it probably will take a while until we see actual results.

While there are libraries for all of this, having to depend on an external library is not great, as code using different async IO libraries won’t compose well. Until this lands, Rust is missing one big and important feature that many applications will definitely need, and its absence might hinder adoption of the language.

Django and PostgreSQL composite types

PostgreSQL has this nifty feature called composite types that you can use to create your own types from the built-in PostgreSQL types. It’s a bit like hstore, only structured, which makes it great for structured data that you might reuse multiple times in a model, like addresses.

Unfortunately, to date they have been pretty much a pain to use in Django. There were some older implementations for versions of Django before 1.7, but they tended to do things like create surprise new objects in the namespace, not be migratable, and require a connection to the DB at all times (i.e. even during your build).

Anyway, after reading a bunch of their implementations and then the Django source code I wrote django-postgres-composite-types.

Install with:

pip install django-postgres-composite-types

Then you can define a composite type declaratively:

from django.db import models
from postgres_composite_type import CompositeType


class Address(CompositeType):
    """An address."""

    address_1 = models.CharField(max_length=255)
    address_2 = models.CharField(max_length=255)

    suburb = models.CharField(max_length=50)
    state = models.CharField(max_length=50)

    postcode = models.CharField(max_length=10)
    country = models.CharField(max_length=50)

    class Meta:
        db_type = 'x_address'  # Required

And use it in a model:

class Person(models.Model):
    """A person."""

    address = Address.Field()
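
Values then behave like plain Python objects. Here is a minimal usage sketch, assuming the Address and Person models above and that composite type instances are constructed with keyword arguments (the values are made up):

home = Address(
    address_1='12 Example Street',
    address_2='',
    suburb='Fitzroy',
    state='VIC',
    postcode='3065',
    country='Australia',
)

person = Person(address=home)
person.save()

# The composite value round-trips as an Address instance:
person = Person.objects.get(pk=person.pk)
print(person.address.suburb)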

The field should provide all of the things you need, including formfield() and friends, and you can even inherit from this field to extend it in your own way:

class AddressField(Address.Field):
    def __init__(self, in_australia=True, **kwargs):
        self.in_australia = in_australia

        super().__init__(**kwargs)

Finally, to set up the DB there is a migration operation that will create the type, which you can add to your migration:

import address
from django.db import migrations


class Migration(migrations.Migration):

    operations = [
        # Registers the type
        address.Address.Operation(),
        migrations.AddField(
            model_name='person',
            name='address',
            field=address.Address.Field(blank=True, null=True),
        ),
    ]

It’s not smart enough to add the operation itself (can you do that?), nor would it be smart enough to write the operations needed to alter a type. That would be a pretty cool trick. But it’s useful functionality all the same, especially when the alternative is creating lots of 1:1 models that are hard to work with and hard to garbage collect.

It’s still pretty early days, so the APIs are subject to change. PRs accepted of course.

May 21, 2016

GNOME Calendar and Drag n’ Drop

One of the most intuitive ways to interact with an application is reproducing what we do in real life. Applications try to shorten the learning curve by using metaphors of real-world objects.

We all know what GNOME Calendar is: a virtual calendar application. As such, using real-life calendars as a reference for its UI is mostly a good thing, except that we’d probably have an annoying time moving events around. In this regard, technology can improve on what we do.

And that’s why GNOME Calendar now supports dragging and dropping events around.
Aaand… our traditional video:

Oh, did you notice? Year view is much improved thanks to the awesome work of Isaque, who proved himself an amazing contributor and a good friend. And don’t forget, this summer we have an intern, Vamsi, working on the long-wanted Week View. Calendar, just like Nautilus, is at its best moment.

GSoC 2016

Hello everybody! My name’s Rares Visalom and I’m a second-year student at “Politehnica” University of Bucharest, currently pursuing a bachelor’s degree in Computer Science.

This summer I have the opportunity to contribute to GNOME as a GSoC 2016 student. First of all I’d like to congratulate all the other students that were accepted. I’m sure all of us will give our best in order to actively contribute to the open source community.

My proposal involves improving the user experience for both new and existing users of Polari, the IRC client developed by GNOME. I’m very excited and I can’t wait to begin the actual implementation of the ideas. With the help of Florian Muellner and Bastian Ilso, who will mentor me through the process of developing these ideas, I plan on adding user-friendly features that enhance Polari’s onboarding, making it easier for people to use our application.

I will regularly post updates on my progress throughout the summer so if you’re interested, this is the place to find out more about my work :).


May 20, 2016

Your project's RCS history affects ease of contribution (or: don't squash PRs)

Github recently introduced the option to squash commits on merge, and even before then several projects requested that contributors squash their commits after review but before merge. This is a terrible idea that makes it more difficult for people to contribute to projects.

I'm spending today working on reworking some code to integrate with a new feature that was just integrated into Kubernetes. The PR in question was absolutely fine, but just before it was merged the entire commit history was squashed down to a single commit at the request of the reviewer. This single commit contains type declarations, the functionality itself, the integration of that functionality into the scheduler, the client code and a large pile of autogenerated code.

I've got some familiarity with Kubernetes, but even then this commit is difficult for me to read. It doesn't tell a story. I can't see its growth. Looking at a single hunk of this diff doesn't tell me whether it's infrastructural or part of the integration. Given time I can (and have) figured it out, but it's an unnecessary waste of effort that could have gone towards something else. For someone who's less used to working on large projects, it'd be even worse. I'm paid to deal with this. For someone who isn't, the probability that they'll give up and do something else entirely is even greater.

I don't want to pick on Kubernetes here - the fact that this Github feature exists makes it clear that a lot of people feel that this kind of merge is a good idea. And there are certainly cases where squashing commits makes sense. Commits that add broken code and which are immediately followed by a series of "Make this work" commits also impair readability and distract from the narrative that your RCS history should present, and Github presents this feature as a way to get rid of them. But that ends up being a false dichotomy. A history that looks like "Commit", "Revert Commit", "Revert Revert Commit", "Fix broken revert", "Revert fix broken revert" is a bad history, as is a history that looks like "Add 20,000 line feature A", "Add 20,000 line feature B".

When you're crafting commits for merge, think about your commit history as a textbook. Start with the building blocks of your feature and make them one commit. Build your functionality on top of them in another. Tie that functionality into the core project and make another commit. Add client support. Add docs. Include your tests. Allow someone to follow the growth of your feature over time, with each commit being a chapter of that story. And never, ever, put autogenerated code in the same commit as an actual functional change.

People can't contribute to your project unless they can understand your code. Writing clear, well commented code is a big part of that. But so is showing the evolution of your features in an understandable way. Make sure your RCS history shows that, otherwise people will go and find another project that doesn't make them feel frustrated.

(Edit to add: Sarah Sharp wrote on the same topic a couple of years ago)


May 19, 2016

“SmartEco” or “Extreme Eco” projector lamp power saving modes are a trap

Here are some findings I’ve been meaning to post for a while.

A bit over a year ago, I fulfilled a decade-long dream of owning a good projector for movies, instead of some silly monitor with a diagonal measured in “inches”. My lifestyle very rarely allows me to watch movies (or series*), so when I decide to watch something, it needs to have a rating over 90% on RottenTomatoes, it has to be watched with a bunch of friends, and it needs to be a top-notch audio-visual experience. I already had a surround sound system for over a decade, so the projector was the only missing part of the puzzle.

After about six months of research and agonizing over projector choices, I narrowed it down to the infamous BenQ W1070, which uses conventional projection lamps (Aaxa’s LED projectors were not competitively priced at that time, costing more for a lower resolution and less connectivity):

2015-02-07--15.30.52

First power-on, with David Revoy‘s beautiful artwork as my wallpaper

In the process of picking up the BenQ W1070, I compared it to the Acer H6510BD and others, and this particular question came up: how realistic are manufacturers’ claims about their dynamic “lamp life saving” features?

The answer is:

bullshit - ten points from gryffindor

For starters, I asked Acer to clarify what their “Extreme Eco” feature really did, and to their credit they answered truthfully (emphasis mine):

Acer ExtremeEco technology reduces the lamp power to 30% enabling up to 70% power savings when there is no input signal, extending the lifespan of the lamp up to 7000 hours and reducing operating costs. 4,000 Hours (Standard), 5,000 Hours (ECO), 7,000 Hours (ExtremeEco)*
*: Lamp Life of ExtremeEco mode is based on an average usage cycle of 45 minutes ECO mode plus 75 minutes ExtremeEco (30% lamp power) mode

Yeah. “When there’s no input signal”. What kind of fool leaves a home theater projector turned on while unused for extended periods of time and thinks it won’t destroy the lamp’s life?

Now you’re wondering if Acer is the only one trying to be clever about marketing and if other manufacturers actually have better technology for lamp savings. After all, BenQ is very proud to say this (direct quote from their website):

By detecting the input content to determine the amount of brightness required for optimum color and contrast performance, the SmartEco Mode is able to reduce lamp power while delivering the finest image quality. No compromise!

Bundled with this chart, no less:

benq-smarteco-chart-nonsense

Note: “LampSave” mode is a different thing, it’s either the manual screen blanking button, or when SmartEco detects no more input signal at all and turns down the lamp, like Acer’s “ExtremeEco”!

So, perhaps the widely acclaimed BenQ W1070 projector’s “SmartEco” mode is a truly smart psychovisual image analysis system that continuously varies the intensity to make darker scenes go extra-dark, darker than the “constant” Eco Mode? Glad you asked, because I have the numbers to prove that’s not the case either.

I measured the BenQ W1070’s power consumption through a 3×6 conditions experiment:

  • Power scheme:
    • Normal mode
    • Eco mode
    • SmartEco mode
  • Content type:
    • A photo of the Hong Kong skyline at night
    • A static view of the TED.com website, split-screen at 50% of the width, on top of the Hong Kong image
    • A static view of the TED.com website, maximized
    • GIMP 2.10, maximized, with a pitch-black image and a minimum amount of UI controls (menubar, rulers, scrollbars, statusbar) as bright elements
    • That same 100% pitch-black static image, fullscreened
    • Playing Sintel and measuring the total electricity consumption over 15 minutes, in kWh

Here are some of the “content type” conditions above, so you have an idea what the static images looked like:

BenQ-W1070-static-benchmark-images

Here is a visual summary of the results for the “static images” conditions:

And here are the raw numbers:

                                          Normal mode   Eco mode     SmartEco mode
Static H.K. night skyline photo           264 watts     198 watts    273 watts
Static TED.com split-screen               264 watts     198 watts    276 watts
Static TED.com maximized                  264 watts     198 watts    276 watts
Static GIMP (almost pitch-black)          264 watts     198 watts    256 watts
Static pitch-black image (fullscreen)     264 watts     198 watts    198 watts
15 minutes movie (Sintel)                 0.066 kWh     0.0495 kWh   0.06x kWh

Note: for the Sintel benchmark, the “0.06” figure lacks one decimal compared to the two other conditions (Normal and Eco modes). This is due to display limitations of my measurement instrument; other values were calculated (as they are constant) and confirmed with the measurement instrument (ex: Eco mode was measured to be at 0.04x kWh). It is thus possible that SmartEco’s 0.06 kWh was anywhere from 0.060 to 0.069 kWh.

As you can see, BenQ’s SmartEco provided no significant advantage:

  • With normal/bright still images, it increases consumption by as much as 4.5% compared to the full-blast “normal mode”; it reduces consumption by about 3% when displaying very dark images, and does not under any circumstances (even a 100% pitch-black screen) fall below the constant power consumption of the regular “Eco mode”.
  • In practice, while watching movies that combine light and dark scenes, “SmartEco” mode might (or might not) reduce energy consumption compared to “normal mode”, by a small amount.
  • SmartEco will always be outperformed by the regular “eco mode”, which yields a constant 25% power saving.

Out of curiosity, I also tested all the other manual image settings in addition to “eco mode”, including: brightness, contrast, color temperature, “Brilliant Color” (on/off); none of these settings had any effect on power consumption.

My conclusion and recommendation:

  • “Smart” eco modes are dumb and will eat your lamps.
  • With current technology, home theater projectors should be locked down to a constant lamp economy mode to maximize longevity and gain other benefits; after all, you’re supposed to be sitting in a pitch-black room to maximize contrast and quality with any projector anyway, and the lower lamp power setting will also reduce heat, and thus cooling fan noise.

Nonetheless, the W1070 is a very nice projector.


*: When I tell people I don’t watch any series at all, including [insert série-du-jour here], I typically get reactions like this:

you don't watch game of thrones

LAS, hosted by GNOME

Sri and many members of our community have spearheaded a wonderful new conference named Libre Application Summit. It’s hosted by the GNOME Foundation and has aspirations to bring together a wide spectrum of contributors.

This conference is meant to bridge a gap in Free Software communication. We need a place for application and game developers, desktop developers, systems implementers, distributions, hardware producers, and driver developers to communicate and solve problems face to face. There are so many moving parts to a modern operating system that it is very rare to have all of these passionate people in the same room.

This will be a great place to learn about how to contribute to these technologies as well. It seems likely that I’ll do tutorial workshops and other training for participants at LAS.

I’m very excited to see where this conference goes and hope to see you in Portland come September!

May 17, 2016

Zulip & Good Code Review

As Changeset Consulting I've recently done some tech writing work for the Zulip open source project (check out my pull requests for the nitty-gritty). And I've gotten to work with Tim Abbott, Zulip's maintainer, including taking his feedback and course-correction. Often, in the software industry, we pretend or assume a dichotomy between brutal clarity and kind vagueness, especially when it comes to criticizing other people's work. Getting to work with Tim has been a welcome and consistent reminder that you can be straightforward and respectful when you need to give negative feedback.

Here's an example: Tim reviewing a pull request. Check out how he starts by saying it looks good, and then offers suggestions on how to break the patch into logical commits -- and how he makes sure to explain why we want to do that (including a link to a document that explains the reasoning and customs in more detail). Tim also tells the code author that it's fine to deviate from his specific suggestions, but gives him some guardrails. Since rebasing can be tricky for people who haven't done it much, he even suggests some git commands and tools that'll make things easier, and offers to clarify if needed. And Tim wraps it up with a thank-you and a quick explanation about why he's being so rigorous in reviewing this particular patch (security considerations).

It's thoughtful, kind, clear, and collaborative. It's the seventh comment Tim's offered on this pull request, as the author's iteratively improved it, and it shows no hint of impatience. Good stuff.

If this has piqued your interest in Zulip, you might also like the roadmap and README. Also, Zulip's holding a sprint at PyCon US, June 2-5, in Portland, Oregon.

News on public transit routing in GNOME Maps

So, this time there aren't any fancy new screenshots to showcase…

I've been updating the otp-updater project, adding support for storing a configuration so that the script can be run without specifying the list of GTFS feeds and the path to the OpenTripPlanner wrapper script (for re-generating graphs) straight on the command line.
I also added a little README and a sample configuration and feed list, so hopefully it should be a little easier should someone want to try this out in conjunction with the transit routing WIP branch: https://git.gnome.org/browse/gnome-maps/log/?h=wip/mlundblad/transit-routing

Meanwhile, Andreas Nilsson has been busy conducting user interviews and has compiled a set of user stories.

https://wiki.gnome.org/Design/Apps/Maps/PublicTransportation/Interviews
https://wiki.gnome.org/Design/Apps/Maps/PublicTransportation

Looking forward to seeing some nice mockups to sink my teeth into now :-)

GSoC 2016 Introduction

Hi everyone,

My name is Pranav Ganorkar and I am a third year student at V.E.S. Institute of Technology, Mumbai, pursuing a Bachelor’s degree in Computer Engineering.

My very first introduction to Linux was in eighth grade, when I booted a Fedora 13 Live CD on my desktop (yes, CDs were still popular back then due to the low penetration of the Internet). I was very impressed by its user interface and started hacking on it. It was my first introduction to the GNOME desktop. At that time, in high school, I was mostly interested in installing various distros and trying out new desktop environments. What a wonderful life back then, wasn’t it? :)

I started following GNOME development when I was in my first year of Engineering. That was when I learnt about the GTK+ toolkit and tried to make some simple applications using it. It’s been almost three years now that I have been using GNOME as my primary desktop environment on Arch Linux. I started following GNOME Logs development in January 2016, got acquainted with its source code, started hacking on it, and was able to make my first contribution in March 2016 with a lot of help from David, my mentor.

Now, some details regarding my GSoC project. I will be working on improving the search feature in GNOME Logs under the mentorship of David King. Currently, searching in GNOME Logs is restricted to the front end: the search results are not fetched from the sd-journal API (the API GNOME Logs uses to fetch entries from the journal) but obtained by running the GtkListBox filter function over the already fetched results. So my first task is to move searching to the back end, so that when the user types text in the search bar, the results are actually fetched through the sd-journal API.
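
For context, this is roughly what fetching filtered entries directly through the sd-journal C API looks like. The snippet is only an illustration of the API, not GNOME Logs code; the unit name and search text are arbitrary examples, and sd_journal_add_match() does exact field matching, so substring search over MESSAGE still has to be done by hand.

/* Build with: gcc search.c $(pkg-config --cflags --libs libsystemd) */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <systemd/sd-journal.h>

int main(void)
{
    sd_journal *j;
    const void *data;
    size_t length;
    const char *needle = "error";   /* user-entered search text (example) */

    if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
        return 1;

    /* Restrict results to one journal field value, e.g. a single unit. */
    sd_journal_add_match(j, "_SYSTEMD_UNIT=NetworkManager.service", 0);

    SD_JOURNAL_FOREACH(j) {
        if (sd_journal_get_data(j, "MESSAGE", &data, &length) < 0)
            continue;
        /* data is "MESSAGE=..." and is not NUL-terminated. */
        if (memmem(data, length, needle, strlen(needle)) != NULL)
            printf("%.*s\n", (int) length, (const char *) data);
    }

    sd_journal_close(j);
    return 0;
}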

I will also be improving the GUI for the search interface. Currently, GNOME Logs does not offer any graphical way to restrict which journal fields the user-entered text is matched against. So, I will be adding a drop-down menu beside the search bar to let the user select the journal fields the search results should be restricted to. I have made a GUI mockup for it, which looks like this:

gnome-logs-gsoc

Also, the advanced options menu button will open a dialog box with advanced journalctl-style filtering options like “since” and “until”. The user can also specify the type of search to perform: exact or substring. Currently, the GUI mockup for the advanced search dialog box looks like this:

advanced-search-dialog-box

Along with the above tasks, I will also be writing a search provider for GNOME Logs. So, GNOME Shell will be getting a new search provider. Yay:)

Finally, I will test all of the functionality implemented above. I hope that after the completion of my project, GNOME Logs will be able to match the journalctl tool in terms of searching and filtering functionality. This is a short summary of my GSoC project.

Looking forward to an exciting summer and to learning a lot in the process about GNOME development and its vibrant community :)

Also, feel free to comment about what you think of the current GUI mockups.

See you in next post. Until then,

Happy Coding !


May 16, 2016

Web comics using a Bash script

Over at Linux Journal, I recently wrote a short article about reading web comics using a Bash script.

I follow several web comics. Every morning, I used to open my browser and check out each comic's web site to read that day's comic. That method worked well when I read only a few web comics, but it became a pain to stay current when I followed more than about ten comics.

These days, I read around twenty web comics: Darths and Droids, Ctrl+Alt+Del, Questionable Content, and a bunch of others. It takes a lot of time to open each website separately just to read a Web comic. I suppose I could bookmark the web comics, but that's twenty browser tabs to open every morning. Whether I open them all at once or one at a time, that seems like too much time for me.

I figured there had to be a better way, a simpler way for me to read all of my web comics at once. So I wrote a Bash script that automatically collects my web comics and puts them on a single page on a personal web server in my home. Now, I just open my web browser to my private web server, and read all my comics at once.

The Linux Journal article walks you through how to build the Bash script using wget, xmllint and ImageMagick, and provides the actual script as an example.
image: Linux Journal
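
The gist of the approach fits in a few lines of Bash. This is not the Linux Journal script, just a sketch of the idea; the comic URL and the XPath expression are hypothetical and differ for every site:

#!/bin/bash
# Sketch: fetch one comic page, pull the comic image URL out of the HTML,
# download the image, and make a thumbnail for the collection page.
# Assumes the src attribute is an absolute URL.
url='http://www.example-comic.com/'               # hypothetical comic site
xpath='string(//img[@id="comic"]/@src)'           # hypothetical XPath to the comic <img>

img=$(wget -q -O - "$url" | xmllint --html --xpath "$xpath" - 2>/dev/null)
wget -q -O comic.png "$img"
convert comic.png -resize 640 thumb-comic.png     # ImageMagick thumbnail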

NetworkManager and WiFi Scans

Recently Dave Täht wrote a blog post investigating latency and WiFi scanning and came across NetworkManager’s periodic scan behavior.  When a WiFi device scans it obviously must change from its current radio channel to other channels and wait for a short amount of time listening for beacons from access points.  That means it’s not passing your traffic.

With a bad driver it can sometimes take 20+ seconds and all your traffic gets dropped on the floor.

With a good driver scanning takes only a few seconds and the driver breaks the scan into chunks, returning to the associated access point’s channel periodically to handle pending traffic.  Even with a good driver, latency-critical applications like VOIP or gaming will clearly suffer while the WiFi device is listening on another channel.

So why does NetworkManager periodically scan for WiFi access points?

Roaming

Whenever your WiFi network has multiple access points with the same SSID (or a dual-band AP with a single SSID) you need roaming to maintain optimal connectivity and speed.  Jumping to a better AP requires that the device know what access points are available, which means doing a periodic scan like NetworkManager does every 2 minutes.  Without periodic scans, the driver must scan at precisely the worst moment: when the signal quality is bad, and data rates are low, and the risk of disconnecting is higher.

Enterprise WiFi setups make the roaming problem much worse because they often have tens or hundreds of access points in the network and because they typically use high-security 802.1x authentication with EAP.  Roaming with 802.1x introduces many more steps to the roaming process, each of which can fail the roaming attempt.  Strategies like pre-authentication and periodic scanning greatly reduce roaming errors and latency.

User responsiveness and Location awareness

The second reason for periodic scanning is to maintain a list of access points around you for presentation in user interfaces and for geolocation in browsers that support it.  Up until a couple years ago, most Linux WiFi applets displayed a drop-down list of access points that you could click on at any time.  Waiting for 5 to 15 seconds for a menu to populate or ‘nmcli dev wifi list’ to return would be annoying.

But with the proliferation of WiFi (often more than 30 or 40 if you live in a flat) those lists became less and less useful, so UIs like GNOME Shell moved to a separate window for WiFi lists.  This reduces the need for a constantly up-to-date WiFi list and thus for periodic scanning.

To help support these interaction models and click-to-scan behaviors like Mac OS X or Maemo, NetworkManager long ago added a D-Bus API method to request an out-of-band WiFi scan.  While it’s pretty trivial to use this API to initiate geolocation or to refresh the WiFi list based on specific user actions, I’m not aware of any clients using it well.  GNOME Shell only requests scans when the network list is empty and plasma-nm only does so when the user clicks a button.  Instead, UIs should simply request scans periodically while the WiFi list is shown, removing the need for yet another click.

WHAT TO DO

If you don’t care about roaming, and I’m assuming David doesn’t, then NetworkManager offers a simple solution: lock your WiFi connection profile to the BSSID of your access point.  When you do this, NetworkManager understands that you do not want to roam and will disable the periodic scanning behavior.  Explicitly requested scans are still allowed.
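
With nmcli, for example, pinning a profile to an access point looks something like this (the connection name and BSSID are placeholders; nmcli connection show and nmcli device wifi list will tell you yours):

nmcli connection modify "Home WiFi" 802-11-wireless.bssid AA:BB:CC:DD:EE:FF
nmcli connection up "Home WiFi"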

You can also advocate that your favorite WiFi interface add support for NetworkManager’s RequestScan() API method and begin requesting periodic scans when WiFi lists are shown or when your browser uses geolocation.  When most do this, perhaps NetworkManager could be less aggressive with its own periodic scans, or perhaps remove them altogether in favor of a more general solution.

That general solution might involve disabling periodic roaming when the signal strength is extremely good and start scanning more aggressively when signal strength drops over a threshold.  But signal strength drops for many reasons like turning on a microwave, closing doors, turning on Bluetooth, or even walking to the next room, and triggering a scan then still interrupts your VOIP call or low ping headshot.  This also doesn’t help people who aren’t close to their access point, leading to the same scanning problem David talks about if you’re in the basement but not if you’re in the bedroom.

Another idea would be to disable periodic scanning when latency critical applications are active, but this requires that these applications consistently set the IPv4 TOS field or use the SO_PRIORITY socket option.  Few do so.  This also requires visibility into kernel mac80211 queue depths and would not work for proprietary or non-mac80211-based drivers.  But if all the pieces fell into place on the kernel side, NetworkManager could definitely do this while waiting for applications and drivers to catch up.

If you’ve got other ideas, feel free to propose them.

Bug Fix #1

This post is about how to submit bugs and fix them to improve the calendar.

I would like to speak about my very first bug fix which wouldn’t have been possible at all without the awesome GNOME community.

The bug link- https://bugzilla.gnome.org/show_bug.cgi?id=763160

It started out as a bug about a vague extra click that was required before the keyboard navigation keys would actually work. It turned out the problem was a single line of code that had to be added to a property in the .ui file. More on the UI later, but in short, I just needed to add a property called “can_focus” to a window.

This bug was simple, but it was more of an introduction to how to submit patches.

To submit a patch:

First, checkout to a new branch using:

git checkout -b <name>

Fix the bug, then add and commit it with a proper commit message; these are all regular git commands.
Next comes the awesome “git bz”, which submits your patch directly to the Bugzilla page with a simple command:

git bz attach <bug number> <commit>

After that, pat yourself on the back and wait for the maintainers to review your patch and commit it to the main branch.

Is all this fuss necessary? Maybe, maybe not, but once you get the awesome feeling of “open sourcing”, which can’t possibly be explained here and deserves its own article, it’s totally worth it.

Hold tight for the next blog😀

To do after this blog: Article about Calendar UI and Open-Source feel.


GNOME Calendar-||

To get started with GNOME Calendar, first get jhbuild running. The GNOME website has a very good page on how to do that, and it should work fine :)
https://wiki.gnome.org/Newcomers/BuildGnome

After getting jhbuild, the command to build the calendar is

jhbuild build --nodeps gnome-calendar

The --nodeps flag is there to ignore system dependencies. I was getting an error in “libsystemd-journal”, so to get around that I used --nodeps.

This will clone about 47 repositories and build each product. You can modify the files in the folder where they are cloned, which would be ~/checkout/gnome

To make changes and run, first open up the folder of the product,

subl ~/checkout/gnome/gnome-calendar

My default text editor is Sublime Text, so I just opened up the folder in my editor.
That’s about it😀 Now get started with fixing bugs. After making changes, just save the file and build only Calendar again; there is no need to build all the dependencies again. To build the product and run it, I use the following commands after moving into the gnome-calendar folder:

cd ~/checkout/gnome/gnome-calendar
jhbuild make
jhbuild run src/gnome-calendar

Now enjoy the glory of making changes in the product😀
Next up: Fixing bugs and submitting patches:)


GSoC 2016: Introduction

My name is Gabriel Ivașcu and I am a third year student at University Politehnica of Bucharest pursuing a Bachelor’s degree in Computer Science.

When I joined university and got introduced to Linux, I immediately became an avid fan of free and open source software, and my passion has been growing ever since. I’ve been using GNOME under Fedora as my standard desktop environment for about two years now.

I started contributing to GNOME in October 2015 with the help of my colleague and friend, Iulian Radu, a former participant of GSoC 2015 program and also a current participant in the 2016 edition. I worked mainly on Nibbles, Iulian’s 2015 project, and, later in February, we also began contributing to Web (a.k.a. Epiphany) under the guidance of Michael Catanzaro, our current mentor, who was kind enough to accept mentoring two projects this year.

Regarding my project, the ultimate goal is to introduce a session sync feature to GNOME Web. Session sync is a feature already present in many other popular web browsers, such as Firefox, Chrome and Safari, but one that Epiphany lacks. As stated on the ideas page, the aim is to provide a simple and easy way for users to synchronize their bookmarks, history, saved passwords and open tabs between multiple devices.

Between Mozilla and OwnCloud, together with my mentor we concluded that Mozilla is the better option, because it would allow users to seamlessly sync data with Firefox with only a Firefox account and no configuration required (OwnCloud would require users to set up their own server). However, Mozilla imposes a rather complex protocol that FxA clients need to use in order to communicate with the FxA server. Since implementing this protocol will require a considerable amount of work, I probably won’t have enough time left to finish all forms of sync (bookmarks, history, passwords, open tabs), so my mentor agreed that I should focus on getting the protocol working and only implement bookmarks sync for the moment. With a working form of bookmarks sync as a reference, adding the rest at a later time should be relatively easy.

Due to my exam session between May 28 and June 17, I intend to start my work earlier, during the Community Bonding period, so I can make up for the time I will be unavailable during my exams.

I am very excited to be part of this and I hope good things will come out. Looking forward to the summer!


GNOME.Asia Summit 2016

While I was going through news.gnome.org, a piece of news flashed on my screen stating that GNOME.Asia Summit 2016 was to be held in Delhi, India, which is where I live. At that time I was completely unaware of what happens at a summit, what it is meant for, and all those sorts of questions. But for once, I decided to at least attend it, if not participate. I told my mentors Jonas Danielsson and Damian Nohales about this news. Initially I was quite reluctant to participate, but Jonas pushed me a lot to present a lightning talk about my Outreachy project at the summit, and Damian too motivated me to go. Therefore I decided to submit a lightning talk proposal about my project: "Adding print route support in GNOME-Maps". Within a few days I got confirmation that my talk was accepted, along with approval of travel sponsorship.

I was all ready for being the part of the summit and was quite excited to meet people whom I have just known by their nicks on IRC. The summit was held in Manav Rachna International University, Faridabad (India).

Day 1 consisted of workshops. The first session was divided based upon the different ways in which one can contribute to GNOME (development, documentation, engagement), and the development track was branched further based on the programming languages one was interested in. Because of my interest in JavaScript, I joined Cosimo's team. The discussion turned out to be really helpful and cleared a lot of my doubts. Then there was a hands-on session on GStreamer run by Nirbheek and Arun, which was again an interesting one. Meanwhile I made many friends and chatted with them. Above all, the community felt very friendly, interesting and helpful.

Days 2 and 3 were filled with interesting talks by various speakers. It was my first experience delivering a talk at such a big summit. I was quite scared initially, but it all went well in the end. I was glad I was able to reach out to people and present my work clearly.

I was not aware of the Day 4 plan, i.e. the excursion trip, but when Shobha (the summit coordinator) asked me to come along, I happily agreed. It was a fun-filled trip to the Taj Mahal in Agra. I got to learn a lot about the cultures of different countries and made awesome friends.

This summit was very helpful in giving me a feel for the GNOME community. After all, it's the people who make it. I am thankful to the GNOME community for making me a part of it and of the summit. :) Looking forward to more such meets. :)

May 15, 2016

Reviving the GTK development blog

The GTK+ project has a development blog.

I know it may come as a shock to many of you, and you’d be completely justified in thinking that I just made that link up — but the truth is, the GTK+ project has had a development blog for a long while.

Sadly, the blog hasn’t been updated in five years — mostly around the time 3.0 was released, and the GTK+ website was revamped; even before that, the blog was mostly used for release announcements, which do not make for very interesting content.

Like many free and open source software projects, GTK+ has various venues of interaction between its contributors and its users: mailing lists, personal blogs, IRC, Stack Overflow, reddit, and many, many other channels. In this continuum of discussions it’s both easy to get lost and to lose the sense of having said things before — after all, if I repeat something at least three times a week on three different websites for three years, how can people still not know about it? Some users will only start catching up after three years, because their projects live on very different schedules than the GTK releases do; others will try to look for official channels, even if the free and open source software landscape has fragmented to such a degree that any venue can be made “official” by the simple fact of having a contributor on it; others again will look at the API reference as the only source of truth, forgetting, possibly, that if everything went into the API reference then it would cease to be useful as a reference.

The GTK+ development blog is not meant to be the only source for truth, or the only “official” channel; it’s meant to be a place for interesting content regarding the project, for developers using GTK+ or considering to use it; a place that acts as a hub to let interested people discover what’s up with GTK+ itself but that don’t want to subscribe to the commits list or join IRC.

From an editorial standpoint, I’d like the GTK+ development blog to be open to contribution from people contributing to GTK+; using GTK+; and newcomers to the GTK+ code base and their experiences. What’s a cool GTK+ feature that you worked on? How did GTK+ help you in writing your application or environment? How did you find contributing to GTK+ for the first time? If you want to write an article for the GTK+ blog talking about this, then feel free to reach out to me with an outline, and I’ll be happy to help you.

In the meantime, the first post in the This Week in GTK+ series has gone up; you’ll get a new post about it every Monday, and if you want to raise awareness on something that happened during a week, feel free to point it out on the wiki.

May 14, 2016

MythWeb Confusing Error Message

I'm finally configuring Kodi properly to watch over-the-air channels using this USB ATSC / DVB-T tuner card from Thinkpenguin. I hate taking time away, even on the weekends, from urgent Conservancy matters, but I've been doing by-hand recordings using VLC for my wife when she's at work, and I just need to present a good solution at home to showcase software freedom here.

So, I installed Debian testing to get a newer Kodi. I did discover this bug after it had already been closed, but had to pull util-linux out of unstable for the moment since it hadn't moved to testing.

Kodi works fine after installing it via apt, and since VDR is packaged for Debian, I tried getting VDR working instead of MythTV at first. I almost had it working but then I got this error:

VNSI-Error: cxSocket::read: read() error at 0/4
when trying to use kodi-pvr-vdr-vnsi (1.11.15-1) with vdr-plugin-vnsiserver (1:1.3.1) combined with vdr (2.2.0-5) and kodi (16.0+dfsg1-1). I tried briefly using the upstream plugins for both VDR and Kodi just to be sure I'd reproduce the same error, and I did, so I started by reporting this on the Kodi VDR backend forum. If I don't get a response there in a few weeks, I'll file it as a bug against kodi-pvr-vdr-vnsi instead.

For now, I gave up on VDR (which I rather liked: a very old-school, Unix-server-style modular way to build a PVR), and tried MythTV instead since it's also GPL'd. Since there weren't Debian packages, I followed this building-from-source tutorial on MythTV's website.

I didn't think I'd actually need to install MythWeb at first, because I am using Kodi primarily and am only using the MythTV backend to handle the tuner card. It was pretty odd that you can only configure MythTV via a Qt program called mythtv-setup, but OK, I did that, and it was relatively straightforward. Once I did, playback was working reasonably well using Kodi's MythTV plugin. (BTW, if you end up doing this, it's fine to test Kodi on its own in a window with a desktop environment running, but I had playback speed issues in that usage; they went away fully when I switched to a simple .xinitrc that just called kodi-standalone.)

The only problem left was that I noticed that I was not getting Event Information Table (EIT) data from the card to add to the Electronic Program Guide (EPG). Then I discovered that one must install MythWeb for the EIT data to make it through via the plugin for EPG in Kodi. Seems weird to me, but ok, I went to install MythWeb.

Oddly, this is where I had the most trouble, constantly receiving this error message:

PHP Fatal error: Call to a member function query_col() on null in /path/to/mythweb/modules/backend_log/init.php on line 15

The top net.search hit is likely to be this bug ticket, which points out that this is a horrible form of error message: it tells you the equivalent of “something is strange about the database configuration, but I'm not sure what”.

Indeed, I tried a litany of items which I found through lots of net.searching. Unfortunately I got a bit frantic, so I'm not sure which one solved my problem (it may well have been a combination of several :). I'm going to list them all here, in one place, so that future searchers for this problem will find all of them together:

  • Make sure the PHP load_path is coming through properly and includes the MythTV backend directory, ala:
    setenv include_path "/path/to/mythtv/share/mythtv/bindings/php/"
  • Make sure the mythtv user has a password set properly and is authorized in the database users table to have access from localhost, ::1, and 127.*, as it's sometimes unclear which way Apache might connect (see the example GRANT statements just after this list).
  • In Debian testing, make sure PHP 7 is definitely not in use by MythWeb (I am guessing it is incompatible), and make sure the right PHP5 MySql modules are installed. The MythWeb installation instructions do say:
    apache2-mpm-prefork php5 php5-mysql libhttp-date-perl
    And at one point, I somehow got php5-mysql installed and libapache2-mod-php5 without having php5 installed, which I think may have caused a problem.
  • Also, read this thread from the MythTV mailing list, as it is the most comprehensive discussion of this error.
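
For the database authorization item above, the grants might look something like this (a hypothetical example: mythconverg is MythTV's default database name, and the password must match what mythtv-setup stored in config.xml):

-- run in the mysql client as root; 'secret' is a placeholder
GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'127.0.0.1' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON mythconverg.* TO 'mythtv'@'::1' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;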

I did have to update the channel lineup with mythfilldatabase --dd-grab-all

That “My Ears are Burning” Thing Is Definitely Apocryphal

I've posted in the past about the Oracle vs. Google case. I'm for the moment sticking to my habit of only commenting when there is a clear court decision. Having been through litigation as the 30(b)(6) witness for Conservancy, I'm used to court testimony and why it often doesn't really matter in the long run. So much gets said by both parties in a court case that it's somewhat pointless to begin analyzing each individual move, unless it's for entertainment purposes only. (It's certainly as entertaining as most TV dramas, really, but I hope folks who are watching step-by-step admit to themselves that they're just engaged in entertainment, not actual work. :)

I saw a lot go by today with various people as witnesses in the case. About the only part that caught my attention was that Classpath was mentioned over and over again. But that's not for any real salient reason, only because I remember so distinctly, sitting in a little restaurant in New Orleans with RMS and Paul Fisher, talking about how we should name this yet-to-be-launched GNU project “$CLASSPATH”. My idea was that it was a shell variable that would expand to /usr/lib/java; so, in my estimation, it was a way to name the project “User Libraries for Java” without having to say the words. (For those of you who were still children in the 1990s, trademark aggression by Sun at the time on their word mark for “Java” was fierce; it was worse than the whole problem with the Unix trademark, which in turn led to the GNU name.)

But today, as I saw people all over the Internet quoting judges, lawyers and witnesses saying the word “Classpath” over and over again, it felt a bit weird to think that, almost 20 years ago, sitting in that restaurant, I could have said something other than Classpath and the key word in Court today might well have been whatever I'd said. Court cases are, as I said, dramatic, and as such, it felt a little like having my own name mentioned over and over again on the TV news or something. Indeed, I felt today like I had some really pointless, one-time-use superpower that I didn't know I had at the time. I now further have this feeling of: “darn, if I knew that was the one thing I did that would catch on this much, I'd have tried to do or say something more interesting”.

Naming new things, particularly those that have to replace other things that are non-Free, is really difficult, and, at least speaking for myself, I definitely can't tell when I suggest a name whether it is any good or not. I actually named another project, years later, that could theoretically get mentioned in this case: Replicant. At that time, I thought Replicant was a much more creative name than Classpath. When I named Classpath, I felt it was a somewhat obvious corollary to the “GNU'S Not Unix” line of thinking. I also recall distinctly that I really thought the name lost all its cleverness when the $ and the all-caps were dropped, but RMS and others insisted on that :).

Anyway, my final message today is to the court transcribers. I know from chatting with the court transcribers during my depositions in Conservancy's GPL enforcement cases that technical terminology is really a pain. I hope that the term I coined that got bandied about so much in today's testimony was not annoying to you all. Really, no one thinks about the transcribers in all this. If we're going to have lawsuits about this stuff, we should name stuff with the forethought of making their lives easier when the litigation begins. :)

May 13, 2016

Blutella, a Bluetooth speaker receiver

Quite some time ago, I was asked for a way to use the AV amplifier (which has a fair bunch of speakers connected to it) in our living-room that didn't require turning on the TV to choose a source.

I decided to try and solve this problem myself, as an exercise rather than a cost saving measure (there are good-quality Bluetooth receivers available for between 15 and 20€).

Introducing Blutella



I found this pot of Nutella in my travels (in Europe, smaller quantities are usually in a jar that looks like a mustard glass, with straight sides) and thought it would be a perfect receptacle for a CHIP, to allow streaming via Bluetooth to the amp. I wanted to make a nice how-to for you, dear reader, but best laid plans...

First, the materials:
  • a CHIP
  • jar of Nutella, and "Burnt umber" acrylic paint
  • micro-USB to USB-A and jack 3.5mm to RCA cables
  • Some white Sugru, for a nice finish around the cables
  • bit of foam, a Stanley knife, a CD marker

That's around 10€ in parts (cables always seem to be expensive), not including our salvaged Nutella jar, and the CHIP itself (9$ + shipping).

You'll start by painting the whole of the jar, on the inside, with the acrylic paint. Allow a couple of days to dry, it'll be quite thick.

So, about the plan that went awry. It turns out that the CHIP, with the cables plugged in, doesn't fit inside this 140g jar of Nutella. I also didn't make the holes exactly in the right place. The CHIP is tiny, but not small enough to rotate inside the jar without hitting the side, and the groove to screw the cap on also has only one position.

Anyway, I pierced two holes in the lid for the audio jack and the USB charging cable, stuffed the CHIP inside, and forced the lid on so it clipped on the jar's groove.

I had nice photos with foam I cut to hold the CHIP in place, but the finish isn't quite up to my standards. I guess that means I can attempt this again with a bigger jar ;)

The software

After flashing the CHIP with Debian, I logged in, and launched a script which I put together to avoid either long how-tos, or errors when I tried to reproduce the setup after a firmware update and reset.

The script for setting things up is in the CHIP-bluetooth-speaker repository. There are a few bugs due to drivers, and lack of integration, but this blog is the wrong place to track them, so check out the issues list.

Apart from those driver problems, I found the integration between PulseAudio and BlueZ pretty impressive, though I wish there was a way for the speaker to reconnect to the phone I streamed from when turned on again, as Bluetooth speakers and headsets do, removing one step from playing back audio.

May 12, 2016

Cockpit 0.106

Cockpit is the modern Linux admin interface. There’s a new release every week. Here are the highlights from this week’s 0.106 release.

Stable Cockpit Styles

One of the annoying things about CSS is that when you bring in stylesheets from multiple projects, they can conflict. You have to choose a nomenclature to namespace your CSS, or nest it appropriately.

We’re stabilizing the internals of Cockpit in the browser, so when folks write plugins, they can count on them working. To make that happen we had to namespace all our own Cockpit-specific CSS classes. Most of the styling used in Cockpit comes from Patternfly, and this change doesn’t affect those styles at all.

Documentation is on the wiki

Container Image Layers

Docker container image layers are now shown much more clearly. It should be easier to tell which is the base layer, and how the others are layered on top:

Image Layers

Try it out

Cockpit 0.106 is available now.

Convenience, security and freedom - can we pick all three?

Moxie, the lead developer of the Signal secure communication application, recently blogged on the tradeoffs between providing a supportable federated service and providing a compelling application that gains significant adoption. There's a set of perfectly reasonable arguments around that that I don't want to rehash - regardless of feelings on the benefits of federation in general, there's certainly an increase in engineering cost in providing a stable intra-server protocol that still allows for addition of new features, and the person leading a project gets to make the decision about whether that's a valid tradeoff.

One voiced complaint about Signal on Android is the fact that it depends on the Google Play Services. These are a collection of proprietary functions for integrating with Google-provided services, and Signal depends on them to provide a good out of band notification protocol to allow Signal to be notified when new messages arrive, even if the phone is otherwise in a power saving state. At the time this decision was made, there were no terribly good alternatives for Android. Even now, nobody's really demonstrated a free implementation that supports several million clients and has no negative impact on battery life, so if your aim is to write a secure messaging client that will be adopted by as many people as possible, keeping this dependency is entirely rational.

On the other hand, there are users for whom the decision not to install a Google root of trust on their phone is also entirely rational. I have no especially good reason to believe that Google will ever want to do something inappropriate with my phone or data, but it's certainly possible that they'll be compelled to do so against their will. The set of people who will ever actually face this problem is probably small, but it's probably also the set of people who benefit most from Signal in the first place.

(Even ignoring the dependency on Play Services, people may not find the official client sufficient - it's very difficult to write a single piece of software that satisfies all users, whether that be down to accessibility requirements, OS support or whatever. Slack may be great, but there are still people who choose to use Hipchat.)

This shouldn't be a problem. Signal is free software and anybody is free to modify it in any way they want to fit their needs, and as long as they don't break the protocol code in the process it'll carry on working with the existing Signal servers and allow communication with people who run the official client. Unfortunately, Moxie has indicated that he is not happy with forked versions of Signal using the official servers. Since Signal doesn't support federation, that means that users of forked versions will be unable to communicate with users of the official client.

This is awkward. Signal is deservedly popular. It provides strong security without being significantly more complicated than a traditional SMS client. In my social circle there are massively more users of Signal than of any other security app. If I transition to a fork of Signal, I'm no longer able to securely communicate with them unless they also install the fork. If the aim is to make secure communication ubiquitous, that's kind of a problem.

Right now the choices I have for communicating with people I know are either convenient and secure but require non-free code (Signal), convenient and free but insecure (SMS) or secure and free but horribly inconvenient (gpg). Is there really no way for us to work as a community to develop something that's all three?


H264 in Fedora Workstation

So after a lot of work to put the policies and pieces in place, we are now giving Fedora users access to the OpenH264 plugin from Cisco. Dennis Gilmore posted a nice blog entry explaining how you can install OpenH264 in Fedora 24.

That said, the plugin is of limited use today for a variety of reasons. The first is that it only supports the Baseline profile. For those not intimately familiar with H264, profiles are basically a way of defining subsets of the codec. As you might guess from the name, Baseline sits at the bottom of the H264 profile list, so any file encoded with another profile will not work with it; the profile you need for most online video is High. On the other hand, if you encode a file using OpenH264 it will work with any decoder that can handle Baseline or higher, which is basically every one of them. And some things do use H264 Baseline, such as WebRTC.
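
As a rough illustration of that last point (not from the post; I am assuming the plugin in question is the GStreamer openh264enc element, and the pipeline, buffer count and output file name are made up), encoding a Baseline-profile file could look something like this:

```cpp
// Hedged sketch: encode a test pattern to Baseline H264 with the GStreamer
// OpenH264 element. Pipeline details and the file name are invented for
// the example.
#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch(
      "videotestsrc num-buffers=300 ! openh264enc ! h264parse ! "
      "matroskamux ! filesink location=baseline.mkv", &error);
  if (!pipeline) {
    g_printerr("Failed to build pipeline: %s\n", error->message);
    g_error_free(error);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  // Block until the stream finishes or an error is reported.
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

  if (msg)
    gst_message_unref(msg);
  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}
```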

But we realize that to make this a truly useful addition for our users we need to improve the profile support in OpenH264. Luckily we have Wim Taymans looking at the issue, and he will work with Cisco engineers to widen the range of supported profiles.

Of course, just adding H264 doesn’t solve the codec issue, and we are looking at ways to bring even more codecs to Fedora Workstation. There is a limit to what we can do there, but I do think we will have some announcements this year that bring us a lot closer, and long term I am confident that efforts like the Alliance for Open Media will provide a path to a future dominated by royalty-free media formats.

But for now thanks to everyone involved from Cisco, Fedora Release Engineering and the Workstation Working Group for helping to make this happen.

Security From Whom?

Secure from whom? That is what I was asked after my recent post questioning the positioning of Mir/Wayland as a security improvement.

Excellent question — I am glad you asked! Let us take a look at the whos and compare.

To take advantage of the X11 protocol issues, you need to be able to speak X11 to the server. Assuming you haven’t misconfigured something (ssh or your file permissions) so that other users’ software can talk to your server, that means getting you to run evil X11 protocol code like XEvilTeddy. Who can do that? Well, there are probably a few thousand people who can. That is a lot, but most of them are application developers or maintainers who would have to sneak the changes in via source form. That is possible, but it is slow, carries a high risk of discovery, and has problems with deniability. And choosing X11 as a mechanism is just plain silly: just contact a command-and-control server and download the evil payload instead. There is also a smaller number of people who can attack via binaries, either because distributions take binaries directly from them or because they can change and re-sign binary packages. But that would mean your entire distribution is compromised, and choosing the X11 attack would be really silly again.

Now, let us look at the who of a side-channel attack. This requires the ability to run code on your machine, but it does not have to be code that can speak X11 to your X server equivalent. It can be sandboxed code, such as JavaScript, even when the sandbox is functioning as designed. Who can do that? Well, anyone who controls a web server you visit; plus any adserver network used by such web servers; plus anyone buying ads from such adserver networks. In short, just about anyone. And tracking the origin of such code created by an evil advertiser would be extremely hard.

So to summarize: attacking the X11 protocol is possible for a relatively small group of people who have much better methods available to them; attacking via a side channel can be done by a much wider group who probably do not have better methods. The former threat is so small as to be irrelevant in the face of the latter.

Look, it is not that I think of security in black-and-white terms. I do not. But if improved security is your motivation, then looking at a Linux laptop and deciding that what needs doing is to pour man-decades into a partial replacement for the X server is a bad engineering decision when there are so many more important concerns; in other words, you are doing it wrong. And selling said partial X server replacement as a security improvement is at best misleading and uninformed.

On the other hand, if you are working on Mir/Wayland because that kind of thing floats your boat, then fine. But please do not scream “security!” when you break, say, my colour picker.

Boost Graph Library and C++ >17

I’ve recently been looking at the Boost Graph Library (BGL), by reading through the excellent BGL book and playing with the BGL examples, which are mostly from the book. Although it was written 15 years ago, in 2001, it doesn’t feel dated, because it uses techniques that are still being actively discussed now for C++17 or later. And there’s a clear lineage from then to now.

For instance, I’ve been watching the charming Programming Conversations videos by Alexander Stepanov, who brought generic programming to C++ by designing the STL. At some early point in the videos, he mentioned the idea of concepts, which he expected to be in C++17, and showed how he mimicked the simpler concepts syntax (without the checking) with some #defines. Unfortunately, that proposal, by Andrew Sutton, has recently been rejected for C++17, though it seems likely to succeed after C++17. Andrew Sutton demonstrates the proposed syntax wonderfully in this C++Now 2015 Keynote video. He has implemented C++ concepts in gcc 6, and I’ve played with it very superficially for libsigc++.

I’ve written and maintained lots of C++ template code and I’ve never liked how each template makes only implicit requirements of its types. I would feel more comfortable if I could express those requirements in a concept and have the compiler tell me if my template’s types don’t “model” the concept. Compiler errors would then mention problems at that higher level instead of spewing details about exactly how compilation failed. And eventually, C++ might allow checking of semantics as well as just syntax. I can even imagine tools that would analyze template code and suggest that it should require certain concepts for its types, a little like how the latest compilers can suggest that I use the override keyword on virtual method overrides. This means more checking at compile time, and that makes me happy. However, I understand why it would need multiple compilers to implement it before it would be accepted into C++17.
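
To make that concrete, here is a tiny sketch in the Concepts TS syntax that gcc 6 accepts with -fconcepts (the syntax may well change before standardization); the concept and function names are just for illustration:

```cpp
#include <iostream>

// A concept stating the requirement explicitly: T must support operator<
// yielding something convertible to bool. (Concepts TS "concept bool"
// syntax, as implemented in gcc 6 with -fconcepts.)
template <typename T>
concept bool LessThanComparable = requires(T a, T b) {
  { a < b } -> bool;
};

// The requirement is now part of the interface, so a type that doesn't
// model the concept is rejected at the call site rather than deep inside
// the template body.
template <LessThanComparable T>
const T& min_of(const T& a, const T& b) {
  return (b < a) ? b : a;
}

int main() {
  std::cout << min_of(3, 7) << '\n';  // int models LessThanComparable
  // Calling min_of with a type lacking operator< would produce a concept
  // error here, at the call site.
}
```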

Anyway, I started reading that BGL book and immediately noticed the foreword by the same Alexander Stepanov, which mentions generic programming ideas such as concepts. The BGL uses concepts, though with minimal checking, and the book uses them to show the structure of the API. Furthermore, as I tried to get some simple changes into the BGL, I noticed that the same Andrew Sutton had been a maintainer of the BGL.

I began playing with the BGL by converting its example code to modern C++, replacing as many as possible of those verbose traits typedefs and awkward tie() calls with the auto keyword and range-based for loops. The result looks far clearer to me, letting us see how the API could be improved further. For instance, the BGL’s use of generic free-standing functions can seem a little unconstrained to people used to knowing exactly what method they can call on what object, particularly as the BGL puts everything in the boost namespace instead of boost::graph. But Bjarne Stroustrup’s Unified Call proposal (apparently also rejected for C++17) would improve that: num_vertices(graph) could be written as graph.num_vertices(), and Concepts would let the compiler know whether that should be allowed.
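
As a rough before-and-after (my own minimal illustration rather than one of the converted examples; the tiny graph is made up), boost::make_iterator_range is one way to feed the iterator pair returned by vertices() into a range-based for loop:

```cpp
#include <iostream>
#include <boost/graph/adjacency_list.hpp>
#include <boost/range/iterator_range.hpp>

int main() {
  using Graph = boost::adjacency_list<>;
  Graph graph(4);
  boost::add_edge(0, 1, graph);
  boost::add_edge(1, 2, graph);

  // Old style: verbose traits typedefs plus tie():
  //   boost::graph_traits<Graph>::vertex_iterator vi, vi_end;
  //   for (boost::tie(vi, vi_end) = boost::vertices(graph); vi != vi_end; ++vi)
  //     std::cout << *vi << '\n';

  // Newer style: auto plus a range-based for over the iterator pair.
  for (auto vertex : boost::make_iterator_range(boost::vertices(graph)))
    std::cout << vertex << '\n';
}
```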

So, though the BGL source code seems to have had very little attention over the last 15 years, and now looks almost abandoned, it’s clearly been an inspiration for the most current trends in C++, such as Concepts and Unified Calling. All the work on C++11 and C++14 has drained the swamp so much that these old ideas are now more obviously necessary.
