GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

January 05, 2018

GitHub Issue Notifications on Open Source Projects

Many people receive too many GitHub notifications. (Image parts by freepik.com)

(This post was cross-posted with minor modifications on Medium.)

Many open source project maintainers suffer from a significant overdose of GitHub notifications. Many have turned them off completely because of it.

We (GitMate.io) are constantly researching how people handle the flood of incoming issues, aiming to improve the situation by applying modern technology to the problem. (Oh, and we love free software!)

By analyzing the biggest open source repositories on GitHub (more info on the data below), we’ve seen that a contributor to any of those projects responds to only 2.3% of all issues on average. (We count as a contributor anyone who commented on at least two issues they didn’t open.)

Contributor Interest Histogram (Summed over the biggest repositories)

This makes clear that for any bigger open source project, “watching” the repository results in a lot of spam for most people. If they don’t respond, notifying them added no value to the discussion after all.

We can also observe that only very few maintainers handle any significant portion of the issues. Only six human contributors in total respond to more than every fifth issue. Here are our heroes:

25.05%: golang/go      -> ianlancetaylor, Watching
47.48%: moby/moby      -> thaJeztah,      Watching
27.31%: moby/moby      -> cpuguy83,       Watching
36.67%: owncloud/core  -> PVince81,       Not watching
47.12%: saltstack/salt -> gtmanfred,      Watching
25.54%: saltstack/salt -> Ch3LL,          Watching

However, we do see that 29.1% (117) of all contributors (402) are still subscribed to all notifications of the repository (watching it).

Switching to Polling

Many contributors switch to polling instead of watching the main repository.

However, we still see that the main maintainers keep watching the repository: without them, it’s very easy to miss out on new issues and it’s hard to make sure that the right people take a look at the right issues in a decentralized system.

Introducing Automation

In many communities we see home-grown bots arising that apply labels and sometimes assign people based on keywords. This works especially well for automatically created issues (e.g. from Sentry) but is not a full solution.

We’ve tried it. Contributors started mentioning keywords deliberately, and it didn’t really work for user-reported issues.
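For illustration, here is a minimal sketch of the kind of static keyword-based labeling such bots do; the keyword map and the sample issue text are invented and don’t correspond to any particular bot’s rule set:

# Minimal keyword-based labeling sketch; the keyword map is invented.
KEYWORD_LABELS = {
    "traceback": "bug",
    "crash": "bug",
    "documentation": "docs",
    "feature request": "enhancement",
}

def labels_for(issue_text):
    """Return the labels whose keywords appear in the issue text."""
    text = issue_text.lower()
    return {label for keyword, label in KEYWORD_LABELS.items() if keyword in text}

print(labels_for("Crash on startup, traceback attached"))  # {'bug'}

Static rules like these break down as soon as reporters phrase things differently, or once contributors learn which words trigger which labels.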

Better Automation!

We wouldn’t be GitMate if we didn’t strive for more. Our data suggests that people are spending way too much time on their notifications. We’ve maintained coala.io in the past and we know that reading through all of them is impossible even for core maintainers. Static keyword based automation doesn’t seem to be enough.

For quite a while now we’ve been hacking on an artificial intelligence that helps you deal with this problem: it analyzes exactly what every person on your team discusses on GitHub or GitLab, and mentions the people who matter for solving any new issue.
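The model itself isn’t described in this post, but as a rough illustration of the general idea (and explicitly not our actual implementation), one could rank contributors by the textual similarity between a new issue and each contributor’s past comments; the data below is invented:

# Illustrative sketch only: rank contributors by TF-IDF similarity between
# a new issue and each contributor's concatenated comment history.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = {
    "alice": "crash in the installer on windows setup.exe msi",
    "bob": "font rendering hinting freetype glyphs look blurry",
}
new_issue = "The installer crashes on Windows 10 when running setup.exe"

contributors = list(history)
matrix = TfidfVectorizer().fit_transform(
    [history[c] for c in contributors] + [new_issue])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Mention the contributors whose past discussions best match the new issue.
for name, score in sorted(zip(contributors, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")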

GitMate is built as a fully automated triaging solution. Right now it already mentions related developers in new issues, finds duplicates, labels issues and closes old issues. It is already used by companies like ownCloud and Kiwi.com, and we’re looking for more beta testers.

If you like this idea, visit GitMate.io and shoot us an email at lasse@gitmate.io :). If you find issues, file them at code.gitmate.io.

About the Data…

We’ve scraped data from a lot of GitHub repositories. We only wanted to look at the biggest ones (measured by scraped file size, i.e. roughly the amount of text communicated across all issues). We’ve excluded ‘ZeroK-RTS/CrashReports’ because no humans seem to be operating that repository. The results are statistics drawn from these repositories:

  • kubernetes/kubernetes
  • javaee/glassfish
  • microsoft/vscode
  • dart-lang/sdk
  • golang/go
  • moby/moby
  • owncloud/core
  • saltstack/salt

We have filtered out any account with ‘bot’ in the username, as well as the ownclouders account, which is using GitMate.
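As a simplified sketch of the kind of computation behind the contributor numbers above (the data layout here is invented; the real analysis lives in the Jupyter Notebook mentioned below):

# Simplified sketch of the contributor/response metric; the issue layout is
# invented for illustration. Each issue: {"author": str, "commenters": set}.
def response_rates(issues):
    commented = {}  # username -> indices of issues they commented on
    for i, issue in enumerate(issues):
        for user in issue["commenters"]:
            if "bot" in user.lower() or user == issue["author"]:
                continue  # ignore bots and comments on one's own issue
            commented.setdefault(user, set()).add(i)
    # A "contributor" commented on at least two issues they didn't open.
    contributors = {u: ids for u, ids in commented.items() if len(ids) >= 2}
    return {u: len(ids) / len(issues) for u, ids in contributors.items()}

issues = [
    {"author": "alice", "commenters": {"bob", "carol"}},
    {"author": "dave", "commenters": {"bob"}},
    {"author": "erin", "commenters": {"carol", "ci-bot"}},
]
print(response_rates(issues))  # bob and carol each respond to 2/3 of issues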

If you’re interested in more information, we can share our Jupyter Notebook and the data with you — just send us an email at lasse@gitmate.io.

January 04, 2018

2018-01-04 Thursday

  • Mail chew; admin, sync with Jona, then Kendy, ESC call.
  • Irritated by dish-washer, why is it that a dish-washer cannot have a colored LED on the front with a large diffuser that shows green: if it has been through a wash-cycle, and the door has not been opened yet (ie. 'clean'), that slowly decays into red over a minute or so after opening the door. The current effort has an un-lit display to denote both dirty and clean states, with a lit-up one for washing; hey ho.

A (more) random act of kindness

Not too long ago, Kat and I were discussing the postcards we had sent to people who had adopted us. When we found I had sent more, I speculated that it had to do with my position in the list of people who are up for adoption. We were listed alphabetically, and that made me the first and default choice. I’m using the past tense here because I pushed a fix this morning as my birthday present. Now the list order is random, and hopefully the other people on it will get a fairer share of fuzzy feelings.

I’d like to take this opportunity to thank all the people who became Friends of GNOME, whether they chose me or someone else for the postcard, or even if they opted out. Your donation to the GNOME Foundation helps us a lot. And if you’re not already a donor, consider becoming one!

I also opened a Liberapay account today. The description is disappointingly empty at the moment, but if you feel like directly supporting my efforts, any donation you make there will be much appreciated.

New header for the new year!

Happy new year everyone!

Aryeom started the first day of the year with the live drawing of a new header illustration welcoming this brand new year. Well, it was about time, since we still had a rather summer-themed header until yesterday. 🙂

New year 2018
This new image also happens to be in 16:9 format, so it can be used as a background image on most screens. Just click the thumbnail on the right to download it full-size.
It is licensed Creative Commons BY 4.0 by Aryeom Han, ZeMarmot director.

Also, the drawing session was streamed live (as many of Aryeom’s GIMPing sessions are now, as we explained in the “Live Streaming while GIMPing” section of our 2017 report). If you missed it, you can have a look at the recording. As usual, it was not edited afterwards, nor sped up or anything; oh, and we certainly don’t add any music to make it look cooler or whatever. 😛
This was a really focused live session, which explains why it is nearly a one-hour video. Just skip through it if you get bored. 😉
Enjoy!

This drawing and this live stream are made possible thanks to our many donors!

Reminder: Aryeom's Libre Art creation can be funded on
Liberapay, Patreon or Tipeee through ZeMarmot project.

GtkSourceView fundraising – November/December report

In September I launched a fundraiser for the GtkSourceView library. Here is a report for the past two months.

What has been achieved

To set expectations: I haven’t worked hard on GtkSourceView and Tepl this time around, because the fundraiser is not as successful as I would like. Since I’m paid for less than one hour per week on that project, I don’t feel obliged to work more than ten times that; I think that’s understandable.

But I still continue to maintain GtkSourceView and Tepl, and I’ve made a little progress on file loading and saving. The tasks I’ve done in November/December:

  • Code reviews, especially for *.lang files (needed for syntax highlighting);
  • Triaging incoming bugs on the bug tracker;
  • Doing releases;
  • Writing an Uncrustify configuration file to apply the GtkSourceView coding style, which will ease contributions;
  • Continuing the high-level API for file loading and saving in Tepl;
  • A few other development tasks in Tepl.

More fun with fonts

Just before Christmas, I spent some time in New York to continue font work with Behdad that we had begun earlier this year.

As you may remember from my last post on fonts, our goal was to support OpenType font variations. The Linux text rendering stack has multiple components: freetype, fontconfig, harfbuzz, cairo, pango. Achieving our goal required a number of features and fixes in all these components.

Getting all the required changes in place is a bit time-consuming, but the results are finally starting to come together. If you use the master branches of freetype, fontconfig, harfbuzz, cairo, pango and GTK+, you can try this out today.

Warm-up

But beyond variations, we want to improve font support in general. To start off, we fixed a few bugs in the color Emoji support in cairo and GTK+.

Polish

Next were small improvements to the font chooser, such as a cleaner look for the font list, type-to-search and maintaining the sensitivity of the select button:

Features

I also spent some time on OpenType features, and making them accessible to users.  When I first added feature support in Pango, I wrote a GTK+ demo that shows them in action, but without a ready-made GTK+ dialog, basically no applications have picked this up.

Time to change this! After some experimentation, I came up with what I think is an acceptable UI for customizing features of a font:

It is still somewhat limited since we only show features that are supported by the selected font and make sense for entire documents or paragraphs of text.  Many OpenType features can really only be selected for smaller ranges of text, such as fractions or subscripts. Support for those may come at a later time.

Part of the necessary plumbing for making this work nicely was to implement the font-feature-settings CSS property, which brings GTK+ closer to full support for level 3 of the CSS font module. For theme authors, this means that all OpenType font features are accessible from CSS.

One thing to point out here is that font feature settings are not part of the PangoFont object, but get specified via attributes (or markup, if you like). For the font chooser, this means that we’ve had to add new API to return the selected features: gtk_font_chooser_get_font_features(). Applications need to apply the returned features to their text by wrapping them in a PangoAttribute.
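For illustration (this snippet is not part of the dialog work itself), here is roughly how an application could apply such a feature string from Python with PyGObject, using the markup route; the chosen features are just examples:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# Hypothetical example: enable small caps and old-style figures on a label.
# The font_features value uses the CSS font-feature-settings syntax.
window = Gtk.Window(title="OpenType features")
label = Gtk.Label()
label.set_markup('<span font_features="smcp=1, onum=1">Hello 0123456789</span>')
window.add(label)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()

The same feature string can equally be attached as a font-features attribute in a PangoAttrList, as described above.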

Variations

Once we had this ‘tweak page’ added to the font chooser, it was the natural place to expose variations as well, so this is what we did next. Remember that variations define a number of ‘axes’ for the font, along which the characteristics of the font can be continuously changed. In UI terms, this means that we add sliders similar to the one we already have for the font size:

Again, fully supporting variations meant implementing the corresponding  font-variation-settings CSS property (yes, there is a level 4 of the CSS fonts module). This will enable some fun experiments, such as animating font changes:

All of this work would be hard to do without some debugging and exploration tools. gtk-demo already contained the Font Features example. During the week in New York, I’ve made it handle variations as well, and polished it in various ways.

To reflect that it is no longer just about font features, it is now called Font Explorer. One fun thing I added is a combined weight-width plane, so you can now explore your fonts in 2 dimensions:

What’s next

As always, there is more work to do. Here is an unsorted list of ideas for next steps:

  • Backport the font chooser improvements to GTK+ 3. Some new API is involved, so we’ll have to see about it.
  • Add pango support for variable families. The current font chooser code uses freetype and harfbuzz APIs to find out about OpenType features and variations. It would be nice to have some API in pango for this.
  • Improve font filtering. It would be nice to support filtering by language or script in the font chooser. I have code for this, but it needs some more pango API to perform acceptably.
  • Better visualization for features. It would be nice to highlight the parts of a string that are affected by certain features. harfbuzz does not currently provide this information though.
  • More elaborate feature support. For example, it would be nice to have a way to enable character-level features such as fractions or superscripts.
  • Support for glyph selection. Several OpenType features provide (possibly multiple) alternative glyphs,  with the expectation that the user will be presented with a choice. harfbuzz does not have convenient API for implementing this.
  • Add useful font metadata to fontconfig, such as ‘Is this a serif, sans-serif or handwriting font?’, and use it to offer better filtering.
  • Implement @font-face rules in CSS and use them to make customized fonts first-class objects.

Help with any of this is more than welcome!

January 03, 2018

New “mypaint-brushes” package

Since January 1st, GIMP depends on the “mypaint-brushes” repository, which I am maintaining until the MyPaint project finally takes it in alongside its other repositories.

I am hoping that I won’t have to maintain this for long, and I am looking forward to the MyPaint developers taking care of it (and last I heard, in the bug report, they wanted to). So this blog post is also to say that I am not trying to fork MyPaint or anything. 😛 I am just getting a little ahead because unfortunately we cannot wait much longer: GIMP now uses libmypaint and we are really looking into releasing GIMP 2.10 as soon as we can. Therefore we need the MyPaint brushes as their own separate package, which allows the following:

  • Not having MyPaint as a de facto dependency of GIMP, since the brushes are currently part of MyPaint itself (not separate) and our new MyPaint brush tool in GIMP is basically useless without them.
  • We can now check the existence and path of the brushes at build time (through pkg-config), instead of guessing, making sure GIMP will have brushes to work with.
  • The libmypaint library has recently been versioned because of changes in its API (the current libmypaint repository’s master branch is the future version 2). Since libmypaint 2 has no release yet, GIMP uses libmypaint 1 (and likely still will when we release).
    Similarly, the mypaint-brushes package I created is also versioned, because the brush format has evolved with the API. It has new settings and new inputs, which used to crash older libmypaint. The crashes caused by new settings have been fixed, but the ones caused by new inputs still exist to this day. Obviously this is not good, and we cannot afford to have GIMP crash when using newer brushes. So we need versioned brushes.
  • Even if the crashes were all fixed, some brush settings changed their behavior/meaning (for instance the computation of speed). That basically means the brush format changed in an incompatible way; another reason to version the brushes.

So that’s one more package out there, but an important one, and any distribution that wishes to package GIMP will also have to package it.

Also, if other projects use libmypaint, I would suggest they also depend on mypaint-brushes (as should MyPaint itself, actually). 🙂

P.S.: of course, this does not fix the custom brushes that someone would import from MyPaint to GIMP, but at least we’ll have good default brushes.

P.P.S.: if you build the development version of GIMP yourself, read the INSTALL file carefully. In particular, you don’t want to install the master branch (as I said above, master is version 2; we use version 1), and when installing into a non-standard prefix, make sure your PKG_CONFIG_PATH environment variable is properly set.

Reminder: my Free Software coding can be funded on:
Liberapay, Patreon or Tipeee through ZeMarmot project.

2018-01-03 Wednesday

  • Up early; larger babes to school; back to work. Mail, sync with Miklos. Plugged away at admin and accumulated task backlog bits, call with Andras. Practiced bass guitar in the evening.

January 02, 2018

2017 Sumana In Review

Four years ago, during my first batch at the Recurse Center, every day I'd write in a little notebook on the subway on my way home, jotting down a few bullet points about what I had learned that day. I found it helped in a variety of ways, and the keenest was that on bad days, reviewing my notes reminded me that I was in fact progressing and learning things.

On any given day in 2017 I often did not feel very happy with my progress and achievements and how I was using my time. I fell ill a lot and I was heartsick at the national political scene and current events. It is genuinely surprising to me to look back and take stock of how it all added up.

Adventures:

I went hiking in Staten Island and in the Hudson Valley. I got back on my bike and had some long rides, including on a canal towpath in New Jersey and over the Queensboro bridge. (And had my first accident -- a car in my neighborhood rear-ending me at a traffic light -- and thankfully escaped without damage or injury.) I learned how to bake bread. I got to meet Ellen Ullman OMG. And I tried to travel less than I had in previous years, but I still had some fine times in other places -- notably, I had a great time in Cleveland, I witnessed the total solar eclipse in Nashville, and I visited Charlotte, North Carolina (where, among other things, I visited the NASCAR Hall of Fame).

Community service:

I did some of the same kinds of volunteering and activism that I'd done in previous years. For instance, I continued to co-organize MergeSort, participated in a fundraising telethon for The Recompiler, signal-boosted a friend's research project to get more participants, and helped revitalize a book review community focusing on writers of color. Also, I served again as the auctioneer for the James Tiptree, Jr. Literary Award fundraising auction at WisCon, which is a particularly fun form of community service. The Tiptree Award encourages the exploration & expansion of gender. I wrote this year about what an award does, and the reflections I've seen from winners of the Tiptree Awards and Fellowships tell me those honors are doing the job -- encouraging creators and fans to expand how we imagine gender. This year I also deepened my commitment to the Tiptree Award by accepting the organization's invitation to join the Tiptree Motherboard; I am pleased to have helped the award through a donation matching campaign.

But the big change in my community service this year was that I tried to prioritize in-person political work. I called, emailed, and wrote postcards to various government officials. I participated in my local Democratic Club, including going door-to-door petitioning to get my local city councilmember onto the ballot for reelection.

And I found that I could usefully bring my technologist perspective to bear on the city and state levels, especially regarding transparency in government software. I spoke to my local councilmember about my concern regarding public access defibrillator data (the topic that led me to file my first-ever Freedom of Information Law requests, for government health department records) and this inspired him to sponsor a bill on that topic. (Which is now filed as end-of-session partly because of the limbo in potentially getting PAD data from NYC's open data portal -- I need to send an email or two.) I was invited to speak to a joint committee of the New York State Assembly on the software side of our forensics labs, and got particularly interested in this aspect of due process in our criminal justice system, publicizing the issue in my MetaFilter posts "'maybe we should throw an exception here??'" and "California v. Johnson". I testified before the Committee on Technology of the New York City Council on amendments to our open data law (I didn't prep my public comment, so this text is reconstructed from memory; video), and then spoke before the same committee on an algorithmic accountability measure (and publicized the bill, especially keeping the Recurse Center community apprised as best I could). And I did research and outreach to help ensure that a state legislature hearing on protecting the integrity of our elections included a few researchers and activists it wouldn't have otherwise.

In 2018 I want to continue on this path. I think I'm, if not making a difference, making headway towards a future where I can make a difference.

Work:

This was by far Changeset Consulting's busiest year.

I had a mix of big projects and smaller engagements. First, some of the latter: I advised PokitDok on developer engagement, with help from Heidi Waterhouse. For Open Tech Strategies, I wrote an installation audit for StreetCRM. And, working with CourageIT, I came in as a part-time project manager on a government health IT open source project so the lead developer could focus more on architecture, code, and product management.

Some larger and longer projects:

Following a sprint with OpenNews in December 2016 to help write a guide to newsrooms who want to open source their code, I worked with Frances Hocutt to create a language-agnostic, general-purpose linter tool to accompany that guide. "The Open Project Linter is an automated checklist that new (or experienced but forgetful) open source maintainers can use to make sure that they're using good practices in their documentation, code, and project resources."

I spent much of the first half of 2017 contracting with Kandra Labs to grow the Zulip community, helping plan and run the PyCon sprint and co-staffing our PyCon and OSCON booths, running English tutoring sessions alongside Google Summer of Code application prep, and mentoring an Outreachy intern, along with the usual bug triage, documentation updates, and so on. We wrapped up my work as Zulip's now such a thriving community that my help isn't as needed!

From late 2016 into 2017, I've continued to improve infrastructure and documentation for a Provider Screening Module that US states will be able to use to administer Medicaid better (the project which spurred this post about learning to get around in Java).

And just in the last few months I started working on two exciting projects with organizations close to my heart. I'm thrilled to be improving HTTPS Everywhere's project workflow for developers & maintainers over the next few months, working with Kate Chapman via Cascadia Technical Mentorship (mailing list announcement). And, thanks to funding by Mozilla's open source grants program and via the Python Software Foundation, the Python Package Index -- basic Python community infrastructure -- is getting a long-awaited overhaul. I'm the lead project manager on that effort, and Laura Hampton is assisting me. (Python milestone: my first time commenting on a PEP!)

Along the way, I've gotten a little or a lot better at a lot of things: git, bash, LaTeX, Python (including packaging), Sphinx, Read the Docs, Pandoc, regular expressions, CSS, the Java ecosystem (especially Gradle, Javadocs, Drools), the Go ecosystem, Travis CI, GitHub Pages, Postgres, sed, npm, Linux system administration, accessibility standards, IRC bots, and invoicing.

Talks And Other Conferences:

This year, in retrospect, instead of doing technical talks and expository lectures of the type I'm already good at, I played with form.

At LibrePlanet 2017 I gave the closing keynote address, "Lessons, Myths, and Lenses: What I Wish I'd Known in 1998" (schedule, video, in-progress transcript). I tried something aleatoric and it worked pretty well.

At Penguicon 2017 I was one of several Guests of Honor, and spoke in several sessions including "Things I Wish I'd Known About Open Source in 1998" (which was different from the LibrePlanet version, as intended) and "What If Free and Open Source Software Were More Like Fandom?" (further links).

Then, at PyGotham, Jason Owen and I co-wrote and co-starred in a play about management and code review: "Code Review, Forwards and Back" (video on YouTube, video on PyVideo, commentary).

I also attended Maintainerati and led a session, attended !!Con, worked a booth for Zulip at OSCON, attended PyCon and helped run Zulip's sprint there, and co-sponsored a post-PyGotham dinner.

Other Interesting Things I Wrote:

I did not write this year for magazines; my writing went into this blog, MetaFilter, Dreamwidth, microblogging, and client projects, mostly. I also wrote an entry for a local business competition (I didn't make it very far but I'm glad I did it, especially the finance bits) and started two book proposals I would like to return to in 2018.

I've mentioned already some of the posts I'm happy about. Some others:

"On Noticing That Your Project Is Draining Your Soul" (every once in a while someone emails me and mentions that this has helped them, which means a lot)

"How to Teach & Include Volunteers who Write Poor Patches" (12 things you can do)

"Inclusive-Or: Hospitality in Bug Tracking", a response to Jillian C. York and Lindsey Kuper.

I turned part of "Some posts from the last year on inclusion" into "Distinguishing character assassination from accountability", a post about pile-on culture and callout culture where I pulled out quotes from 11 writers on how we take/charge each other with responsibility/power within communities.

I loved Jon Bois's 17776 and discussed it with other fans on MetaFilter, and then, to try to understand its amazingness better, wrote "Boisebration", collecting links to fiction and nonfiction by Bois about class, feminism, aging, sports, politics, wonder, education, & art (and 17776 precursors/callbacks).

I found out about Robert E. Kelly, like so many did, when his kids crashed his BBC interview, then collected some links in a MetaFilter post about his writing on Korea, US foreign policy, international relations, and academia.

I wrote up a bit about "1967's most annoying question for women in Catholic ministry" on MetaFilter to signal-boost another Recurser's cool project.

I enjoyed the learning and the plot twist in "The programmer experience: redundancy edition", in which I discovered a useful resource for Form 990 filings and learned to use the Arrow library for Python date-time manipulation. And was grateful to Pro Publica.

And I made a few jokes on social media I particularly liked:

yesterday, was trying to explain virtual environments/containers/VMs to a friend and said "they range from Inception-style fake computers to putting a blanket on the floor and pretending it's lava"

and

today a friend and I explained leftpad & Left Shark to someone and I began sketching out a hypothetical HuffPo piece connecting them
We habitually crowdsource infrastructure from, expect unsupportedly high levels of performance from unsuspecting participants -> popcorn.gif

Public notice I received:

I got some public attention in 2017 -- even beyond the Guest of Honor and keynote speaker honors and my amazing clients -- that I would like to list, as long as I'm taking an inventory of 2017.

I rode the first revenue ride of the new Q train extension in Manhattan and really loved the art at the new 72nd Street MTA stop. A journalist interviewed me about that on video and my experience got into the New York Times story about the opening.

Presenters at the code4lib conference said their project was specifically motivated by my code4lib 2014 keynote "User Experience is a Social Justice Issue" (written version, video). I was honored and humbled.

And -- this is out of place but I need to record it -- as someone who knew Aaron Swartz, I consented to be interviewed by artists working on a play about him, and so someone briefly portrayed me (as in, pretended to be me and repeated my words aloud) in that play, Building a Real Boy.

Finally, Hari Kondabolu looked at the English Wikipedia page about him, much of which I contributed, and was amazed at how thorough it was. So that was awesome and I was proud.

Habits:

I got on Mastodon as part of my effort to improve how I use social media. I started using a new task tracker. I got back on my bike, and got somewhat into a habit of using it for some exercise and intra-city travel. A new friend got me into taking more frequent photos and noticing the world I'm in. Two new friends caused me to look for more opportunities to see musicians I love perform live.

Watched/listened:

I consumed a fair bit of media this year; didn't get into new music but enjoyed music podcasts "I Only Listen To The Mountain Goats" and "Our Debut Album". I did some book and reading reviews and will catch up to other 2017 reading sometime vaguely soon.

Leonard's film roundups & TV spotlights are a good way to see or remember most of what I saw in the last few years. TV highlights for me for 2017 are The Good Place, Jane the Virgin, The Great British Baking Show (which led me to write a tiny Asimov fanfic), Steven Universe, and Better Call Saul; I also saw Comrade Detective and Yuri!!! On Ice. Films I'm really glad I saw: The Big Sick, Schindler's List, Get Out (I fanned in MetaFilter Fanfare), In Transit, A Man For All Seasons, Hidden Figures, and Lemonade -- and a rewatch of Antitrust.

Social:

I made a few new friends this year, most notably Jason Owen and Mike Pirnat. My friends Emily and Kris got married and I got to hold up part of the chuppah for them. I took care of some friends at hard times, like accompanying them to doctor's visits. I got to see some friends I rarely see, like Mel Chua and Zed Lopez and Zack Weinberg, and kept up some old friendships by phone. My marriage is better than ever.

This year I shall iterate forward, as we all do.

January 01, 2018

Have a great 2018!

I have spent most of December with my family in Portugal and, as is becoming tradition, Helena and I (and the kids) are spending New Year’s Eve alone at our place. It gets more and more difficult to say goodbye to our family every time we need to come back, especially now that Olivia really enjoys being there and spending time with the grandparents. But I come back with my batteries charged, ready for the coming year.

This year, similar to 2014, will be one that we will never forget because of the birth of our second child, Gil. Gil is a force of Nature! So different from the quiet baby that Olivia was; he always has energy and is (almost) always smiling. But even though we are usually very tired, it’s also much more interesting that they are different like that.
Life is certainly more challenging with two babies than with one; it’s not a linear relation of “2 × kid = 2 × work”, but having a flexible schedule, an understanding manager, and an awesome wife allows me to manage.

To add more challenges to our personal life, we also moved to a new place (still in Berlin) this year. The move was already going to be tricky since we did it ourselves instead of hiring a company, but it became a boss-level challenge when I injured my leg (a torn muscle) while playing squash, four days before we were due to rent a truck and carry all the big items in it. But that’s behind us, and we love our new place!

Even if it was a good year for me personally, in terms of global events 2017 seemed pretty much a continuation of the shitty ending of 2016. That feeling of “end of the world” was still present all over the news and general day-to-day talk. In Portugal, the 3rd safest country in the world, more than 100 people were killed by wildfires, and the year also brought a dangerous drought, to the point of having to distribute water to some cities by train and truck… But sure, it’s chilly in some places, so global warming must be just a hoax.
Luckily, 2017’s big elections in Europe (France, the Netherlands, and Germany), which could have set the Union on fire, proved that people can still choose the better route for their lives, despite all the attempts to scare them off. The current situation in the EU is still alarming, but at least it held up better than I thought it would.

Workwise, it’s been another very busy year at Endless. I am still in charge of the App Center (our GNOME Software fork) and doing what I can to tame this beast. Endless’ mission has always been a noble one, but with the current direction of the world it’s even more significant and needed; so I will continue to give my best and hope we can keep making a difference in less fortunate regions. If you want to help, check out our job openings.

I really hope 2018 is a great year, with more hope than the past few years. So everybody reading this, have a great 2018!

Write about open source software

I just wanted to point out that over on the FreeDOS Blog, we're asking people to write about FreeDOS.

It's a new year, and we wanted to encourage people to contribute to FreeDOS in new ways. If you're already working on FreeDOS through code, design, testing, or some other technical way - thank you!

If you aren't sure how to contribute to FreeDOS, or want to contribute in a new way, we'd like to encourage you to try something new: Write about FreeDOS!

Write about something that interests you! Others will want to see how you're using FreeDOS, to run existing programs or to write your own programs. We want to hear from everyone! It's not just about developers, or people who contribute to the FreeDOS Project directly. Tell us how you use FreeDOS.

Post on your own blog, or email your articles to me and I'll put them up as a guest post on the FreeDOS Blog. If we can gather enough articles by Spring, we'll try to collect them in a "how-to" ebook in time for the 24th "birthday" of FreeDOS on June 29.

GTK+ Custom Widgets: General Definitions

Writing a GTK+ custom widget with Vala is easy. First of all, create an XML definition with a top-level container widget and a set of child widgets. You can use Glade to do so. This is not a Glade tutorial, so let’s start with an already designed template UI file.

For this Glade UI file to be useful in this tutorial, pay attention to the following:

  1. The top-level widget’s type must be the same as the one your Vala class derives from.
  2. The top-level widget in your UI file must be declared as a template and should have the same ID as your class. This is the C name: a class called Cust.DateChooser should use the ID CustDateChooser so that the Vala compiler can find its UI definition.
  3. The child widgets to be controlled by your class should have an ID.

Keep all of this in mind for the posts that follow.

December 31, 2017

2017: Music.

That annual list of awkward incomplete pop music preferences: Stuff I listened to a lot in the last 12 months. Which did not necessarily get released in 2017. But mostly, I think. And/or enjoyable gigs.

Noga Erez‘ debut album and Zagami Jericho‘s first EP are my favs.
Followed by the latest release by Moby & The Void Pacific Choir, Sevdaliza‘s debut, and Bulp‘s first album.

Looking back at sets, Recondite, Helena Hauff and Setaoc Mass were long nights that left marks.

Looking back at concerts, EMA, Waxahatchee, Lali Puna, Moderat.
Ufomammut and Shobaleader One were banging.
Being able to see Battery live, after all those years.
Someone please send Days’n’Daze on a European tour.

I’d like to thank my bunch.
It’s been a special year, in many ways.

Supporting Conservancy Makes a Difference

Earlier this year, in February, I wrote a blog post encouraging people to donate to where I work, Software Freedom Conservancy. I've not otherwise blogged too much this year. It's been a rough year for many reasons, and while I personally and Conservancy in general have accomplished some very important work this year, I'm reminded as always that more resources do make things easier.

I understand the urge, given how bad the larger political crises have gotten, to want to give to charities other than those related to software freedom. There are important causes out there that have become more urgent this year. Here are three issues which have become shockingly more acute this year:

  • making sure the USA keeps its commitment to immigrants and allows them to make a new life here just like my own ancestors did,
  • assuring that the great national nature reserves are maintained and left pristine for generations to come,
  • assuring that we have zero tolerance for abusive behavior — particularly by those in power against people who come to them for help and job opportunities.
These are just three of the many issues this year that I've seen get worse, not better. I am glad that I know and support people who work on these issues, and I urge everyone to work on these issues, too.

Nevertheless, as I plan my primary donations this year, I'm again, as I always do, giving to the FSF and my own employer, Software Freedom Conservancy. The reason is simple: software freedom is still an essential cause and it is frankly one that most people don't understand (yet). I wrote almost two years ago about the phenomenon I dubbed Kuhn's Paradox. Simply put: it keeps getting more and more difficult to avoid proprietary software in a normal day's tasks, even while the number of lines of code licensed freely gets larger every day.

As long as that paradox remains true, I see software freedom as urgent. I know that we're losing ground on so many other causes, too. But those of you who read my blog are some of the few people in the world who understand that software freedom is under threat and needs the urgent work that the very few software-freedom-related organizations, like the FSF and Software Freedom Conservancy, are doing. I hope you'll donate now to both of them. For my part, I gave $120 myself to the FSF as part of the monthly Associate Membership program, and in a few minutes, I'm going to give $400 to Conservancy. I'll be frank: if you work in technology in an industrialized country, I'm quite sure you can afford that level of money, and I suspect those amounts are less than most of you spent on technology equipment and/or network connectivity charges this year. Make a difference for us and give to the cause of software freedom at least as much as you're giving to large technology companies.

Finally, a good reason to give to smaller charities like the FSF and Conservancy is that your donation makes a bigger difference. I do think bigger organizations, such as (to pick an example of an organization I used to give to) my local NPR station, do important work. However, I was listening this week to my local NPR station, and they said their goal for that day was to raise $50,000. For Conservancy, that's closer to the goal we have for an entire fundraising season, which for this year was $75,000. The thing is: NPR is an important part of USA society, but it's one that nearly everyone understands. So few people understand the threats looming from proprietary software, and they may not understand at all until it's too late — when all their devices are locked down, DRM is fully ubiquitous, and no one is allowed to tinker with the software on their devices and learn the wonderful art of computer programming. We are at real risk of reaching that dystopia before 90% of the world's population understands the threat!

Thus, giving to organizations in the area of software freedom is just going to have a bigger and more immediate impact than more general causes that more easily connect with people. You're giving to prevent a future that not everyone understands yet, and making an impact on our work to help explain the dangers to the larger population.

Face detector and the Hungarian method

It has been a long time since I last wrote here, but I have been busy with many things that I am trying to do at the same time. What is interesting is that it seems I will be able to deal with them. But something I haven’t commented on publicly is that, as part of my dissertation or thesis project, I proposed to implement a multiple face detector and tracker and use it to apply Purikura-like effects in Cheese. I did some research during the last academic semester about how to implement this, in a course called Thesis Project I, where students focus only on researching the topic they will treat in their project. Now that the course has finished, I have time to try the things I researched, and I am very happy because I could solve a problem I hit when I first tried to add support for multiple faces in gstfaceoverlay, back when I first wanted to implement these kinds of effects.

The problem I had, which was described in an old post, was that the OpenCV face detector has no sense of order. If there are two people in the scene, the first person may be tagged A and the second B. In the next frame you would expect the first person to be tagged A and the second B, as before, but as I said, the face detector has no sense of order. I have solved this problem using the Hungarian method. The idea I applied is very simple.

Note: dlib’s and OpenCV’s face detectors give us the bounding boxes of each face. When I talk about the position of a detected face, I refer to the centroid of its bounding box.

So let’s suppose we are in the first frame. There are two people in there: the first one represented by the yellow circle and the second one represented by the red circle. Our face detector tells us that the first person is A and the second person is B.

Frame 1

Then in the second frame people have moved, and thus their faces too, so the faces have changed position. However, when the face detector is run on this frame, it tells us that the first person is B and the second person is A. That’s not how it should be: what we want is that, as in the first frame, the first person is tagged A again and the second one B.

Frame 2 State 1

How do we solve this problem? Unless the first person has teleportation powers, the position of his/her face in the next frame is expected to be pretty close to the previous position. The same goes for the second person: his/her face shouldn’t have gone very far. We need some way to match each face detected in the current frame to the face it belongs to in the previous frame. So the task is simple: for the first detected person, I calculate the (Euclidean) distance to each of the previously detected faces… and for the second person, I do the same. The next task is to see which distances are shortest. This is where the Hungarian method comes in.

Frame 2 with annotations

The Hungarian method is an algorithm for solving assignment problems. Wikipedia has a simple problem it uses to explain how the method can be applied. I quote it directly:

In this simple example there are three workers: Armond, Francine, and Herbert. One of them has to clean the bathroom, another sweep the floors and the third washes the windows, but they each demand different pay for the various tasks. The problem is to find the lowest-cost way to assign the jobs. The problem can be represented in a matrix of the costs of the workers doing the jobs. For example:

           Clean bathroom   Sweep floors   Wash windows
Armond         $2               $3              $3
Francine       $3               $2              $3
Herbert        $3               $3              $2

The Hungarian method, when applied to the above table, would give the minimum cost: this is $6, achieved by having Armond clean the bathroom, Francine sweep the floors, and Herbert wash the windows.

With the distances calculated for the previous and current frames, I build my matrix of distances (or costs) as follows:

                           First person in frame 1   Second person in frame 1
First person in frame 2               d1                         d1’
Second person in frame 2              d2’                        d2

The Hungarian method would tell us that the first person in frame 2 should be matched to the first person in frame 1, and that the second person in frame 2 should be matched to the second person in frame 1. This is easily explained: d1 is less than d1’ and d2 is less than d2’. So we can reorder our tags as A and B (instead of B and A) and tag the faces as in the figure below:

Frame 3 with annotations
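The full code at the end of this post uses a standalone C++ Hungarian implementation; just to make the matching step concrete, here is an equivalent sketch (not the code used in the videos) in Python with SciPy’s linear_sum_assignment. The centroid coordinates are invented:

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Invented centroids: previous frame in tracked order (A, B) and the current
# frame in the order returned by the detector, which may be swapped.
prev_centroids = np.array([[100, 120], [300, 110]])  # A, B
cur_centroids = np.array([[305, 112], [98, 125]])    # detector output

# Cost matrix of Euclidean distances: rows = tracked faces, cols = detections.
cost = cdist(prev_centroids, cur_centroids)

# Hungarian method: minimum-cost one-to-one assignment.
rows, cols = linear_sum_assignment(cost)
for tracked, detected in zip(rows, cols):
    print(f"tracked face {tracked} -> detection {detected}")
# tracked face 0 -> detection 1
# tracked face 1 -> detection 0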

You can see a demonstration in these videos. The first one uses just the face detector; you can see how the red and blue boxes switch rapidly in subsequent frames. The idea was to fix this:

Then in the video below you can see how the blue and green boxes stick to the faces they belong to:

And that’s basically it for the purpose of this post. More complex things need to be done to get a better tracker, such as applying the Kalman filter and other methods so a face is still tracked even when the detector fails. If you prefer code, here it is. I have used dlib as the face detector, OpenCV for capturing from the webcam, and an algorithm I found by googling for the Hungarian method, because dlib’s implementation was not enough when the matrix is unbalanced (when the number of rows differs from the number of columns). The code shown has some things that are not really necessary and could have been simplified further; I have removed some lines, because it used to contain more code from when I was implementing tracking with the Kalman filter. If you have any question or suggestion, just leave a comment :)

/*
 * Copyright (C) 2017 Fabian Orccon <cfoch.fabian@gmail.com>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 * 
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 * 
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
 */



#include <cstdio>
#include <vector>
#include <valarray>
#include <algorithm>

#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_io.h>
#include <dlib/gui_widgets.h>
#include <dlib/opencv.h>
#include "opencv2/opencv.hpp"

#include "Hungarian.h"


#define FACE_ASPECT_RATIO     1.25

using namespace cv;
using namespace std;
using namespace dlib;

// Box colors. Note that OpenCV Scalars are in BGR order, matching the BGR
// frames that come from VideoCapture.
static std::vector<Scalar> COLORS = {
  Scalar(255, 0, 0),   // BLUE
  Scalar(0, 255, 0),   // GREEN
  Scalar(0, 0, 255),   // RED
  Scalar(255, 128, 0), // AZURE
  Scalar(255, 255, 0), // CYAN
  Scalar(0, 255, 255), // YELLOW
  Scalar(255, 0, 255), // MAGENTA
};

static Scalar
random_color(RNG &rng)
{
  int icolor = (unsigned) rng;
  return Scalar(icolor & 255, (icolor >> 8) & 255, (icolor >> 16) & 255);
}

static cv::Point
calculate_centroid(cv::Point & p1, cv::Point & p2)
{
  return (p1 + p2) / 2;
}

static cv::Point
calculate_centroid(dlib::rectangle & rectangle)
{
  cv::Point tl, br;
  tl = cv::Point(rectangle.left(), rectangle.top());
  br = cv::Point(rectangle.right(), rectangle.bottom());
  return (tl + br) / 2;
}

class FaceTracker {
  private:
    cv::Point centroid;
    bool have_measurement;
    dlib::rectangle measurement_info;

  public:
    dlib::rectangle bounding_box;
    Scalar color;

    FaceTracker(dlib::rectangle &bounding_box)
    {
      this->bounding_box = bounding_box;
      centroid = calculate_centroid(bounding_box);
    }

    cv::Point
    get_centroid(void)
    {
      return centroid;
    }

    void
    set_color(Scalar & color)
    {
      this->color = color;
    }

    void
    set_have_measurement(bool value)
    {
      have_measurement = value;
    }

    void
    set_measurement_info(dlib::rectangle measurement_info)
    {
      have_measurement = true;
      this->measurement_info = measurement_info;
    }

    void
    predict(void) {
      // The Kalman filter prediction was removed; simply take the latest
      // measured bounding box as the new estimate.
      bounding_box = measurement_info;
    }
};

class MultiFaceTracker {
  public:
    std::vector<FaceTracker> trackers;

    MultiFaceTracker()
    {
    }

    MultiFaceTracker(std::vector<dlib::rectangle> &dets)
    {
      int i;
      for (i = 0; i < dets.size(); i++) {
        Scalar color = COLORS[i % COLORS.size()];
        FaceTracker tracker = FaceTracker(dets[i]);
        tracker.set_color(color);
        trackers.push_back(tracker);
      }
    }

    void
    track(std::vector<dlib::rectangle> &dets)
    {
      int r, c, i;
      std::vector<std::vector<double>> cost_matrix;
      std::vector<int> assignment;

      std::vector<cv::Point> cur_centroids;

      //std::vector<cv::Point *> new_centers(real_centers.size(), NULL);
      std::vector<dlib::rectangle> new_dets;
      HungarianAlgorithm HungAlgo;

      cout << "Calculate current centroids" << endl;
      // Calculate current centroids.
      for (i = 0; i < dets.size(); i++) {
        cv::Point centroid = calculate_centroid(dets[i]);
        cur_centroids.push_back(centroid);
      }

      cout << "Initialize cost matrix" << endl;
      // Initialize cost matrix.
      for (r = 0; r < trackers.size(); r++) {
        std::vector<double> row;
        for (c = 0; c < cur_centroids.size(); c++) {
          float dist;
          dist = cv::norm(cv::Mat(cur_centroids[c]),
              cv::Mat(trackers[r].get_centroid()));
          row.push_back(dist);
        }
        cost_matrix.push_back(row);
      }

      cout << "Solve the Hungarian assignment problem" << endl;
      HungAlgo.Solve(cost_matrix, assignment);

      cout << "assignment: ";
      for (i = 0; i < trackers.size(); i++)
        cout << assignment[i] << " ";
      cout << endl;

      cout << "Reorder faces" << endl; 
      // Reorder faces.
      for (i = 0; i < trackers.size(); i++) {
        if (assignment[i] == -1) {
          cout << "A face was not detected" << endl;
          trackers[i].set_have_measurement(false);
        } else
          trackers[i].set_measurement_info(dets[assignment[i]]);
        trackers[i].predict();
      }
      cout << "Finish tracking for this frame" << endl;
    }
};

int
main(int argc, const char ** argv)
{
  VideoCapture cap;
  const char * filename;
  namedWindow("capture", WINDOW_AUTOSIZE);
  frontal_face_detector detector;
  std::vector<dlib::rectangle> prev_dets;
  std::vector<cv::Point *> real_centers;
  bool detected = false;
  bool tracker_initialized = false;
  int was_detected = false;
  int frame_number = 1;
  int frame_number_when_trackers_initialized = -1;
  MultiFaceTracker multi_tracker;

  cout << "naargs: " << argc << endl;

  if (argc < 2) {
    cerr << "This program receives a filename as an argument" << endl;
    exit (1);
  }

  filename = argv[1];
  cap.open(filename);

  if (!cap.isOpened ()) {
    cerr << "There was a problem opening '" << filename << "'" << endl;
    exit (1);
  }


  detector = get_frontal_face_detector();  
  for (;;) {
    Mat frame;
    std::vector<cv::Point *> cur_centers;
    std::vector<dlib::rectangle> dets;
    int key_pressed;
    int i;

    cout << "Frame: " << i << endl;

    cap >> frame;
    if (!cap.read (frame))
      break;

    cv_image<bgr_pixel> dlib_image(frame);
    dets = detector(dlib_image);

    // Always try to detect faces.
    detected = dets.size() > 0;
    for (i = 0; i < dets.size(); i++) {
      cv::Point tl, br;
      cv::Point *centroid;
      tl = cv::Point(dets[i].left(), dets[i].top());
      br = cv::Point(dets[i].right(), dets[i].bottom());

      //centroid = new cv::Point((tl + br) / 2);
      //cur_centers.push_back(centroid);

      // Initialize.
      if (frame_number_when_trackers_initialized == -1) {
        frame_number_when_trackers_initialized = frame_number;
        multi_tracker = MultiFaceTracker(dets);
      }
    }

    // Only do this for frames after the one in which the trackers were
    // initialized.
    if (frame_number_when_trackers_initialized > 0 &&
        frame_number > frame_number_when_trackers_initialized) {
      multi_tracker.track(dets);
      for (i = 0; i < multi_tracker.trackers.size(); i++) {
        cv::Point tl, br;
        dlib::rectangle bounding_box = multi_tracker.trackers[i].bounding_box;
        tl = cv::Point(bounding_box.left(), bounding_box.top());
        br = cv::Point(bounding_box.right(), bounding_box.bottom());

        cout << "Bounding box: " << i << endl;
        cout << "top_left: " << tl << endl;
        cout << "bottom_right: " << br << endl;

        cv::rectangle(frame, tl, br, multi_tracker.trackers[i].color);
      }
    }

    imshow ("capture", frame);

    frame_number++;
    // Change this to 100000000 to go frame by frame by pressing ENTER.
    // ESC to quit.
    key_pressed = waitKey (30);
    if (key_pressed == 13)
      continue;
    else if (key_pressed == 27)
      break;
  }

  return 0;
}

And here is the CMakeLists.txt file I used, which may be useful if you want to compile that code.

cmake_minimum_required(VERSION 2.8.12)
project( KalmanHungarianTracking )
add_definitions(-msse2)
find_package( OpenCV REQUIRED )
find_package( dlib REQUIRED )
find_package(Threads REQUIRED)

set(CMAKE_CXX_FLAGS "-O3 -lpthread")
set(CMAKE_CXX_FLAGS_RELEASE "-O3 -lpthread")

add_executable( KalmanHungarianTracking
    Hungarian.h
    Hungarian.cpp
    KalmanHungarianTracking.cpp )
target_link_libraries( KalmanHungarianTracking dlib::dlib ${CMAKE_THREAD_LIBS_INIT} ${OpenCV_LIBS} )

These three things could improve the Linux development experience dramatically, #2 will surprise you

The development experience on a modern Linux system is fairly good; however, there are several strange things, mostly legacy details that are no longer relevant, which cause weird bugs, hassles and other problems. Here are three suggestions for improvement:

1. Get rid of global state

There is a surprisingly large amount of global (mutable) state everywhere. There are also many places where said global state is altered in secret. As an example let's look at pkg-config files. If you have installed some package in a temporary location and request its linker flags with pkg-config --libs foo, you get out something like this:

-L/opt/lib -lfoo

The semantic meaning of these flags is "link against libfoo.so that is in /opt/lib". But that is not what these flags do. What they actually mean is "add /opt/lib to the global link library search path, then search for foo in all search paths". This has two problems. First of all, the linker might, or might not, use the library file in /opt/lib. Depending on other linker flags, it might find it somewhere else. But the bigger problem is that the -L option remains in effect after this. Any library search later might pick up libraries in /opt/lib that it should not have. Most of the time things work. Every now and then they break. This is what happens when you fiddle with global state.

The fix to this is fairly simple and requires only changing the pkg-config file generator so it outputs the following for --libs foo:

/opt/lib/libfoo.so

2. Get rid of -lm, -pthread et al

Back when C was first created, libc had very little functionality in it. For various reasons, when new functionality was added it went into its own library that you could then enable with a linker flag. Examples include -lm to add the math library and -ldl to get dlopen and friends. Similarly, when threads appeared, each compiler had its own way of enabling them, and eventually every compiler not using -pthread died out.

If you look at the build definitions of most projects, there is a ton of gymnastics for adding all these flags not only to compiler invocations but also to things like .pc files. And then there is code to take these flags out again when e.g. compiling with Visual Studio. And don't even get me started on related things like ltdl.

All of this is just pointless busywork. There is no reason all of these could not be in libc proper, available and used always. It is unlikely that math functions or threads are going to go away any time soon. In fact, this has already been done in pretty much every libc that is not glibc: VS has these by default, as do OSX, the BSDs and even the alternative Linux libcs. The good news is that the glibc maintainers are already in the process of making this transition. Soon all of this pointless flag juggling will go away.

3. Get rid of 70s memory optimizations

Let's assume you are building an executable and that your project has two internal helper libraries. First you do this:

gcc -o myexe myexe.o lib1.a lib2.a

This gives you a linker error due to lib2 missing some symbols that are in lib1. To fix this you try:

gcc -o myexe myexe.o lib2.a lib1.a

But now you get missing symbols in lib1. The helper libraries have a circular dependency so you need to do this:

gcc -o myexe myexe.o lib1.a lib2.a lib1.a

Yes, you do need to list lib1 twice. The reason for this lies in the fact that in the 70s memory was limited. The linker goes through the libraries one by one. When it processes a static library, it copies in all symbols that are currently listed as missing and then throws away the rest. Thus if lib2 requires any symbol from lib1 that myexe.o did not refer to, tough luck: those symbols are gone. The only way to get at them is to add lib1 to the linker line again and have it processed in full a second time.

This simple case can be fixed by hand, but things get more complicated when the libraries come from external dependencies. The correct fix for this would be to change the linker to behave roughly like this:
  • Go through the entire linker line and find all libraries.
  • Look at which ones point to the same physical files and deduplicate them.
  • Wrap all of them in a single -Wl,--start-group / -Wl,--end-group pair.
  • Do symbol lookup once, in a global context.
This is a fair bit of work and may cause some breakage. On the other hand we do know that this works because many linkers already do this, for example Visual Studio and LLVM's new lld linker.
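Until linkers change, you can get the same effect by hand with GNU ld's --start-group/--end-group flags; for the example above that would look something like this:

gcc -o myexe myexe.o -Wl,--start-group lib1.a lib2.a -Wl,--end-group

Within the group the archives are rescanned until no new undefined symbols can be resolved, so neither library has to be listed twice.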

December 30, 2017

Autotools and Rust

One of the first tasks for my project (GSoC, Rustify GJS) was simply to get Rust building alongside the C++ code using autotools. To do so I had to learn some of the autotools suite, and how to write the configuration and makefile input.

I can tell you honestly that I’m not a fan of autotools after this. Sure, it does the job, but the insane amount of macros used for setup/configuration and so on is mind-bending.

Rust and compilation


There are a few ways to compile Rust, each with pros and cons depending on your end goal. Example use cases for Rust are:

  • Embedded controllers
  • Application development
  • Libraries
  • Embedding in other languages

There are many more use cases than the above of course, but these will cover the examples I want to show here. I'll start with the simpler use case, that of compiling Rust on its own for an application.

Compiling Rust Code

This is dead simple, but! There are two ways to do so.

Cargo is the standard way to create and build Rust software. It performs a lot of functions: creating new projects, compilation, testing, benchmarking, documentation generation, publishing projects as crates, and a few more.

A Quick Binary

Let's create a new Rust project: run cargo new --bin hello_rust. This creates a new cargo project, set up as a binary, in a subdirectory of the current directory named hello_rust. The directory structure is:

.
├── Cargo.toml
└── src
    └── main.rs

Rust has also helpfully created a fn main() which prints "Hello, world!". So let's compile it with cargo build. Cargo by default builds a debug version of everything, since this is the most commonly requested mode. To build a release version run cargo build --release.

You can also compile with rustc main.rs. However if you use rustc on its own to compile, you will need to do a lot of extra stuff manually such as adding the compiler flags that cargo build --release adds if you want an equivalent release build; this is generally rustc -C opt-level=3 -C debuginfo=0. Using rustc on its own will get pretty harsh once you start to include external crates, linking other libs and so on, so for the rest of this post I will focus on using only cargo since it handles a lot of stuff for us in the background, but where it may be instructive I will include equivalent rustc commands.

Building a Library

Rust libraries, and the integration of a Rust lib into C++ (or any other language), are the focus of my project, so let's get started!

The project I’m going to use as an example will use autotools to control compilation, and use both C++ and Rust, with C++ having the main call point, and both languages calling functions plus passing variables to each other.

Start by creating a directory to store the project in, and in that, create a src directory.

In src/ create a main.c with the following content:

#include <stdio.h>

extern void hello_world(); // declare the Rust function
int main(void)
{
    hello_world();
}

This is fairly standard for C; what we're interested in though is the declaration of the Rust function: extern void hello_world(). The extern here tells the compiler that what follows is a declaration only, and not to allocate storage for it, as it will be found elsewhere at link time. In other words: this is declared, but not defined - it is defined somewhere else. In our case, it will be defined in the Rust source, which will then export the symbol (the compiled function definition) at compile time so that it can be linked.

Change into src and create a new Rust project using cargo new --lib rs_hello --name rs_hello. This creates our project under rs_hello, with a Cargo.toml and src/lib.rs, and names it. The source file contains only a simple test to run, and no functions or other code. You can erase the test code or leave it there; it won't affect anything being done in this post, but it is good to learn how Rust tests are built and run.

In src/rs_hello/src/lib.rs add the function that we declared in the C source:

#[no_mangle]
pub extern "C" fn hello_world() {
    println!("hello world!");
}

It's as simple as that, but there are two things to note:

  • #[no_mangle] - this tells the Rust compiler not to mangle the function name, so the symbol is exported as plain hello_world.
  • pub extern "C" - here we're declaring that the function is publicly accessible (pub) and is exported with the C calling convention.
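Calls in the other direction (the "both languages calling each other" part of the project outline) are the mirror image of this. Here is a minimal sketch - not part of the example project's sources - assuming a hypothetical C function c_log() defined somewhere in the C code:

use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // declared here, defined in the C sources and resolved at link time
    fn c_log(msg: *const c_char);
}

pub fn log_from_rust(text: &str) {
    let msg = CString::new(text).expect("text must not contain NUL bytes");
    // calling into foreign code is always unsafe from Rust's point of view
    unsafe { c_log(msg.as_ptr()) }
}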

You can compile and run this right now if you want to; in the project's base directory run:

rustc --crate-type staticlib -o librs_hello.a src/rs_hello/src/lib.rs &&
gcc -o hello src/main.c librs_hello.a -ldl -lrt -lpthread -lgcc_s -lc -lm -lrt -lutil

Running cargo build on a new library project will by default produce a rustlib (.rlib), which is not linkable from external non-Rust code, so open src/rs_hello/Cargo.toml and append to the end:

[lib]
crate-type = ["staticlib"]

Using cargo build in src/rs_hello will produce the static library in src/rs_hello/target/debug by default; to link it with main.c, just prepend that path to librs_hello.a in the gcc command, as shown in the example below.

Note: libraries built with cargo will have lib prepended to their name.
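For example, mirroring the linker flags from the earlier gcc invocation (a rough sketch using the debug build path from above):

gcc -o hello src/main.c src/rs_hello/target/debug/librs_hello.a -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil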

Building with autotools


Now, on to autotools!

We will need two files in the base directory: configure.ac and Makefile.am. The content of configure.ac is:

AC_PREREQ([2.60])

AC_INIT([rust_hello], [0.1])
AM_INIT_AUTOMAKE([1.6 foreign subdir-objects])
m4_ifdef([AM_SILENT_RULES], [
    AM_SILENT_RULES([yes])
])

AC_CANONICAL_HOST

AC_PROG_CC_C99
AM_PROG_CC_C_O

AC_PATH_PROG([CARGO], [cargo], [notfound])
AS_IF([test "$CARGO" = "notfound"], [AC_MSG_ERROR([cargo is required])])

AC_PATH_PROG([RUSTC], [rustc], [notfound])
AS_IF([test "$RUSTC" = "notfound"], [AC_MSG_ERROR([rustc is required])])

LT_INIT

AC_CONFIG_MACRO_DIRS([m4])

AC_CONFIG_FILES([
  Makefile
])

AC_OUTPUT

As far as I can tell (and I’m absolutely not an autotools expert here) this is fairly standard for an ultra basic configure.ac. We’re only going to be focusing on the relevant rust bits however, as that is what makes our build tick.

AC_PATH_PROG([CARGO], [cargo], [notfound]) is a macro (AC_PATH_PROG) that checks whether a program (cargo) exists and stores its path in the variable CARGO; if it doesn't exist it stores notfound in the variable.

AS_IF([test "$CARGO" = "notfound"], [AC_MSG_ERROR([cargo is required])]) tests the variable CARGO and checks whether its content matches "notfound"; if it does, it calls the error-printing macro AC_MSG_ERROR.

The content of Makefile.am is:

ACLOCAL_AMFLAGS = -I m4

RSHELLO_DIR = src/rs_hello
RSHELLO_TARGET = $(RSHELLO_DIR)/target/release

bin_PROGRAMS = hello_rust
hello_rust_SOURCES = src/main.c
hello_rust_LDADD = $(RSHELLO_TARGET)/librs_hello.a
hello_rust_LDFLAGS = -lrt -ldl -lpthread -lgcc_s -lpthread -lc -lm -lrt -lutil

$(RSHELLO_TARGET)/librs_hello.a:
	cd $(srcdir)/$(RSHELLO_DIR); \
	$(CARGO) rustc --release -- \
	-C lto --emit dep-info,link=$(abs_builddir)/$@

clean-local:
	cd $(srcdir)/$(RSHELLO_DIR); cargo clean

Again, a fairly standard layout. bin_PROGRAMS declares the name of our program, and the lines beginning with hello_rust_ declare much of the same stuff that we used for the gcc command above. We haven’t included the rust source on the SOURCES line however since autotools is geared towards compilation of C/C++.

How does it build the rust source then? It looks at

hello_rust_LDADD = $(RSHELLO_TARGET)/librs_hello.a

and sees that it needs librs_hello.a in the src/rs_hello/target/release directory, then looks for the relevant commands to build it if it doesn't exist. That's where the $(RSHELLO_TARGET)/librs_hello.a: rule comes into play. This is a target that make matches against, which basically says "for any file named librs_hello.a in the directory src/rs_hello/target/release, perform the following operations":

  • cd into $(srcdir)/$(RSHELLO_DIR) - srcdir is a variable the generated Makefile sets to the source directory, and RSHELLO_DIR is the variable we set near the top of the file.
  • run cargo (the program path stored in the variable CARGO) with the following arguments:
    • rustc --release - instructs cargo to use its rustc subcommand, which allows us to pass arguments to rustc, and uses the "release" profile.
    • -- arguments to rustc begin.
    • -C lto - this is not a default option in --release mode. lto is "link-time optimization".
    • --emit dep-info,link=$(abs_builddir)/$@ breaks down to:
      • --emit output the following,
      • dep-info - a Makefile-style dependency file describing what the output depends on,
      • link - the compiled artifact itself (here the static library), with the rustlib linked in,
      • =$(abs_builddir)/$@ - output to the build directory (generally the base dir of the source if not set); $@ is an automatic make variable which expands to the target name before the colon, i.e. $(RSHELLO_TARGET)/librs_hello.a

The last block, clean-local:, runs along with the usual clean when you invoke make clean. Since rust and cargo place files in different locations to what autotools expects, we need to clean up manually: this cds into the cargo project and runs cargo clean.

With those two files done, you now need to run autoreconf -si to generate all the files needed. Then run ./configure followed by make.

Congratulations! You've built a Rust library used by a C program, using autotools. So with that groundwork out of the way, let's dive a little deeper.

Types of Libraries

You'll recall that above we had to pass the staticlib option to rustc and add it to the Cargo.toml for use with cargo. This is because Rust builds Rust libraries (.rlib) by default, which are native to Rust only. The format of these is still unstable afaik, and may change between Rust versions. They also include extra metadata for Rust, and don't require the use of unsafe blocks when you want to use functions/data from them. They cannot be used with other languages.

For this reason we need the staticlib option. This produces a static library which contains all of the Rust project's generated code and its upstream dependencies. As such it will not have external dependencies on Rust libraries.

There are other options too!

dylib produces a dynamic Rust-only library. This can be used with other languages at the moment but will eventually be for Rust-only use. The file extension is *.so on Linux. You should probably avoid using this altogether and use either lib for Rust libraries, or one of the below for external use cases.

cdylib is a newer dynamic library output format, introduced in Rust 1.10 specifically for embedding in other languages. It is meant to be linked into binaries that use it at run time, typically via the system's dynamic linker. The file extension is *.so.

staticlib is meant to be compiled and linked into other projects statically - this means it is copied into the binary that uses it at build time. Suitable for embedding in other languages. The file extension is *.a.

lib is the default, and will be whatever the compiler currently recommends for a Rust library.

rlib is a static Rust library.

A small note: if you were to produce a library for use with other Rust projects, you should use the default lib. If you use cdylib or staticlib, Rust projects will need to use unsafe blocks.

Static vs Dynamic Linking

Linking on Linux is typically done using ld, and is the last step of compilation. If you run man ld to view the man page for it, the first sentence of the description states;

ld combines a number of object and archive files, relocates their data and ties up symbol references.

This gives a pretty good idea of what linking is. When building a typical C/C++ program, the compiler will compile each source file to an object file, then as the last step it will invoke ld to tie them all together.

Each declared function or data structure in one source file that is meant to be public to another source file (as in our example above, pub extern "C" fn foo()) is exported and exposed as a symbol. When another source file references this function, the linker looks for the related symbol and links them together.

The way linking is done for static vs dynamic is different.

  • static linking replaces all references to external symbols in a compiled object with the actual code needed at compile time
  • dynamic linking will instead put a reference to the library being linked to in the compiled binary/library, and will not link to it until runtime. A dynamic library can be shared between many programs.

Rust by default statically links all Rust dependencies, including the Rust std library; as in, it copies in the parts of those libraries that are used.

If you create a library using dylib or cdylib, that library is dynamically linkable to other projects, and also static links the Rust std library. Whereas if you create a staticlib, that library is copied in to other projects that use it (along with the Rust library parts it contains).

Rust will, however, dynamically link system libraries such as libc and pthreads. You can statically link system libraries if you use an alternative libc such as musl. Read more here
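For example (a sketch assuming rustup is installed and you are building for a 64-bit Linux host), a fully static build against musl looks like this:

rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl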

Rust and Objects

In Types of Libraries we outlined a few types of libraries that Rust can build - we can also output object files, much like C/C++ compilation does. This can complicate things a bit though, and I won't go into much detail here except to outline it. If you do want to output objects for linking, then you lose the benefit of cargo handling linking for you - this means you need to manually link any Rust libraries you depend on.

When you're dealing with library names such as /usr/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-912d6e6c7cbc93f3.so, well, I don't recommend forgoing cargo unless you really need to. Unusual filename, right? This hash will change with each distribution that compiles the Rust compiler from scratch, so it is a bad idea to hardcode the name into build scripts. Which is why sticking to cargo is a good idea.

Another use-case is using bare metal rust on an embedded controller. Bare metal Rust is Rust code only, no standard lib, no libraries that may require external dependencies - this makes it much easier to deal with linking.

Rust is not ABI stable

Rust does not have a stable ABI as of yet, and may not for some time. What this means for us in terms of linking is that a project that dynamically links the Rust std library will only work with the library that it was compiled with. This shouldn't be an issue with Linux distribution supplied Rust, but if you switch between rustup and the distro-supplied toolchain, it likely won't work with one of them.

Optimization


Now that we have covered what types of libraries there are, let's have a look at ways to optimize Rust.

In all honesty, there isn't that much you need to do - the default --release produces very fast binaries with the following defaults:

  • opt-level = 3
  • debug = false
  • lto = false
  • panic = ‘unwind’

But there are some things you can do such as reducing size via link-time optimization, using a different allocator, and a few other tricks.

Covering how LTO works in detail is well beyond my abilities, but I may be able to adequately simplify it: at compile time the objects produced consist of everything that may be used, e.g. all of a library. For C, the first pass of a linker may find that function foo() is not actually used, and so it is removed from the object; a second pass may find that some condition is always false and so bar() is never called; on a third pass, since fizz() was only being called by bar(), and bar() was removed, fizz() is no longer called and so is removed too.

Using LTO with Rust works similarly to this: it will find all the functions etc. that are never called and remove them - this results in a very nice size drop. One of the differences between Rust and C here is that Rust will warn you that a block of code isn't reachable (the compiler treats it as an error if it is a pattern matching block) and implores you to remove it.
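As a small illustration (not from the GJS code) of the compiler flagging code that can never run:

fn main() {
    let n = 1;
    match n {
        _ => println!("anything"),
        1 => println!("one"), // flagged: unreachable pattern, the _ arm above matches everything
    }
    return;
    println!("never printed"); // flagged: unreachable statement after return
}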

So how do you use LTO? Two ways:

  • pass -C lto to rustc as an arg, or via cargo rustc --release -- -C lto if using cargo
  • or (also for cargo) add a section to the Cargo.toml as follows:
[profile.release]
lto = true

[profile.dev]
lto = true

Currently for the small amount of Rust I have in GJS so far, using LTO reduces the size of libgjs.so from 12mb to 7.7mb - quite a decent saving.

Another way to reduce the final size is with the use of strip - a tool used to remove symbols from a binary/object. Handy also for making reverse-engineering harder (you'll probably never stop Matthew Garrett though).

Running strip on libgjs.so with my Rust code compiled in without LTO reduces this size down to 1.9mb. Using LTO and strip reduces it to 912K.

The usual way to use strip is to remove only the debug symbols, via strip --strip-debug; running this on libgjs.so along with LTO reduced the size from 7.7M to 926K.

This step is typically performed by Linux distributions as part of their packaging process - they strip the debug symbols out to a separate file/s and package these alongside the stripped binary/library. The end user doesn’t require them normally.

You can pass a strip argument to rustc with

rustc -C link-args=-s

If you are using cargo this would be

cargo rustc --release -- -C link-args=-s

The last thing we can try is changing how panics are handled. The default handling for a panic is to include code to unwind the stack to help debugging. We can remove the code for unwinding and just abort, by passing an arg to rustc:

rustc -C panic=abort

or with cargo, add panic = "abort" to the relevant profile section. The saving here isn’t all that much though, ~100K, but this may be useful for embedded devices etc.
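For instance, extending the release profile shown earlier in Cargo.toml would look like this:

[profile.release]
lto = true
panic = "abort"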

Finally

In light of all the testing and getting to grips with autotools and how various bits of the Rust compiler work, I’ve decided for the “Rustify GJS” project to use basically what is covered in the examples.

  • the default args for --release are quite adequate
  • to reduce final size I have used lto
  • stripping is to be left to distributions
  • static linking the rust code in will be best to keep libgjs.so whole.

And one last thing: you can pass global args via the RUSTFLAGS environment variable, such as RUSTFLAGS="-C lto -C panic=abort" cargo build; I will likely switch to this method at some point. Using the RUSTFLAGS env-var also means that Rust crate dependencies use these flags too, whereas without the env-var set they use the Rust defaults.

Please email me if you see anything factually inaccurate that needs correction, or even just better explanations.

Note: Makefiles require the use of actual tabs, not spaces.

TODO: Parallel build fails due to Rust not finishing build before C++ linking.

GNOME.Asia and Engagement update

I've been wanting to write a post on GNOME.Asia and the goings-on with engagement for a while, but never seemed to get the motivation to blog.  :)

GNOME.Asia was an amazing event and I wanted to reach out to the organizers and thank them for the wonderful reception that I received while I was there.  The trip to Chongqing was mostly uneventful other than the fact that every Chinese official was gunning for my battery brick when going through airport security.  After a long layover in Beijing, I landed in Chongqing, met up with Matthias Clasen and proceeded to head to the hotel.

Next day, we went on a wonderful trip to some caves in the local area that Lennart found in Lonely Planet.  We were very lucky to have Jonathan Kang with us to speak the local language as it would have been a challenging trip otherwise.  But we got to see some really interesting statues that were many centuries old.  There was one especially interesting one that showed a Bodhisattva with a thousand hands.  It definitely had a presence!  We came back in time to attend the reception although we were a little late.

I was lucky to have my talk on the first day which allowed me to not have to worry about my talk for the rest of the conference.  This was my first time going to a GNOME.Asia conference and everything was impressive.  First conference I’ve been to where there was a mini-drama with a fire-blower/fire-eater.  That definitely left an impression!

I gave my talk shortly after the intro, and it was well attended and I think people enjoyed it.  Most people who know me know that I tend to get a little energetic while on stage.

For the rest of the conference, I enjoyed going to the English-language talks, meeting with conference attendees, selfies, and everything else.  We had a day for BoFs and of course, we had an engagement one.  Well, Nuritzi and I had an engagement BoF.  :)  Apparently, writing code is still the sexy thing to do.  :-)

The conference did create an impetus to harness the energy in Asia, and not just China, but India, Japan, S. Korea and so forth.  And after the conference, we started working in earnest on organizing to bring in new members from Asia.

We had a lovely party on a ship, and a tour of Chongqing at night, that was really impressive.  Chongqing is really large, like one city the size of the Bay Area and possibly a larger population.  There was much fun to be had and of course more pictures and selfies and the like.

The next day our new friends took us on a walking tour around the city.  The city reminds me a lot of Portland, simply because it was perpetually raining and foggy, which kept the pollution to a minimum.  The food was delicious and very spicy.  I've come to have a love/hate relationship with the szechuan pepper: while I like the spice, I wasn't overly fond of the numbness it causes.  Where the smell of cannabis shows up in the cities on the west coast, in Chongqing it's the aroma of szechuan peppers!

I went shopping on the last day of my trip there and finally the next morning headed back to Denver.  Chinese security confiscated my power brick, which I was quite irate about.  I had that thing for 3 years, and I hated letting it go.  Of course, as luck would have it, I needed it by the time I got out in Denver and had to spend an extra hour charging my phone in order to go home. :(

Post conference, there has been a lot of re-organization to accommodate the Asian members of GNOME engagement.  We have moved to a new time and hopefully we can use some of the amazing talent that our new members have in some of the engagement work.

Thanks to Carlos, we were also able to start using gitlab as a way to project-manage social media, and you'll find that in the past 6 weeks GNOME engagement social media tasks are tracked in issues, and are getting completed.  The side effect is that work done by the engagement team is no longer opaque; instead the entire project can see what the team is doing.  Our engagement internally with the rest of the project has improved markedly.  We hope to continue making progress and improving the engagement team.  Our meetings are half discussions and half work sessions so that we remove items from our todo list.

There has never been a better time to get involved with GNOME engagement.  As a recent blog post by Christian Hergert has underscored, in order to grow the project needs non-coding contributors like project managers, graphic artists, designers, and community managers.  We're also a bit of a counter-culture group compared to the rest of the project.  So if you are interested in the people behind GNOME, the users of our software, or solving humanistic problems, come join us and let's chat!

I would like to finally thank the GNOME Foundation for sponsoring my trip to GNOME.Asia.  I would not have been able to go otherwise.

sponsored by GNOME Foundation

December 29, 2017

So long, Linux Journal

If you don't know, Linux Journal has ceased publication. Unless an investor drops in at the last minute, the LJ website will soon shut down. Thus ends over twenty-three years in Linux and open source publication. That's quite a legacy!

Linux Journal first hit shelves in April 1994. To remind you about those times: that's when Linux reached the 1.0 version milestone. That's also the same year that I started the FreeDOS Project. Elsewhere in technology, Yahoo!, Amazon, and Netscape got their start in 1994. That's also the same year the shows E.R. and Friends first hit TV. That year also brought the movies Pulp Fiction, Forrest Gump, Speed, and Stargate.

In 1994, you most likely connected to the Internet using a dial-up modem. KDE and GNOME were several years away, so the most popular Linux graphical interface in 1994 was FVWM, or the more lightweight TWM. In 1994, you probably ran an 80486 Intel CPU, unless you had upgraded to the recently-released Pentium CPU. In mainstream computing, Microsoft's Windows 3.1 ruled; Windows95 wouldn't come out for another year. In the Apple world, Macs ran MacOS 7.1 and PowerPC CPUs. Apple was strictly a hardware company; no one had heard of iTunes or an iPod.

With that context, we should recognize Linux Journal as having made an indelible mark in computing history. LJ chronicled the new features of Linux, and Linux applications. I would argue that Linux Journal helped raise the visibility of Linux and fostered a kind of Linux ecosystem.

Linux Journal operated in the same way that Linux developers did: LJ encouraged its community to write articles, essays, and reviews for the magazine and website. You didn't do it for the money; I think I received tiny payments for the articles I submitted. Rather, you wrote for LJ for the love of the community. That's certainly why I contributed to Linux Journal. I wanted to share what I had learned about Linux, and hoped others would enjoy my contributions.

So before the Linux Journal website goes dark, I wanted to share a few articles I wrote for them. Here you are:


Update:

Looks like Linux Journal was saved at the last minute by investors! From the article:
In fact, we're more alive than ever, thanks to a rescue by readers—specifically, by the hackers who run Private Internet Access (PIA) VPN, a London Trust Media company. … In addition, they aren't merely rescuing this ship we were ready to scuttle; they're making it seaworthy again and are committed to making it bigger and better than we were ever in a position to think about during our entirely self-funded past.

2017 in review

I began this year in a hedge in Mexico City and immediately had to set off on a 2 day aeroplane trek back to Manchester to make a very tired return to work on the 3rd January. From there things calmed down somewhat and I was geared up for a fairly mundane year but in fact there have been many highlights!

The single biggest event was certainly bringing GUADEC 2017 to Manchester. I had various goals for this such as ensuring we got a GUADEC 2017, showing my colleagues at Codethink that GNOME is a great community, and being in the top 10 page authors on wiki.gnome.org for the year. The run up to the event from about January to July took up many evenings and it was sometimes hard to trade it off with my work at Codethink; it was great working with Allan, Alberto, Lene and Javier though and once the conference actually arrived there was a mass positive force from all involved that made sure it went well. The strangest moment was definitely walking into Kro Bar slightly before the preregistration event was due to start to find half the GNOME community already crammed into the tiny bar area waiting for something to happen. Obviously my experience of organizing music events (where you can expect people to arrive about 2 hours after you want them somewhere) didn't help here.

Codethink provides engineers with a travel budget and a little bit of extra leave for attending conferences; obviously what with GUADEC being in Manchester I didn't make a huge use of that this year, but I did make it to FOSDEM and also to PyConES which took place in the beautiful city of Cáceres. My friend Pedro was part of the organizing team and it was great to watch him running round fighting fires all day while I relaxed and watched the talks (which were mostly all trying to explain machine learning in 30 minutes with varying degrees of success).

Steam powered carriage

Work wise I spent most of my year looking at compilers and build tools, perhaps not my dream job but it's an enjoyable area to work in because (at least in terms of build tools) the state of the art is comically bad. In 10 years we will look back at GNU Autotools in the way we look at a car that needs to be started with a hand crank, and perhaps the next generation of distro packagers will think back in wonder at how their forebears had to individually maintain dependency and configuration info in their different incompatible formats.

BuildStream is in a good state and is about to hit 1.0; it's beginning to get battle tested in a couple of places (one of these being GNOME) which is no doubt going to be a rough ride — I already have a wide selection of performance bottlenecks to be looking at in the new year. But it's already looking like a healthy community and I want to say thanks to everyone who has already got behind the project.

It also seems to have been a great year for Meson; something that has been a long time coming but seems to be finally bringing Free Software build systems into the 21st century. Last year I ported Tracker to build with Meson, and have been doing various ongoing fixes to the new build system — we're not yet able to fully switch away from Autotools, primarily because of issue #2166, and also because of some Tracker test suite failures that seem to only show up with Meson and that we haven't yet dug into fully.

With GUADEC out of the way I managed to spend some time prototyping something I named Tagcloud. This is the next iteration of a concept that I've wanted since more or less forever, that of being able to apply arbitrary tags to different local and online resources in a nice way. On the web this is a widespread concept but for some reason the desktop world doesn't seem to buy into it. Tracker is a key part of this puzzle, as it can deal with many types of content and can actually already handle tags if you don't mind using the commandline, so part of my work on Tagcloud has been making Tracker easy to embed as a subproject. This means I can try new stuff without messing up any session-wide Tracker setup, and it builds on some great work Carlos has been doing to modernize Tracker as well. I've been developing the app in Python, which has required me to fix issues in Tracker's introspection bindings (and GLib's, and GTK+'s … on the whole I find the PyGObject experience pretty good and it's obviously been a massive effort to get this far, but at the same time these teething issues are quite demotivating.) Anyway I will post more about Tagcloud in the new year once some of the ideas are a bit further worked out; and of course it may end up going nowhere at all but it's been nice to actually write a GTK+ app for the first time in ages, and to make use of Flatpak for the first time.

It’s also been a great year for the Flatpak project; and to be honest if it wasn’t for Flatpak I would probably have talked myself out of writing a new app before I’d even started. Previously the story for getting a new app to end users was that you must either be involved or know someone involved in a distro or two so that you can have 2+ year old versions of your app installable through a package manager; or your users have to know how to drive Git and a buildsystem from the commandline. Now I can build a flatpak bundle every time I push to master and link people straight to that. What a world! And did I mention GitLab? I don’t know how I ever lived without GitLab CI and I think that GNOME’s migration to GitLab is going to be *hugely* beneficial for the project.

Looking back it seems I’ve done more programming stuff than I thought I had; perhaps a good sign that you can achieve stuff without sacrificing too much of your spare time.

It’s also been a good year music wise, Manchester continues to have a fantastic music scene which has only got better with the addition of the Old Abbey Taphouse where I in fact spent the last 4 Saturdays in a row. Last Saturday we put on Babar Luck, I saw a great gig of his 10 years ago and have managed to keep missing him ever since but things finally worked out this time. Other highlights have been Paddy Steer, Baghdaddies and a very cold gig we did with the Rubber Duck Orchestra on the outdoor stage on a snowy December evening.

I caught a few gigs by Henge who only get better with time and who will hopefully break way out of Manchester next year. And in September I had the privilege of watching Jeffrey Lewis supported by The Burning Hell in a little wooden hut outside Lochcarron in Scotland, that was certainly a highlight despite being ill and wearing cold shoes.

Lochcarron Treehouse

I didn't actually know much of Scotland until taking the van up there this year; I was amazed that such a beautiful place has been there the whole time, just waiting 400 miles north. This expedition was originally planned to be a bike trip but ended up being a road trip, and having now seen the roads that is probably for the best. However we did manage a great bike trip around the Netherlands and Belgium, the first time I've done a week long bike trip and hopefully the beginning of a new tradition! Last year I did a lot of travel to crazily distant places; it's a privilege to be able to do so but one that I prefer to use sparingly, so it was nice to get around closer to home this year.

All in all a pretty successful year, not straightforward at times but one with several steps in the directions I wanted to head. Let’s see what next year holds 🙂


December 28, 2017

Frogr 1.4 released

Another year goes by and, again, I feel the call to make one more release just before 2017 is over, so here we are: frogr 1.4 is out!

Screenshot of frogr 1.4

Yes, I know what you’re thinking: “Who uses Flickr in 2017 anyway?”. Well, as shocking as this might seem to you, it is apparently not just me who is using this small app, but also another 8,935 users out there issuing an average of 0.22 Queries Per Second every day (19008 queries a day) for the past year, according to the stats provided by Flickr for the API key.

Granted, it may not be a huge number compared to what other online services might be experiencing these days, but for me this is enough motivation to keep the little green frog working and running, and thus worth updating it one more time. Also, I'd argue that these numbers for a niche app like this one (aimed at users of the Linux desktop that still use Flickr to upload pictures in 2017) do not even look too bad, although without more specific data backing this comment this is, of course, just my personal and highly-biased opinion.

So, what’s new? Some small changes and fixes, along with other less visible modifications, but still relevant and necessary IMHO:

  • Fixed integration with GNOME Software (fixed a bug regarding appstream data).
  • Fixed errors loading images from certain cameras & phones, such as the OnePlus 5.
  • Cleaned the code by finally migrating to using g_auto, g_autoptr and g_autofree.
  • Migrated to the meson build system, and removed all the autotools files.
  • Big update to translations, now with more than 22 languages 90% – 100% translated.

Also, this is the first release that happens after having a fully operational centralized place for Flatpak applications (aka Flathub), so I’ve updated the manifest and I’m happy to say that frogr 1.4 is already available for i386, arm, aarch64 and x86_64. You can install it either from GNOME Software (details on how to do it at https://flathub.org), or from the command line by just doing this:

flatpak install --from https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

Also worth mentioning that, starting with Frogr 1.4, I will no longer be updating my PPA at Launchpad. I did that in the past to make it possible for Ubuntu users to have access to the latest release ASAP, but now we have Flatpak, which is a much better way to install and run the latest stable release in any supported distro (not just Ubuntu). Thus, I'm dropping the extra work required to deal with the PPA and flat-out recommending users to use Flatpak or wait until their distro of choice packages the latest release.

And I think this is everything. As usual, feel free to check the main website for extra information on how to get frogr and/or how to contribute to it. Feedback and/or help is more than welcome.

Happy new year everyone!

December 27, 2017

State of Meson in GLib/GStreamer

During the last couple of months I've been learning the Meson build system. Since my personal interests in Open Source Software are around GLib and GStreamer, and they both have Meson and Autotools build systems in parallel, I've set myself the personal goal of listing (and trying to fix) the blocker bugs preventing them from switching to Meson-only. Note that I'm neither a GLib nor a GStreamer maintainer, so it's not my call whether or not they will drop Autotools.

GLIB

I opened bug #790954, a meta-bug depending on all the meson-related bugs I've found that are bad enough that we cannot drop Autotools unless we fix them first. Among those bugs, I've been personally working on these:

Bug 788773 meson does not install correct pc files

pkg-config files for glib/gobject/gio are currently generated from a .pc.in template, but some flags (notably -pthread) are hidden internally in Meson and cannot be written into the .pc file. To fix that bug I had to use Meson's pkg-config generator, which has access to those internal compiler flags. For example the -pthread flag is hidden behind the dependency('threads') object in a meson.build file. Unfortunately such an object cannot be passed to the pkg-config generator, so I opened a pull request making that generator smarter:

  • When generating a pc file for a library, it automatically takes all dependencies and link_with values from that library and adds them to the Libs.private and Requires.private fields.
  • Extra dependencies (such as ‘threads’) can be added explicitly if needed. One common case is explicitly adding a public dependency that the generator would have added in private otherwise.

Bug 786796 gtk-doc build fails with meson

Pretty easy one: gobject's API documentation needs an extra header file from a non-standard location to be passed when compiling gtkdoc-scangobj. I made a patch adding an include_directories argument to the gnome.gtkdoc() method and used it in glib.

Bug 790837 Meson: missing many configure options

Compared to Autotools’ configure, glib’s meson build system was missing many build options. Also existing options were not following the GNOME guideline. tl;dr: we want foo_bar instead of enable-foo-bar, and we want to avoid automatic options as much as possible.

Now configure options are on par with Autotools, except for the missing --with-thread option, which has a patch pending on bug #784995.

GStreamer

Both static and shared library

For GStreamer static builds are important. The number of shared libraries an Android application can link to is limited, and dlopen of plugins is forbidden on IOS (if I understood correctly). On those platforms GStreamer is built as one big shared library that statically link all its dependencies (e.g. glib).

Autotools is capable of generating both static and shared libraries while compiling C files only once. Doing so with Meson is possible but requires unnecessary extra work. I created a pull request that adds a both_library() method to meson, and adds a global project option that turns all library() calls into building both shared and static.

Static build of gio modules

This one is not directly related to Meson, but while working on static builds, I noticed that GStreamer patched glib-networking to be able to build it statically. Their patch never made it upstream and it has one big downside: everything needs to be built twice, once for static and once for dynamic. GStreamer itself recently fixed its plugins' ABI to be able to do a single compile and produce both shared and static libraries.

The crux is that you cannot have the same symbol defined in every plugin. Currently GIO modules must all define the g_io_module_load/unload/query() symbols, which would clash if you tried to statically link more than one GIO module. I wrote patches for gio and glib-networking to rename those symbols so they are unique. The symbol name is derived from the shared module's filename. For example, when gio loads the libgiognutls.so extension it will remove the "libgio" prefix and ".so" suffix to get the "gnutls" plugin name, then look up g_io_gnutls_load/unload/query() symbols instead (falling back to the old names if not found).

The second difficulty is that GIO plugins use G_DEFINE_DYNAMIC_TYPE, which needs a GTypeModule to be able to create its GType. When those plugins are statically linked we don't have any GTypeModule object. I made a patch to allow passing NULL in that case, turning a G_DEFINE_DYNAMIC_TYPE into a static GType.

Bug 733067 cerbero: support python3

Meson being python3-only and Cerbero python2-only, if we start building meson projects in cerbero it means we require both pythons to be installed too. It also adds problems with the PYTHONPATH environment variable because it cannot differentiate between 2 and 3 (seriously, why is there no PYTHONPATH3?).

I ran the 2to3 script against the whole cerbero codebase and then fixed a few remaining bugs manually. All in all it was pretty easy; the most difficult part is actually testing all build variants (linux, osx, windows, cross-android, cross-windows). It's waiting for 1.14 to be released before this gets merged into master.

Bug 789316 Add Meson support in cerbero

Cerbero already had recipes to build meson and ninja, but they were broken. I made patches to fix that and also to add the code needed to build recipes using meson. It also makes use of meson directly to build gst-transcoder instead of the wrapper configure script and makefile it ships. Later, more recipes will be able to be converted to Meson (e.g. glib). This is blocked on the python3 port of cerbero.

Choose between static or shared library

One use case Olivier Crête described on Meson issue #2765 is he wants to make the smallest possible build of a GStreamer application, for IoT. That means static link everything into the executable (e.g. GStreamer, glib) but dynamic link on a few libraries provided by the platform (e.g. glibc, openssl).

With Autotools he was doing that using .la files: libraries built inside cerbero have a ".la" file, and libraries provided by the platform don't. Autotools has a mode to statically link the former and dynamically link the latter; meson doesn't.

I think the fundamental question in meson is what to do when a dependency can be provided by both a static and a shared library. Currently meson takes the decision for you and always uses the shared library, unless you explicitly set static: true when you declare your dependency, with no project-wide switch.

In this pull request I fixed this by adding 2 global options:

  • default_link: Tells whether we prefer static or shared when both are available.
  • static_paths: List of path prefixes where it is allowed to use static
    libraries. By default it has "/", which means all paths are allowed. It can be set to the path where cerbero built GStreamer (e.g. /home/…) and it will statically link only those, using shared libraries from /usr/lib for libraries not built within cerbero.

Conclusion

There is a long road ahead before getting the meson build system on par with Autotools for GLib and GStreamer. But the bugs and missing features are relatively easy to fix. The Meson code base is easy and pleasant to hack on, unlike the m4 macros I've never understood in the past 10 years I've been writing Autotools projects.

I think dropping Autotools from GLib is a key milestone. If we can achieve that, it proves that all the weird use-cases people have been relying on can be done with Meson.

I’ve been working on this on my personal time and on Collabora’s “2h/week for personal projects” policy. I’ll continue working on that goal when possible.

Adding tags to my jekyll website

This iteration of the olea.org website uses the Jekyll static website generator. From time to time I add some features to the configuration. This time I wanted to add tags support to my posts. After a quick search I found jekyll-tagging. Getting it working was relatively easy, although if you are not into Ruby you can misconfigure the gem dependencies, as I did. And to add some value to this post I'm sharing some tips that are not written in the project readme file.

First, I added a /tag/ page with the cloud of used tags, in the form of a tag/index.html file with this content:

---
layout: page
permalink: /tag/
---

<div class="tag-cloud" id="tag-cloud">
  <a href="/tag/%40firma/" class="set-1">@firma</a> <a href="/tag/akademy/" class="set-1">Akademy</a> <a href="/tag/alepo/" class="set-1">Alepo</a> <a href="/tag/almeria/" class="set-2">Almería</a> <a href="/tag/android/" class="set-1">Android</a> <a href="/tag/barcelona/" class="set-1">Barcelona</a> <a href="/tag/bolivia/" class="set-1">Bolivia</a> <a href="/tag/cacert/" class="set-1">CAcert</a> <a href="/tag/canarias/" class="set-1">Canarias</a> <a href="/tag/centos/" class="set-1">CentOS</a> <a href="/tag/ceres/" class="set-1">Ceres</a> <a href="/tag/chronojump/" class="set-1">ChronoJump</a> <a href="/tag/cuba/" class="set-1">Cuba</a> <a href="/tag/cubaconf/" class="set-1">CubaConf</a> <a href="/tag/epf/" class="set-1">EPF</a> <a href="/tag/fnmt/" class="set-1">FNMT</a> <a href="/tag/fosdem/" class="set-1">FOSDEM</a> <a href="/tag/fudcon/" class="set-1">FUDCon</a> <a href="/tag/factura-e/" class="set-1">Factura-e</a> <a href="/tag/fedora/" class="set-3">Fedora</a> <a href="/tag/flock/" class="set-1">Flock</a> <a href="/tag/fuerteventura/" class="set-1">Fuerteventura</a> <a href="/tag/gdg/" class="set-1">GDG</a> <a href="/tag/gnome/" class="set-2">GNOME</a> <a href="/tag/gnome-hispano/" class="set-1">GNOME-Hispano</a> <a href="/tag/guadec/" class="set-1">GUADEC</a> <a href="/tag/galicia/" class="set-1">Galicia</a> <a href="/tag/geocamp/" class="set-1">GeoCamp</a> <a href="/tag/google/" class="set-1">Google</a> <a href="/tag/guademy/" class="set-1">Guademy</a> <a href="/tag/hacklab_almeria/" class="set-1">HackLab_Almería</a> <a href="/tag/hispalinux/" class="set-1">Hispalinux</a> <a href="/tag/ia/" class="set-1">IA</a> <a href="/tag/ibm/" class="set-1">IBM</a> <a href="/tag/kde/" class="set-1">KDE</a> <a href="/tag/kompozer/" class="set-1">Kompozer</a> <a href="/tag/l10n/" class="set-1">L10N</a> <a href="/tag/la_coruna/" class="set-1">La_Coruña</a> <a href="/tag/la_paz/" class="set-1">La_Paz</a> <a href="/tag/la_rioja/" class="set-1">La_Rioja</a> <a href="/tag/linuxtag/" class="set-1">LinuxTag</a> <a href="/tag/lucas/" class="set-1">LuCAS</a> <a href="/tag/lugo/" class="set-1">Lugo</a> <a href="/tag/mdd/" class="set-1">MDD</a> <a href="/tag/madrid/" class="set-1">Madrid</a> <a href="/tag/microsoft/" class="set-1">Microsoft</a> <a href="/tag/mono/" class="set-1">Mono</a> <a href="/tag/mexico/" class="set-1">México</a> <a href="/tag/nueva_york/" class="set-1">Nueva_York</a> <a href="/tag/ocsp/" class="set-1">OCSP</a> <a href="/tag/odf/" class="set-1">ODF</a> <a href="/tag/osl_unia/" class="set-1">OSL_UNIA</a> <a href="/tag/osor.eu/" class="set-1">OSOR.eu</a> <a href="/tag/oswc/" class="set-1">OSWC</a> <a href="/tag/omegat/" class="set-1">OmegaT</a> <a href="/tag/openid/" class="set-1">OpenID</a> <a href="/tag/openmind/" class="set-1">Openmind</a> <a href="/tag/pycones/" class="set-1">PyConES</a> <a href="/tag/renfe/" class="set-1">Renfe</a> <a href="/tag/scfloss/" class="set-1">SCFLOSS</a> <a href="/tag/soos/" class="set-3">SOOS</a> <a href="/tag/ssl/" class="set-1">SSL</a> <a href="/tag/sonic_pi/" class="set-1">Sonic_Pi</a> <a href="/tag/supersec/" class="set-1">SuperSEC</a> <a href="/tag/superlopez/" class="set-1">Superlópez</a> <a href="/tag/tldp-es/" class="set-1">TLDP-ES</a> <a href="/tag/ue/" class="set-1">UE</a> <a href="/tag/vpn/" class="set-1">VPN</a> <a href="/tag/valencia/" class="set-1">Valencia</a> <a href="/tag/x509/" class="set-1">X509</a> <a href="/tag/yorokobu/" class="set-1">Yorokobu</a> <a href="/tag/zaragoza/" class="set-1">Zaragoza</a> <a href="/tag/anotaciones/" 
class="set-1">anotaciones</a> <a href="/tag/calidad/" class="set-1">calidad</a> <a href="/tag/ciencia_abierta/" class="set-1">ciencia_abierta</a> <a href="/tag/conferencia/" class="set-3">conferencia</a> <a href="/tag/congreso/" class="set-2">congreso</a> <a href="/tag/correo-e/" class="set-1">correo-e</a> <a href="/tag/cultura/" class="set-1">cultura</a> <a href="/tag/docker/" class="set-1">docker</a> <a href="/tag/ensayo/" class="set-1">ensayo</a> <a href="/tag/entrevista/" class="set-1">entrevista</a> <a href="/tag/filosofia/" class="set-1">filosofía</a> <a href="/tag/flatpak/" class="set-1">flatpak</a> <a href="/tag/fpga_wars/" class="set-1">fpga_wars</a> <a href="/tag/git/" class="set-1">git</a> <a href="/tag/gvsig/" class="set-1">gvSIG</a> <a href="/tag/hardware/" class="set-1">hardware</a> <a href="/tag/historia/" class="set-1">historia</a> <a href="/tag/innovacion/" class="set-1">innovación</a> <a href="/tag/interoperabilidad/" class="set-1">interoperabilidad</a> <a href="/tag/jekyll/" class="set-1">jekyll</a> <a href="/tag/laptop/" class="set-1">laptop</a> <a href="/tag/linux/" class="set-1">linux</a> <a href="/tag/micro-educacion/" class="set-1">micro-educación</a> <a href="/tag/migas/" class="set-1">migas</a> <a href="/tag/node.js/" class="set-1">node.js</a> <a href="/tag/opensource/" class="set-5">opensource</a> <a href="/tag/p2p/" class="set-1">p2p</a> <a href="/tag/politica/" class="set-1">política</a> <a href="/tag/procomunes/" class="set-1">procomunes</a> <a href="/tag/propiedad_intelectual/" class="set-1">propiedad_intelectual</a> <a href="/tag/publciacion/" class="set-1">publciación</a> <a href="/tag/publicacion/" class="set-2">publicación</a> <a href="/tag/revolucion_digital/" class="set-2">revolución_digital</a> <a href="/tag/seguridad/" class="set-1">seguridad</a> <a href="/tag/servicios/" class="set-1">servicios</a> <a href="/tag/software/" class="set-3">software</a> <a href="/tag/sofware/" class="set-1">sofware</a> <a href="/tag/sostenibilidad/" class="set-1">sostenibilidad</a> <a href="/tag/video/" class="set-1">vídeo</a> <a href="/tag/web/" class="set-1">web</a> <a href="/tag/web-semantica/" class="set-1">web-semántica</a> <a href="/tag/etica/" class="set-1">ética</a>
</div>

Compared to the jekyll-tagging examples I only use the tag cloud on that /tag/ page and not on the individual tag pages, because it's a bit annoying when there are too many tag words.

And second, probably more interesting: showing the post's tags in the HTML page:

<p class="post-meta">  tags:  <a href="/tag/jekyll/" rel="tag">jekyll</a> </p>

This is relevant because the tagging readme example uses {{ post | tags }} but to work inside the post page you should use {{ page | tags }}.

Yeah, this is not a great post, but maybe it can save you some time if you're adding jekyll-tagging to your website.

My journey to Rust

As most folks who know me already know, I've been in love with the Rust language for a few years now, and in the last year I've been actively coding in Rust. I wanted to document the journey of how I came to love this programming language, in the hope that it will help people see the value Rust brings to the world of software; and if not, it would be nice to have my reasons documented for my own sake.

When I started my professional career as a programmer 16 years ago, I knew some C, C++, Java and a bit of x86 assembly, but it didn't take long before I completely forgot most of what I knew of C++ and Java, and completely focused on C. There were a few different reasons that contributed to that:
  • Working with very limited embedded systems (I'm talking 8051) at that time, I quickly became obsessed with performance and C was my best bet if I didn't want to write all my code in assembly.
  • Shortly before I graduated, I got involved in GStreamer project and became a fan of GNOME and Gtk+, all of which at that time was in C. Talking to developers of these projects (who at that time seemed like gods), I learnt how C++ is a bad language and why they write everything in C instead.
  • A year after graduation, I developed a network traffic shaping solution and the core of it was a Linux kernel module, which as you know is almost always done in C. Some years later, I also wrote some device drivers for Linux.
The more C code I wrote over the years, the more I developed this love/hate relationship with it. I just loved the control C gave me but hated the huge responsibility it came with. With experience, I became good at avoiding common mistakes in C, but nobody is perfect and if you can make a mistake, you eventually will. Another reason C didn't seem perfect to me was the lack of high-level constructs in the language itself. Copy&pasting boilerplate to write simple GObjects is not something most people enjoy. You end up not organising your code in the best way, just to spare yourself the trouble of having to write GObjects.

So I've been passively and sometimes actively seeking a better programming language for more than a decade now. I got excited about a few different ones over the years but there was always something very essential missing. The first language I got into was Lisp/Scheme (more specifically Guile) but the lack of type declarations soon started to annoy me a lot. I later felt the same about all scripting languages, e.g. Python. Don't get me wrong, Python is a pretty awesome language for specific uses (e.g. writing tests, simple apps, quick prototyping, etc.) but with the lack of type declarations, any project larger than 1000 LOC can quickly become hard to maintain (at least it did for me).

Because of my love for strong typing, C# and Java did briefly attract me too. Not having to care about memory management in most cases not only helps developers focus on the actual problems they are solving, it indirectly allows them to avoid making very expensive mistakes with memory management. However, if the developer is not managing the memory, it's the machine doing it, and in the case of these languages it does that at run time and not at compile time. As a C developer and a big hater of waste in general, that is very hard to be convinced of as a good idea.

There was another big problem with all these high-level languages: you can't nicely glue them with the C world. Sure, you can use libraries written in C from them, but the other way around is not a viable option (even if possible). That's why you'll find GNOME apps written in all these languages but you will not find any libraries written in them.

Along came Vala


So along came Vala, which offered features that at that time (2007-2008) were the most important to me:
  • It is a strongly-typed language.
  • It manages the memory for you in most cases but without any run time costs.
  • It's high-level language so you avoid a lot of boilerplate.
  • GNOME was and still is the main target platform of Vala.
  • It compiled to C, so you can write libraries in it and use them from C code as if they were written in C. Because of GObject-introspection, this also means you can use them from other languages too.
Those who know me will agree that I was a die-hard (I'm writing this on Christmas day so that reference was mandatory I guess) fan of Vala for a while. I wrote two projects in Vala and given what I knew then I think it was the right decision. Some people will be quick to point out specific technical issues with Vala but I think those could have been helped. There were two other reasons I ultimately gave up on Vala. The first one was that general interest in it started to decline after Nokia stopped funding projects using Vala, and so did its development.

 

Hello Rust


But the main reason for giving up was that I saw something better finally becoming a viable option (with its 1.0 release) and gaining adoption in many communities, including GNOME. While Vala had the good qualities I mentioned above, Rust offered even more:
  • Firstly, the success of Rust is not entirely dependent on one very specific project or a tiny group of people, even if until now most of the development has come from one company. Every month you hear of more communities and companies starting to depend on Rust, and that ensures its success even if Mozilla were to go down (not that I think it's likely) or stopped working on it, i.e. "it's too big to fail". If we compare to Vala, the community is a lot bigger. There are conferences and events happening around the world that are entirely focused on Rust and there are books written on Rust. Vala never came anywhere remotely close to that level of popularity.

    When I would mention Vala in job interviews, interviewers would typically have no idea what I was talking about, but when I mention Rust, the typical response is "Oh yes, we are interested in trying that out in our company".
  • While Vala is already a safer language than C & C++, you still have null-pointer dereferencing and some other unsafe possibilities. With safety being one of the main focuses of the language design, Rust will not allow you to build unsafe code unless you mark it as such, and even then your possibilities of messing up are limited. Marking unsafe code as such makes it much easier to find the source of any issues you might have. Moreover, you usually only write unsafe code to interface with the unsafe C world.

    This is a very important point in my opinion. I really do not want to live in a world where simple human errors are allowed to cause disasters.
Admittedly, there are some benefits of Vala over Rust:
  • Ability to easily write GObjects.
  • Creating shared libraries.
However, some people have been working on the former, and the latter is already possible with some compromises and tricks.

 

Should we stop writing C/C++ code?


Ideally? Yes! Most definitely, yes. Practically speaking, that is not an option for most existing C/C++ projects out there. I can only imagine the huge amount of resources needed for porting large projects, let alone training existing developers on Rust. Having said that, I'd urge every developer to at least seriously consider writing all new code in Rust rather than C/C++.

Especially if you are writing safety-critical software (people implementing self-driving cars, militaries and space agencies, I'm looking at you!), laziness and mental inertia are not valid reasons to continue writing potentially unsafe code, no matter how smart and good at C/C++ you think you are. You will eventually make a mistake and when you do, lives will be at stake. Think about that please.

 

Conclusion


I am excited about Rust and I'm hopeful the future is much safer thanks to people behind it. Happy holidays and have a fun and safe 2018!

December 26, 2017

Creating an USB image that boots to a single GUI app from scratch

Every now and then you might want or need to create a custom Linux install that boots from a USB stick, starts a single GUI application and keeps running it until the user turns off the power. As an example, at a former workplace I created an application for downloading firmware images from an internal server and flashing them. The idea was that even non-technical people could walk up to the computer, plug in their device via USB and push a button to get it flashed.

Creating your own image based on the latest stable Debian turns out to be relatively straightforward, though there are a few pitfalls. The steps are roughly the following:
  1. Create a Debian bootstrap install
  2. Add dependencies of your program and things like X, Network Manager etc
  3. Install your program
  4. Configure the system to automatically login root on boot
  5. Configure root to start X upon login (but only on virtual terminal 1)
  6. Create an .xinitrc to start your application upon X startup
Information on creating a bootable Debian live image can easily be found on the Internet. Unfortunately information on setting up the boot process is not as easy to find, but is instead scattered all over the place. A lot of documentation still refers to the sysvinit way of doing things that won't work with systemd. Rather than try to write yet another blog post on the subject I instead created a script to do all that automatically. The code is available in this Github repo. It's roughly 250 lines of Python.
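To give a rough idea of what the boot-configuration half of such a script does, here is a minimal, hypothetical Python sketch. The paths, the application name and the debootstrap invocation are illustrative only; the real createimage.py in the repository linked above is the authoritative version.

import os
import subprocess

APP = '/usr/bin/myapp'          # hypothetical application to launch
MIRROR = 'http://deb.debian.org/debian'

def bootstrap(rootdir, suite='stable'):
    # Step 1: create the Debian bootstrap install inside rootdir.
    subprocess.check_call(['debootstrap', suite, rootdir, MIRROR])

def configure_autologin(rootdir):
    # Step 4: override getty on tty1 so root logs in automatically (systemd).
    dropin = os.path.join(rootdir, 'etc/systemd/system/getty@tty1.service.d')
    os.makedirs(dropin, exist_ok=True)
    with open(os.path.join(dropin, 'override.conf'), 'w') as f:
        f.write('[Service]\n'
                'ExecStart=\n'
                'ExecStart=-/sbin/agetty --autologin root --noclear %I $TERM\n')

def configure_x_startup(rootdir):
    # Steps 5 and 6: start X only on virtual terminal 1, run the app from .xinitrc.
    with open(os.path.join(rootdir, 'root/.profile'), 'a') as f:
        f.write('if [ "$(tty)" = "/dev/tty1" ]; then startx; fi\n')
    with open(os.path.join(rootdir, 'root/.xinitrc'), 'w') as f:
        f.write('exec %s\n' % APP)

Steps 2 and 3 (installing the dependencies and the application itself) would typically be chrooted apt-get calls on top of this.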

Using it is simple: insert a fresh USB stick in the machine and see what device name it is assigned to. Let's assume it is /dev/sdd. Then run the installer:

sudo ./createimage.py /dev/sdd

Once the process is complete, you can boot any computer with the USB stick to see this:


This may not look like much but the text in the top left corner is in fact a PyGTK program. The entire thing fits in a 226 MB squashfs image and takes only a few minutes to create from scratch. Expanding the program to have the functionality you want is then straightforward. The Debian base image takes care of all the difficult things like hardware autodetection, network configuration and so on.

Problems and points of improvement

The biggest problem is that when booted like this the mouse cursor is invisible. I don't know why. All I could find were other people asking about the same issue but no answers. If someone knows how to fix this, patches are welcome.

The setup causes the root user to autologin on all virtual terminals, not just #1.

If you need to run stuff like PulseAudio or any other thing that requires a full session, you'll probably need to install a full DE session and use its kiosk mode.

This setup runs as root. This may be good. It may be bad. It depends on your use case.

For more complex apps you'd probably want to create a DEB package and use it to install dependencies rather than hardcoding the list in the script as is done currently.

December 24, 2017

GNOME.Asia Summit 2017





It's always a great time to join GNOME.Asia Summit.


This is my 7th year with GNOME.Asia, and it is also the 10th GNOME.Asia Summit.


Thanks to the local team for working so hard on the GNOME.Asia Summit.


We had a very good host in the local team.




I love the idea of an "open source market": open source and freeware bring everyone together to share with each other. :)




Thanks to all the speakers for giving us talks on GNOME and open source.




Thanks to the local team for warming our hearts on the first night with 国窖1573.




We had a very professional cookie this year.




Thanks to the professors from the universities for a very good panel discussion, and thanks to Emily Chen for hosting it.
It's important to get support from universities when we want to promote open source and freeware.




This year was also GNOME's 20th anniversary, and the local team arranged a great party for it.






At the end of GNOME.Asia Summit 2017, I could see the passion of the local team and the young GNOME contributors.



To me, it’s my pleasure to work with great local team every year and get more friends with GNOME.



Finally, I want to thank the GNOME Foundation and all our sponsors for supporting us.


<(_   _)>

See you at the next GNOME.Asia Summit!


All pictures are from the GNOME.Asia Summit 2017 Flickr group
( https://www.flickr.com/groups/gnomeasia2017/pool  )


December 23, 2017

LinuXatUNI held last meeting of the year

The local Linux community in Lima, Peru held its last meeting of the year today, sharing a breakfast. Peruvians usually have "chocolatada" (made with chocolate and milk) with panetón for the Christmas holidays, and we are no exception. Thanks to the Linux Foundation we have new jackets, scarves and vests branded with the Linux Foundation logo.

After having our breakfast, instead of hacking, we did some "Linux" group dynamics to strengthen the relationships among the participants. They are students from different universities: PUCP, UNMSM, UNI, UTP and UNAC in the picture! 🙂

The table games were followed by an exercise in physical contact and coordination as a group. We needed a big space for the game, so we had no other choice than the street. Thanks so much to all the students that have participated in LinuXatUNI during this year and in previous rounds. Special thanks to the students from UNMSM, Martin Vuelta and Fiorella Effio, for their support during this year, as well as Toto, Solanch and Leyla Marcelo for her work as a designer. Another thanks to the PUCP students who have been helping us for four years in a row: Giohanny Falla and Fabian Orccon 😀 I am extremely grateful for the support of the Linux Foundation, GNOME, Fedora, BacktrackAcademy and the LinuXatUNI work members in reaching out to Linux newcomers.

 You can see more pictures here!


December 22, 2017

GStreamer Rust bindings release 0.10.0 & gst-plugin release 0.1.0

Today I’ve released version 0.10.0 of the Rust GStreamer bindings, and after a journey of more than 1½ years the first release of the GStreamer plugin writing infrastructure crate “gst-plugin”.

Check the repositories¹² of both for more details, the code and various examples.

GStreamer Bindings

Some of the changes since the 0.9.0 release were already outlined in the previous blog post, and most of the other changes were also things I found while writing GStreamer plugins. For the full changelog, take a look at the CHANGELOG.md in the repository.

Other changes include

  • I went over the whole API in the last days, added any missing things I found, simplified API as it made sense, changed functions to take Option<_> if allowed, etc.
  • Bindings for using and writing typefinders. Typefinders are the part of GStreamer that try to guess what kind of media is to be handled based on looking at the bytes. Especially writing those in Rust seems worthwhile, considering that basically all of the GIT log of the existing typefinders consists of fixes for various kinds of memory-safety problems.
  • Bindings for the Registry and PluginFeature were added, as well as fixing the relevant API that works with paths/filenames to actually work on Paths
  • Bindings for the GStreamer Net library were added, allowing you to build applications that synchronize their media over the network by using PTP, NTP or a custom GStreamer protocol (for which there also exists a server). This could be used for building video walls, systems recording the same scene from multiple cameras, etc. and provides (depending on network conditions) better than 1 ms synchronization between devices.

Generally, this is something like a “1.0” release for me now (due to depending on too many pre-1.0 crates this is not going to be 1.0 anytime soon). The basic API is all there and nicely usable now and hopefully without any bugs, the known-missing APIs are not too important for now and can easily be added at a later time when needed. At this point I don’t expect many API changes anymore.

GStreamer Plugins

The other important part of this announcement is the first release of the “gst-plugin” crate. This provides the basic infrastructure for writing GStreamer plugins and elements in Rust, without having to write any unsafe code.

I started experimenting with using Rust for this more than 1½ years ago, and while a lot of things have changed in that time, this release is a nice milestone. In the beginning there were no GStreamer bindings and I was writing everything manually, and there were also still quite a few pieces of code written in C. Nowadays everything is in Rust and using the automatically generated GStreamer bindings.

Unfortunately there is no real documentation for any of this yet, there’s only the autogenerated rustdoc documentation available from here, and various example GStreamer plugins inside the GIT repository that can be used as a starting point. And various people already wrote their GStreamer plugins in Rust based on this.

The basic idea of the API is however that everything is as Rust-y as possible. Which might not be too much due to having to map subtyping, virtual methods and the like to something reasonable in Rust, but I believe it’s nice to use now. You basically only have to implement one or more traits on your structs, and that’s it. There’s still quite some boilerplate required, but it’s far less than what would be required in C. The best example at this point might be the audioecho element.

Over the next days (or weeks?) I’m not going to write any documentation yet, but instead will write a couple of very simple, minimal elements that do basically nothing and can be used as starting points to learn how all this works together. And will write another blog post or two about the different parts of writing a GStreamer plugin and element in Rust, so that all of you can get started with that.

Let’s hope that the number of new GStreamer plugins written in C is going to decrease in the future, and maybe even new people who would’ve never done that in C, with all the footguns everywhere, can get started with writing GStreamer plugins in Rust now.

Denoising Autoencoder as TensorFlow estimator

I recently started to use Google's deep learning framework TensorFlow. Since version 1.3, TensorFlow includes a high-level interface inspired by scikit-learn. Unfortunately, as of version 1.4, only 3 different classification and 3 different regression models implementing the Estimator interface are included. To better understand the Estimator interface, the Dataset API, and the components in tf-slim, I started to implement a simple Autoencoder and applied it to the well-known MNIST dataset of handwritten digits. This post is about my journey and is split into the following sections:

  1. Custom Estimators
  2. Autoencoder network architecture
  3. Autoencoder as TensorFlow Estimator
  4. Using the Dataset API
  5. Denoising Autoencoder

I will assume that you are familiar with TensorFlow basics. The full code is available at https://github.com/sebp/tf_autoencoder.

Estimators

The tf.estimator.Estimator is at the heart of TensorFlow's high-level interface and is similar to Keras' Model API. It hides most of the boilerplate required to train a model: managing Sessions, writing summary statistics for TensorBoard, or saving and loading checkpoints. An Estimator has three main methods: train, evaluate, and predict. Each of these methods requires a callable input function as its first argument that feeds the data to the estimator (more on that later).

Custom estimators

You can write your own custom model implementing the Estimator interface by passing a function returning an instance of tf.estimator.EstimatorSpec as first argument to tf.estimator.Estimator.

def model_fn(features, labels, mode):
    …
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
        loss=total_loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops)

The first argument – mode – is one of tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL or tf.estimator.ModeKeys.PREDICT and determines which of the remaining values must be provided.

In TRAIN mode:

  • loss: A Tensor containing a scalar loss value.
  • train_op: An Op that runs one step of training. We can use the return value of tf.contrib.layers.optimize_loss here.

In EVAL mode:

  • loss: A scalar Tensor containing the loss on the validation data.
  • eval_metric_ops: A dictionary that maps metric names to Tensors of metrics to calculate, typically, one of the tf.metrics functions.

In PREDICT mode:

  • predictions: A dictionary that maps key names of your choice to Tensors containing the predictions from the model.

An important difference to the Estimators included with TensorFlow is that we need to call the relevant tf.summary functions in model_fn ourselves. However, the Estimator will take care of writing the summaries to disk so we can inspect them in TensorBoard.
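For example, a summary op added inside model_fn might look like this (a sketch of mine, not from the accompanying repository; probs is the sigmoid output defined further below, and the 28x28 reshape is MNIST-specific):

# Summaries are ordinary graph ops; the Estimator flushes them to model_dir.
tf.summary.image('reconstruction',
                 tf.reshape(probs, [-1, 28, 28, 1]),
                 max_outputs=3)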

Autoencoder model

Autoencoder architecture

The Autoencoder model is straightforward; it consists of two major parts: an encoder and a decoder. The encoder has an input layer (28*28 = 784 dimensions in the case of MNIST) and one or more hidden layers, decreasing in size. In the decoder, we reverse the operations of the encoder by blowing the output of the smallest hidden layer up to the size of the input (optionally, with hidden layers of increasing size in-between). The loss function computes the difference between the original image and the reconstructed image (the output of the decoder). Common loss functions are mean squared error and cross-entropy.
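As a rough sketch, those two loss choices would look like this in TensorFlow 1.x (labels is the original image, logits the raw decoder output and probs its sigmoid, as defined later in this post):

# Mean squared error on the reconstructed pixel intensities ...
mse_loss = tf.losses.mean_squared_error(labels, probs)
# ... or cross-entropy on the raw logits (the variant used below).
xent_loss = tf.losses.sigmoid_cross_entropy(labels, logits)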

To construct the encoder network, we specify a list containing the number of hidden units for each layer and (optionally) add dropout layers in-between:

import tensorflow as tf
slim = tf.contrib.slim

def encoder(inputs, hidden_units, dropout, is_training):
    net = inputs
    for num_hidden_units in hidden_units:
        net = tf.contrib.layers.fully_connected(
            net, num_outputs=num_hidden_units)
        if dropout is not None:
            net = slim.dropout(net, is_training=is_training)
        add_hidden_layer_summary(net)
    return net

where add_hidden_layer_summary adds a histogram of the activations and the fraction of non-zero activations to be displayed in TensorBoard. The latter is particularly useful when debugging networks with rectified linear units (ReLU). If too many hidden units output 0 early during optimization, the model won't be able to learn anymore, in which case one would typically lower the learning rate or choose a different activation function.
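The helper itself is not shown in this post; a minimal version might look like this (my own sketch, assuming tf.nn.zero_fraction for the zero/non-zero statistic):

def add_hidden_layer_summary(net):
    # Histogram of activations plus how many units are not stuck at zero,
    # which helps spotting dying ReLUs in TensorBoard.
    name = net.op.name
    tf.summary.histogram(name + '/activation', net)
    tf.summary.scalar(name + '/nonzero_fraction',
                      1.0 - tf.nn.zero_fraction(net))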

The network of the decoder is almost identical; we just explicitly use a linear activation function (activation_fn=None) and no dropout in the last layer:

def decoder(inputs, hidden_units, dropout, is_training):
    net = inputs
    for num_hidden_units in hidden_units[:-1]:
        net = tf.contrib.layers.fully_connected(
            net, num_outputs=num_hidden_units)
        if dropout is not None:
            net = slim.dropout(net, is_training=is_training)
        add_hidden_layer_summary(net)
 
    net = tf.contrib.layers.fully_connected(net, hidden_units[-1],
                                            activation_fn=None)
    tf.summary.histogram('activation', net)
    return net

You may have noticed that we did not specify any activation function so far. Thanks to TensorFlow's arg_scope context manager, we can easily set the activation function for all fully connected layers. At the same time we set an appropriate weight initializer and (optionally) use weight decay:

def autoencoder(inputs, hidden_units, activation_fn, dropout, weight_decay, mode):
    is_training = mode == tf.estimator.ModeKeys.TRAIN
 
    weights_init = slim.initializers.variance_scaling_initializer()
    if weight_decay is None:
        weights_reg = None
    else:
        weights_reg = tf.contrib.layers.l2_regularizer(weight_decay)
 
    with slim.arg_scope([tf.contrib.layers.fully_connected],
                        weights_initializer=weights_init,
                        weights_regularizer=weights_reg,
                        activation_fn=activation_fn):
        net = encoder(inputs, hidden_units, dropout, is_training)
        n_features = inputs.shape[1].value
        decoder_units = hidden_units[:-1][::-1] + [n_features]
        net = decoder(net, decoder_units, dropout, is_training)
    return net

where slim.initializers.variance_scaling_initializer corresponds to the initialization of He et al., which is the current recommendation for networks with ReLU activations.
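For reference (my own summary, not from the original post), He initialization draws the weights of a layer with $n_{\text{in}}$ inputs from a zero-mean distribution whose variance is scaled by the fan-in,
$$
W \sim \mathcal{N}\!\left(0, \frac{2}{n_{\text{in}}}\right),
$$
which keeps the variance of ReLU activations roughly constant from layer to layer.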

This concludes the architecture of the autoencoder. Next, we need to implement the model_fn function passed to tf.estimator.Estimator as outlined above.

Autoencoder model_fn

First, we construct the network's architecture using the autoencoder function described above:

logits = autoencoder(inputs=features,
                     hidden_units=hidden_units,
                     activation_fn=activation_fn,
                     dropout=dropout,
                     weight_decay=weight_decay,
                     mode=mode)

Subsequent steps depend on the value of mode. In prediction mode, we merely have to return the reconstructed image; therefore we make sure all values are within the interval [0; 1] by applying the sigmoid function:

probs = tf.nn.sigmoid(logits)
predictions = {"prediction": probs}
if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions)

In training and evaluation mode, we need to compute the loss, which is cross-entropy in this example:

tf.losses.sigmoid_cross_entropy(labels, logits)
total_loss = tf.losses.get_total_loss(add_regularization_losses=is_training)

The second line is needed to add the $\ell_2$-losses used in weight decay.

Most importantly, training relies on choosing an optimizer; here we use Adam and an exponential learning rate decay. The latter dynamically updates the learning rate during training according to the formula
$$
\text{decayed learning rate} = \text{base learning rate} \cdot 0.96^{\lfloor i / 1000 \rfloor} ,
$$ where $i$ is the current iteration. It would probably work as well without learning rate decay, but I included it for the sake of completeness.

if mode == tf.estimator.ModeKeys.TRAIN:
    train_op = tf.contrib.layers.optimize_loss(
        loss=total_loss,
        optimizer="Adam",
        learning_rate=learning_rate,
        learning_rate_decay_fn=lambda lr, gs: tf.train.exponential_decay(lr, gs, 1000, 0.96, staircase=True),
        global_step=tf.train.get_global_step(),
        summaries=["learning_rate", "global_gradient_norm"])
 
    # Add histograms for trainable variables
    for var in tf.trainable_variables():
        tf.summary.histogram(var.op.name, var)

Note that we add a histogram of all trainable variables for TensorBoard in the last part.

Finally, we compute the root mean squared error when in evaluation mode:

if mode == tf.estimator.ModeKeys.EVAL:
    eval_metric_ops = {
        "rmse": tf.metrics.root_mean_squared_error(
            tf.cast(labels, tf.float64), tf.cast(probs, tf.float64))
    }

and return the specification of our autoencoder estimator:

return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=predictions,
    loss=total_loss,
    train_op=train_op,
    eval_metric_ops=eval_metric_ops)

 

Feeding data to an Estimator via the Dataset API

Once we constructed our estimator, e.g. via

estimator = AutoEncoder(hidden_units=[128, 64, 32],
                        dropout=None,
                        weight_decay=1e-5,
                        learning_rate=0.001)

we would like to train it by calling train, which expects a callable that returns two tensors: one representing the input data and one the groundtruth data. The easiest way would be to use tf.estimator.inputs.numpy_input_fn, but instead I want to introduce TensorFlow's Dataset API, which is more generic.
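For comparison, the simpler route mentioned above would look roughly like this (a sketch only; note that numpy_input_fn delivers the features as a dict, so the model_fn would then have to unpack features['image'] instead of using the tensor directly):

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'image': mnist.train.images},   # features as a dict
    y=mnist.train.images,              # autoencoder targets are the inputs themselves
    batch_size=256,
    num_epochs=30,
    shuffle=True)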

The Dataset API comprises two elements:

  1. tf.data.Dataset represents a dataset and any transformations applied to it.
  2. tf.data.Iterator is used to extract elements from a Dataset. In particular, Iterator.get_next() returns the next element of a Dataset and typically is what is fed to an estimator.

Here, I'm using what is called an initializable Iterator, inspired by this post. We define one placeholder for the input image and one for the groundtruth image and initialize the placeholders before training starts using a hook. First, let's create a Dataset from the placeholders:

placeholders = [
    tf.placeholder(data.dtype, data.shape, name='input_image'),
    tf.placeholder(data.dtype, data.shape, name='groundtruth_image')
]
dataset = tf.data.Dataset.from_tensor_slices(placeholders)

Next, we shuffle the dataset and allow retrieving data from it until the specified number of epochs has been reached:

dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.repeat(num_epochs)

When creating input for evaluation or prediction, we are going to skip these two steps.

Finally, we combine multiple elements into a batch and create an iterator from the dataset:

dataset = dataset.batch(batch_size)
 
iterator = dataset.make_initializable_iterator()
next_example, next_label = iterator.get_next()

To initialize the placeholders, we need to call tf.Session.run with feed_dict = {placeholders[0]: input_data, placeholders[1]: groundtruth_data}. Since the Estimator will create a Session for us, we need a way to call our initialization code after the session has been created and before training begins. The Estimator's train, evaluate and predict methods accept a list of SessionRunHook subclasses as the hooks argument, which we can use to inject our code in the right place. Therefore, we first create a generic hook that runs after the session has been created:

class IteratorInitializerHook(tf.train.SessionRunHook):
    """Hook to initialise data iterator after Session is created."""
 
    def __init__(self):
        self.iterator_initializer_func = None
 
    def after_create_session(self, session, coord):
        """Initialise the iterator after the session has been created."""
        assert callable(self.iterator_initializer_func)
        self.iterator_initializer_func(session)

To make things a little bit nicer, we create an InputFunction class which implements the __call__ method. Thus, it will behave like a function and we can pass it directly to tf.estimator.Estimator.train and related methods.

class InputFunction:
    def __init__(self, data, batch_size, num_epochs, mode):
        self.data = data
        self.batch_size = batch_size
        self.mode = mode
        self.num_epochs = num_epochs
        self.init_hook = IteratorInitializerHook()
 
    def __call__(self):
        # Define placeholders
        placeholders = [
            tf.placeholder(self.data.dtype, self.data.shape, name='input_image'),
            tf.placeholder(self.data.dtype, self.data.shape, name='reconstruct_image')
        ]
 
        # Build dataset pipeline
        dataset = tf.data.Dataset.from_tensor_slices(placeholders)
        if self.mode == tf.estimator.ModeKeys.TRAIN:
            dataset = dataset.shuffle(buffer_size=10000)
            dataset = dataset.repeat(self.num_epochs)
        dataset = dataset.batch(self.batch_size)
 
        # create iterator from dataset
        iterator = dataset.make_initializable_iterator()
        next_example, next_label = iterator.get_next()
 
        # create initialization hook
        def _init(sess):
            feed_dict = dict(zip(placeholders, [self.data, self.data]))
            sess.run(iterator.initializer,
                     feed_dict=feed_dict)
 
        self.init_hook.iterator_initializer_func = _init
 
        return next_example, next_label

Finally, we can use the InputFunction class to train our autoencoder for 30 epochs:

from tensorflow.examples.tutorials.mnist import input_data as mnist_data
 
mnist = mnist_data.read_data_sets('mnist_data', one_hot=False)
train_input_fn = InputFunction(
    data=mnist.train.images,
    batch_size=256,
    num_epochs=30,
    mode=tf.estimator.ModeKeys.TRAIN)
estimator.train(train_input_fn, hooks=[train_input_fn.init_hook])

The video below shows ten reconstructed images from the test data and their corresponding groundtruth after each epoch of training:

Denoising Autoencoder

A denoising autoencoder is a slight variation on the autoencoder described above. The only difference is that the input images are randomly corrupted before they are fed to the autoencoder (we still use the original, uncorrupted image to compute the loss). This acts as a form of regularization to avoid overfitting.

noise_factor = 0.5  # a float in [0; 1)
 
def add_noise(input_img, groundtruth_img):
    noise = noise_factor * tf.random_normal(input_img.shape.as_list())
    input_corrupted = tf.clip_by_value(tf.add(input_img, noise), 0., 1.)
    return input_corrupted, groundtruth_img

The function above takes two Tensors representing the input and groundtruth image, respectively, and corrupts the input image by the specified amount of noise. We can use this function to transform all of the images using Dataset's map function:

dataset = dataset.map(add_noise, num_parallel_calls=4)
dataset = dataset.prefetch(512)

The function passed to map will be part of the compute graph, so you have to use TensorFlow operations to modify your input, or use tf.py_func. The num_parallel_calls argument speeds up preprocessing significantly, because multiple images are transformed in parallel. The second line ensures that a certain number of corrupted images is precomputed; otherwise the transformation would only be applied when executing iterator.get_next(), which would result in a delay for each batch and bad GPU utilization. The video below shows the groundtruth, input and output of the denoising autoencoder for up to 60 epochs:

I hope this tutorial gave you some insight on how to implement a custom TensorFlow estimator and use the Dataset API.


CEF on Wayland

TL;DR: we have patches for CEF to enable its usage on Wayland and X11 through the Mus/Ozone infrastructure that is to become Chromium’s streamlined future. And also for Content Shell!

At Collabora we recently assisted a customer who wanted to upgrade their system from X11 to Wayland. The problem: they use CEF as a runtime for web applications and CEF was not Wayland-ready. They also wanted to have something which was as future-proof and as upstreamable as possible, so the Chromium team’s plans were quite relevant.

Chromium is at the same time very modular and quite monolithic. It supports several platforms and has slightly different code paths in each, while at the same time acting as a desktop shell for Chromium OS. To make it even more complex, the Chromium team is constantly rewriting bits or doing major refactorings.

That means you’ll often find several different and incompatible ways of doing something in the code base. You will usually not find clear and stable interfaces, which is where tools like CEF come in, to provide some stability to users of the framework. CEF neutralizes some of the instability, providing a more stable API.

So we started by looking at 1) where Chromium is headed and 2) what kind of integration CEF needs with Chromium's guts to work with Wayland. We quickly found that the Chromium team is trying to streamline some of the infrastructure so that it can be better shared among the several use cases, reducing duplication and complexity.

That’s where the mus+ash (pronounced “mustache”) project comes in. It wants to make a better split of the window management and shell functionalities of Chrome OS from the browser while at the same time replacing obsolete IPC systems with Mojo. That should allow a lot more code sharing with the “Linux Desktop” version. It also meant that we needed to get CEF to talk Mus.

Chromium already has Wayland support that was built by Intel a while ago for the Ozone display platform abstraction layer. More recently, the ozone-wayland-dev branch was started by our friends at Igalia to integrate that work with mus+ash, implementing the necessary Mus and Mojo interfaces, window decorations, menus and so on. That looked like the right base to use for our CEF changes.

It took quite a bit of effort and several Collaborans participated in the effort, but we eventually managed to convince CEF to properly start the necessary processes and set them up for running with Mus and Ozone. Then we moved on to make the use cases our customer cared about stable and to port their internal runtime code.

We contributed touch support for the Wayland Ozone backend, which we are in the process of upstreaming, reported a few bugs on the Mus/Ozone integration, and did some debugging for others, which we still need to figure out better fixes for.

For instance, the way Wayland fd polling works does not integrate nicely with the Chromium run loop, since there needs to be some locking involved. If you don’t lock/unlock the display for polling, you may end up in a situation in which you’re told there is something to read and before you actually do the read the GL stack may do it in another thread, causing your blocking read to hang forever (or until there is something to read, like a mouse move). As a work-around, we avoided the Chromium run loop entirely for Wayland polling.

More recently, we have started working on an internal project for adding Mus/Ozone support to Content Shell, which is a test shell that is simpler than the Chromium browser. We think it will be useful as a test bed for future work that uses Mus/Ozone and the content API but not the browser UI, since it lives inside the Chromium code base. We are looking forward to upstreaming it soon!

PS: if you want to build it and try it out, here are some instructions:

# Check out Google build tools and put them on the path
$ git clone https://chromium.googlesource.com/a/chromium/tools/depot_tools.git
$ export PATH=$PATH:`pwd`/depot_tools

# Check out chromium; note the 'src' after the git command, it is important
$ mkdir chromium; cd chromium
$ git clone -b cef-wayland https://gitlab.collabora.com/web/chromium.git src
$ gclient sync  --jobs 16 --with_branch_heads

# To use CEF, download it and look at or use the script we put in the repository
$ cd src # cef goes inside the chromium source tree
$ git clone -b cef-wayland https://gitlab.collabora.com/web/cef.git
$ sh ./cef/build.sh # NOTE: you may need to edit this script to adapt to your directory structure
$ out/Release_GN_x64/cefsimple --mus --use-views

# To build Content Shell you do not need to download CEF, just switch to the branch and build
$ cd src
$ git checkout -b content_shell_mus_support origin/content_shell_mus_support
$ gn args out/Default --args="use_ozone=true enable_mus=true use_xkbcommon=true"
$ ninja -C out/Default content_shell
$ ./out/Default/content_shell --mus --ozone-platform=wayland

December 21, 2017

Christmas Maps

So, we're approaching the end of the year and the holidays, so I thought I should share some updates on the goings-on in Maps.

One issue we've had on our table is the way we do attribution. Currently in 3.26 and earlier we have shown the common OSM attribution and a provider logo on the map view. Now we also show attribution to OSM and the tile provider in the About dialog:


The tile provider name and link is included in the service file that is downloaded on start-up, so this can be changed later on without pushing new versions (if needed).

Another nice feature that we've had in mind for a while but didn't make it into 3.26 because we hadn't settled on the exact graphical layout is showing thumbnail images for places in the place info bubbles:


So, now we show a thumbnail picture for a location if it has a Wikipedia article linked to it in the OSM data, and if the Wikipedia API gives us a thumbnail corresponding to the article. This is yet another area where you as a user can add value by adding Wikipedia links, and also by uploading article title images to Wikipedia.

Another thing that has come up lately is an issue with how we overlay some things (like the zoom control buttons) using a GtkOverlay, which unfortunately doesn't play well with the LibChamplain-based Clutter view used to display the map background when running under a Wayland compositor.
So, as a workaround, we have moved the zoom buttons to the headerbar (similar to the way it works in the EOG image viewer):

I think this is actually pretty nice, and I think I prefer this over the overlayed zoom buttons. So, I think we should probably keep this even if the GtkOverlay problem is solved (or we move away for using Clutter).

Lastly, I've been wanting to fill in some missing spots in our set of icons used for various modes of transit in public transit routing, so I've been pestering Andreas about drawing some additional icons now and then for a while. As a piece of work-in-progress I can show you this nice rendition of a steam locomotive intended to represent tourist/heritage railways:



This example shows routes for the Mornington heritage railway outside of Melbourne in Australia, as the data I usually use for showcasing from the Swedish Samtrafiken organization currently unfortunately doesn't have this classification (I've bugged them about it…).

So, that's it for tonight. And happy holidays everybody! 🎅🎆

Nautilus desktop plans

Hello all,

We have been discussing our plans with the desktop part of Nautilus for quite a bit of time and in this blog post I’ll explain them.

Context

Nautilus has had a feature called "the desktop" which adds icons to the background of the user's workspace, similar to Windows.

The desktop was disabled in the default experience when GNOME 3 came out, now 6 years ago, and has been mostly unmaintained since. Two years ago I spent around 3 months of work trying to save it somehow, doing a rearchitectural effort to separate the desktop from the Nautilus app so it wouldn't affect Nautilus development. While it achieved some degree of separation, it didn't achieve its main purpose and unfortunately brought even more problems than we had before. Now it has got to a point where the desktop is blocking us deeply on basically every major front we have set for future releases.

We also notice that users rightfully expect the desktop to work decently, and we acknowledge that this is far from the reality; we are aware that the desktop is in a very poor state.

If you are interested in more technical details of the issues with the desktop implementation you can read them here.

With these points at hand, which have accumulated over the years, we are at the point where we need to remove the desktop code inside Nautilus for Nautilus 3.28 in order to move forward with Nautilus development. You can check the branch with the removal work here; if you look, you will see that the desktop is composed of more than 10,000 lines of code, given the complexity of creating it with the technologies that were available in 1999 (yes, the code is that old :)).

Solutions

With the desktop code removed from Nautilus, we have considered these three options, keeping in mind that the ability to have icons on the desktop can be provided by projects other than Nautilus:

  1. Fork the Nautilus desktop, with one project being the desktop and the other being the Nautilus app. This however doesn't fix anything at all; the issues will still be there and I'll be very sad for anyone that has to maintain it.
  2. Use the Nemo desktop. This is, as of today, a more featureful desktop than the one in Nautilus, so it's probably the best short-term solution. It however has similar problems to the ones the Nautilus desktop faced. You can read how to install it here.
  3. Make the desktop icons a Shell extension.

Proposal

The best option of those three to move forward is to integrate it into a GNOME Shell extension. Doing what the Nautilus desktop was doing in an app (Nautilus) while trying to be part of the compositor (GNOME Shell) was a big mistake, and it's one of the major issues. If you understand some technical terms, it's worth checking the issues we were facing to understand how we agreed on this proposal as the best option. This is even more true nowadays with various technologies moving towards isolation, privacy and security (Wayland, sandboxing, etc.).

A nice thing about doing a GNOME Shell extension is that it opens the path to a more dynamic desktop workflow, and the good news is that the prototype we have in place has already fixed one of the oldest bugs we had in the Nautilus desktop: proper multi-monitor support!

You can check the extension prototype here (very early prototype). While this extension is the way forward, the time I can spend on it is limited since, upstream-wise, the desktop pattern hasn't made much sense for a long time now. So I encourage anyone who knows JavaScript, or wants to learn JavaScript, and likes the desktop workflow to make it happen; I'll be there to help any contributor, writing code alongside them and expanding on the new ideas for the desktop workflow that those who like it have in mind. You can check a few points that would be nice to have for a first release here. Ping me on IRC if you are interested in it.

Who will experience a change

For those using distributions with desktop-icon workflows (e.g. Ubuntu), I don't expect anyone to be affected; since those distributions/projects diverged from upstream design decisions based on their own design decisions, I expect they will hold onto those patterns and provide a desktop workflow in one form or another.

If your distribution didn’t ship a desktop by default and you were using Nautilus desktop, you can check out the options proposed before to continue using a similar workflow.

Fruits of the removal

The question is, when are the fruits of the removal of the desktop going to be noticeable?

Thanks to the removal of the desktop code we are now free to move forward with the complete rework of the Nautilus backend for 3.28, probably the biggest rework ever done in Nautilus. This will allow heavy searches to run without the UI getting stuck, and to be resource balanced so you can run multiple searches at the same time; it will also provide proper thread handling, the ability to pause operations, the ability to gather performance stats (and finally to do performance improvements), easy tweaking of how operations are resource balanced, and more.

Another major piece of work that the removal of the desktop code lets us move forward with is the new views that I talked about in a previous blog post. We won't make it for Nautilus 3.28 since it requires gtk+ work too, but now we can continue improving it on the Nautilus side and hopefully for 3.30 we can finally move to the next generation of content views.

And finally, this also allows Nautilus to be ported to gtk4. This is really good news on various fronts for Nautilus; hopefully gtk4 stabilizes a little bit more and we can port Nautilus to it for 3.30.

 

I want to thank Ernestas and Antonio for working on these big changes and for their overall help with Nautilus!


Outreachy Week 2: Getting Connection Details for Network Processes

This blog post summarizes my progress up to the second week of Outreachy. Over these two weeks I've mainly worked on fetching the following details, which will eventually help to associate packets with their corresponding processes.

I’ve made extensive use of /proc file system to fetch these details.

Fetching a List of Sockets on different interfaces

The virtual files present in /proc/net/ have details about our system's network configuration, and /proc/net/tcp and /proc/net/udp in particular have details about the sockets which have been created to transfer data with the respective transport protocol. Appropriate data structures have been used to store the socket entries of these files after parsing the necessary details:

/*
 * Header of /proc/net/tcp:
 * sl local_address rem_address st tx_queue rx_queue
 * tr tm->when retrnsmt uid timeout inode
 */

/* Parse one entry: local/remote address and port, plus the socket's inode. */
int matches = sscanf(buf,
                     "%*X: %64[0-9A-Fa-f]:%X %64[0-9A-Fa-f]:%X %*X "
                     "%*X:%*X %*X:%*X %*X %*d %*d %ld %*512s\n",
                     temp_local_addr,
                     &(temp_socket->local_port),
                     temp_rem_addr,
                     &(temp_socket->rem_port),
                     &(temp_socket->inode));

Mapping Socket Inode to its PID

Although /proc/net/tcp and /proc/net/udp do have the socket's inode, they don't have the PID information required to map the packets to processes. Therefore a traversal of the /proc directory is done to map each inode to the process it belongs to.
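For illustration only (in Python rather than the project's C, and with no claim of matching the actual implementation), the traversal boils down to reading the /proc/<pid>/fd symlinks and looking for a socket:[inode] target:

import os

def inode_to_pid(inode):
    # Walk /proc/<pid>/fd and find which process owns socket:[inode].
    target = 'socket:[%d]' % inode
    for pid in filter(str.isdigit, os.listdir('/proc')):
        fd_dir = '/proc/%s/fd' % pid
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(os.path.join(fd_dir, fd)) == target:
                    return int(pid)
        except OSError:
            # Process exited or we lack permission; skip it.
            continue
    return None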

Sample Output from the Test

For those interested in having a look at the details fetched through these steps, here is a sample output from a test file:

/*TCP SOCKET DETAILS*/
proc_name:jekyll inode: 535408 pid: 5536 local_addr: 127.0.0.1 rem_addr: 0.0.0.0 local_port: 4000 rem_port: 0 sa_family: 2

proc_name:mysqld inode: 24445 pid: 1142 local_addr: 127.0.0.1 rem_addr: 0.0.0.0 local_port: 3306 rem_port: 0 sa_family: 2

proc_name:systemd-resolve inode: 24220 pid: 1135 local_addr: 0.0.0.0 rem_addr: 0.0.0.0 local_port: 5355 rem_port: 0 sa_family: 2

proc_name:master inode: 24471 pid: 1567 local_addr: 0.0.0.0 rem_addr: 0.0.0.0 local_port: 25 rem_port: 0 sa_family: 2

proc_name:firefox inode: 693964 pid: 2515 local_addr: 192.168.43.126 rem_addr: 216.58.199.162 local_port: 45994 rem_port: 443 sa_family: 2

The next thing in line, which I'm working on now, is packet capture on the different interfaces.
Feel free to check out my work.
Stay tuned!

December 20, 2017

My Impossible Story

Keeping up my bi-yearly blogging cadence, I thought it might be fun to write about what I’ve been doing since I left Mozilla. It’s also a convenient time, as it coincides with our work being open-sourced and made public (and of course, developed in public, because otherwise what’s the point, right?) Somewhat ironically, I’ve been working on another machine-learning project, though I’m loath to call it that, as it uses no neural networks so far, and most people I’ve encountered consider those to be synonymous. I did also go on a month’s holiday to the home of bluegrass music, but that’s a story for another post. I’m getting ahead of myself here.

Some time in March I met up with some old colleagues/friends and of course we all got to chatting about what we’re working on at the moment. As it happened, Rob had just started working at a company run by a friend of our shared former boss, Matthew Allum. What he was working on sounded like it would be a lot of fun, and I had to admit that I was a little jealous of the opportunity… But it so happened that they were looking to hire, and I was starting to get itchy feet, so I got to talk to Kwame Ferreira and one thing lead to another.

I started working for Impossible Labs in July, on an R&D project called ‘glimpse’. The remit for this work hasn’t always been entirely clear, but the pitch was that we’d be working on augmented reality technology to aid social interaction. There was also this video:

How could I resist?

What this has meant in real terms is that we’ve been researching and implementing a skeletal tracking system (think motion capture without any special markers/suits/equipment). We’ve studied Microsoft’s freely-available research on the skeletal tracking system for the Kinect, and filling in some of the gaps, implemented something that is probably very similar. We’ve not had much time yet, but it does work and you can download it and try it out now if you’re an adventurous Linux user. You’ll have to wait a bit longer if you’re less adventurous or you want to see it running on a phone.

I’ve worked mainly on implementing the tools and code to train and use the model we use to interpret body images and infer joint positions. My prior experience on the DeepSpeech team at Mozilla was invaluable to this. It gave me the prerequisite knowledge and vocabulary to be able to understand the various papers around the topic, and to realistically implement them. Funnily, I initially tried using TensorFlow for training, with the idea that it’d help us to easily train on GPUs. It turns out re-implementing it in native C was literally 1000x faster and allowed us to realistically complete training on a single (powerful) machine, in just a couple of days.

My take-away for this is that TensorFlow isn’t necessarily the tool for all machine-learning tasks, and also to make sure you analyse the graphs that it produces thoroughly and make sure you don’t have any obvious bottlenecks. A lot of TensorFlow nodes do not have GPU implementations, for example, and it’s very easy to absolutely kill performance by requiring frequent data transfers to happen between CPU and GPU. It’s also worth noting that a large graph has a huge amount of overhead that will be unrelated to the actual operations you’re trying to run. I’m no TensorFlow expert, but it’s definitely a particular tool for a particular job and it’s worth being careful. Experts can feel free to look at our repository history and tell me all the stupid mistakes I was making before we rewrote it 🙂

So what’s it like working at Impossible on a day-to-day basis? I think a picture says a thousand words, so here’s a picture of our studio:

Though I’ve taken this from the Impossible website, this is seriously what it looks like. There is actually a piano there, and it’s in tune and everything. There are guitars. We have a cat. There’s a tree. A kitchen. The roof is glass. As amazing as Mozilla (and many of the larger tech companies) offices are, this is really something else. I can’t overstate how refreshing an environment this is to be in, and how that impacts both your state of mind and your work. Corporations take note, I’ll take sunlight and life over snacks and a ball-pit any day of the week.

I miss my 3-day work-week sometimes. I do have less time for music than I had, and it’s a little harder to fit everything in. But what I’ve gained in exchange is a passion for my work again. This is code I’m pretty proud of, and that I think is interesting. I’m excited to see where it goes, and to get it into people’s hands. I’m hoping that other people will see what I see in it, if not now, sometime in the near future. Wish us luck!

December 19, 2017

Why hasn’t The Year of the Linux Desktop happened yet?

Having spent 20 years of my life on Desktop Linux, I thought I should write up my thinking about why we haven't had the Linux on the Desktop breakthrough so far, and maybe more importantly talk about the avenues I see for that breakthrough still happening. A lot has been written about this over the years, with different people coming up with their own explanations. My thesis is that there really isn't one reason, but rather a range of issues that have all contributed to holding the Linux Desktop back from reaching a bigger market. Also, to put this into context, success here in my mind would be having something like 10% market share of desktop systems; that to me means we reached critical mass. So let me start by listing some of the main reasons I see for why we are not at that 10% mark today, before going on to talk about how I think that goal might still be reached going forward.

Things that have held us back

  • Fragmented market
  • One of the most common explanations for why the Linux Desktop never caught on more is the fragmented state of the Linux Desktop space. We got a large host of desktop projects like GNOME, KDE, Enlightenment, Cinnamon etc. and an even larger host of distributions shipping these desktops. I used to think this state should get a lot of the blame, and I still believe it owns some of the blame, but in recent years I have come to conclude that it is probably more of a symptom than a cause. If someone had come up with a model strong enough to let Desktop Linux break out of its current technical-user niche, I am now convinced that model would easily have also been strong enough to leave the Linux desktop fragmentation behind for all practical purposes, because at that point the alternative desktops for Linux would matter about as much as the alternative MS Windows shells do. So in summary, the fragmentation hasn't helped for sure and is still not helpful, but it is probably a problem that has been overstated.

  • Lack of special applications
  • Another common item that has been pointed to is the lack of applications. For sure, in the early days of Desktop Linux the challenge you always had when trying to convince anyone to move to Desktop Linux was that they almost invariably had one or more applications they relied on that were only available on Windows. I remember in one of my first jobs after university, when I worked as a sysadmin, we had a long list of these applications that various parts of the organization relied on, be that special tools to interface with a supplier or the bank, or for dealing with nutritional values of food in the company cafeteria. This problem has been in rapid decline for the last 5-10 years due to the move to web applications, but I am sure that in any given major organization you can still probably find a few of them. Between the move to the web and Wine, though, I don't think this is a major issue anymore. So in summary, this was a major roadblock in the early years, but it is a lot less of an impediment these days.

  • Lack of big name applications
  • Adopting a new platform is always easier if you can bring along the applications you are familiar with. So the lack of things like MS Office and Adobe Photoshop always made a switch less likely, because in addition to switching OS you would also have to learn to use new tools. Along those lines there was always the challenge of file format compatibility: in the early days in a hard sense, in that you simply couldn't reliably load documents coming from some of these applications, and more recently in softer ways, like the lack of metrically identical fonts. The font issue, for example, has been mostly resolved since Google released fonts metrically compatible with the MS default fonts a few years ago, but it was definitely a hindrance to adoption for many years. The move to the web for a lot of these things has greatly reduced this problem too, with organizations adopting things like Google Docs at a rapid pace these days. So in summary, once again something that used to be a big problem, but which is a lot less of a problem these days; of course there are still apps not available for Linux that do stop people from adopting desktop Linux.

  • Lack of API and ABI stability
  • This is another item that many people have brought up over the years. I have personally vacillated over its importance multiple times. Changing APIs are definitely not fun for developers to deal with; they add extra work, often without bringing direct benefit to their application. The Linux packaging philosophy probably magnified this problem for developers: anything that could be split out and packaged separately was, meaning that every application was always living on top of a lot of moving parts. That said, the reason I am sceptical about putting too much blame on this is that you could always find stable subsets to rely on. For instance, if you targeted GTK2 or Qt back in the day and kept away from some of the faster-moving stuff offered by GNOME and KDE, you would not be hit by this that often. And of course, if the Linux Desktop market share had been higher, people would have been prepared to deal with these challenges regardless, just like they are on other platforms that keep changing and evolving quickly, like the mobile operating systems.

  • Apple resurgence
  • This might of course be the result of subjective memory, but one of the times when it felt like there could have been a Linux desktop breakthrough was when Linux on the server started making serious inroads. The old Unix workstation market was coming apart and already moving to Linux, the worry about a Microsoft monopoly was at its peak, and Apple was in what seemed like mortal decline. There was a lot of media buzz around the Linux desktop and VC-funded companies were set up to try to build a business around it. Reaching some kind of critical mass seemed like it could be within striking distance. Of course, what happened was that Steve Jobs returned to Apple and we suddenly had MacOSX come onto the scene, taking at least some air out of the Linux Desktop space. The importance of this one I find exceptionally hard to quantify though; part of me feels it had a lot of impact, but on the other hand it isn't 100% clear to me that the market and the players at the time would have been able to capitalize even if Apple had gone belly-up.

  • Microsoft aggressive response
  • In the first 10 years of Desktop Linux there was no doubt that Microsoft was working hard to nip in the bud any sign of Desktop Linux gaining a foothold or momentum. I remember, for instance, that Novell for quite some time was trying to establish a serious Desktop Linux business after having bought Miguel de Icaza's company Helix Code. However, a pattern quickly emerged: every time Novell or anyone else tried to announce a major Linux desktop deal, Microsoft came running in offering next-to-free Windows licensing to get people to stay put. Looking at Linux migrations even seemed to become a go-to policy for negotiating better prices from Microsoft. So anyone wanting to attack the desktop market with Linux had to contend not only with market inertia, but with a general depression of the price of a desktop operating system, knowing that Microsoft would respond to any attempt to build momentum around Linux desktop deals with very aggressive sales efforts. So in summary, this probably played an important part, as it meant that the pay-per-copy/subscription business model that, for instance, Red Hat built their server business around became really tough to make work in the desktop space: the price point ended up so low that it required gigantic volumes to become profitable, which of course is a hard thing to achieve quickly when fighting against an entrenched market leader. In the end Microsoft in some sense successfully fended off Linux breaking through as a desktop competitor, although it could be said they did so at the cost of fatally wounding the per-copy-fee business model they built their company around, and ensured that the next wave of competitors Microsoft had to deal with, like iOS and Android, based themselves on business models where the cost of the OS was assumed to be zero, thus contributing to the Windows Phone efforts being doomed.

  • Piracy
  • One of the big aspirations of the Linux community from the early days was the idea that an open source operating system would enable more people to afford running a computer and thus take part in the economic opportunities of the digital era. For the desktop there was always this idea that while Microsoft was entrenched in North America and Europe, there was an ocean of people in the rest of the world who had never used a computer before and who would thus be more open to adopting a desktop Linux system. I think this has so far panned out only to a limited degree: running a Linux distribution has surely opened job and financial opportunities for a lot of people, yet from a volume perspective most of these potential Linux users found that a pirated Windows copy suited their needs just as well or better. As an anecdote, there was recently a bit of noise and writing about the sudden influx of people on Steam playing PlayerUnknown's Battlegrounds, as it caused the relative Linux market share to decline; most of these players turned out to be running Windows in Mandarin. Studies have found that about 70% of all software in China is unlicensed, so I don't think I am going too far out on a limb in assuming that most of these gamers are not providing Microsoft with Windows licensing revenue, but it does illustrate the challenge of getting these people onto Linux when they are already getting an operating system for free. So in summary, in addition to facing cut-throat pricing from Microsoft in the business sector, one had to overcome the basically free price of pirated software in the consumer sector.

  • Red Hat mostly stayed away
  • Few people probably remember or know this, but Red Hat was actually founded as a desktop Linux company. The first major investment in software development that Red Hat ever made was setting up the Red Hat Advanced Development Labs, hiring a group of core GNOME developers to move that effort forward. But when Red Hat pivoted to the server with the introduction of Red Hat Enterprise Linux, the desktop quickly started playing second fiddle. And before I proceed: all of these events were many years before I joined the company, so just as with my other points here, read this as an analysis from someone without first-hand knowledge. While Red Hat has always offered a desktop product and has always been a major contributor to keeping the Linux desktop ecosystem viable, the company was focused on server-side solutions, and the desktop offering was always aimed more narrowly at things like technical workstation customers and people developing for the RHEL server. It is hard to say how big an impact Red Hat's decision not to go after this market has had; on one hand it would probably have been beneficial to have the Linux company with the deepest pockets and the strongest brand be a more active participant, but on the other hand staying mostly out of the fight gave other companies more room to give it a go.

  • Canonical business model not working out
  • This bullet point is probably going to be somewhat controversial considering I work for Red Hat (although this is my private blog with my own personal opinions), but on the other hand I feel one cannot talk about the trajectory of the Linux desktop over the last decade without mentioning Canonical and Ubuntu. I have to assume that when Mark Shuttleworth was mulling over doing Ubuntu he saw a lot of the challenges I mention above, especially the revenue generation challenges posed by the competition from Microsoft. So in the end he decided on the standard internet business model of the time: quickly build up a huge userbase and figure out how to monetize it later. Ubuntu was launched with an effective price point of zero; in fact you could even get install media sent to you for free. The effort worked in the sense that Ubuntu quickly became the biggest player in the Linux desktop space, and it certainly helped the Linux desktop market share grow in the early years. Unfortunately I think it still basically failed, because it never managed to grow big enough to provide Canonical with enough revenue, through their app store or their partner agreements, to seriously re-invest in the Linux desktop and fund the kind of marketing effort needed to take Linux to a less super-technical audience. Once growth plateaued, what they had was enough revenue to keep a relatively barebones engineering effort going, but not the kind of income that would allow them to steadily build the Linux desktop market further. Mark then tried to capitalize on the mindshare and market share he had managed to build by branching out into efforts like their TV and phone projects, but all of those eventually failed.
    It would probably take an article of its own to discuss in depth why the grow-the-userbase strategy failed here while, for instance, Android succeeded with the same model, but I think the short version goes back to the fact that there was an entrenched market leader, and the Linux desktop isn't different enough from a Mac or Windows desktop to drive the kind of market change that the transition from feature phones to smartphones was.
    And to be clear, I am not criticizing Mark for the strategy he chose; if I were in his shoes back when he started Ubuntu, I am not sure I would have been able to come up with a different strategy with a plausible chance of succeeding from his starting point. That said, it did contribute to pushing the expected price of desktop Linux even further down, making it even harder for anyone to generate significant revenue from it. On the other hand, one can argue that this would likely have happened anyway due to competitive pressure and Windows piracy. Canonical's recent pivot away from the desktop towards building a business in the server and IoT space is in some sense a natural consequence of hitting the desktop growth plateau without enough revenue to invest in further growth.
    So in summary, what was once seen as the most likely contender to take the Linux desktop to critical mass turned out to have taken off with too little rocket fuel, and eventually gravity caught up with them. What we can never know for sure is whether, during this run, they sucked so much air out of the market that they kept someone who could have taken us further with a different business model from jumping in.

  • Original device manufacturer support
  • This one is a bit of a chicken-and-egg issue. Yes, lack of (perfect) hardware support has for sure held Linux back on the desktop, but lack of market share has also held hardware support back. As with any system, it is a question of reaching critical mass despite your challenges and eventually becoming so big that nobody can afford to ignore you. This is an area where we are still not fully there even today, but where I feel we are getting closer all the time. When I installed Linux for the very first time, which I think was Red Hat Linux 3.1 (pre-RHEL days), I spent about a weekend fiddling just to get my sound card working; I think I had to grab an experimental driver from somewhere and compile it myself. These days I mostly expect everything to work out of the box, except for more unusual hardware like ambient light sensors or fingerprint readers, and even support for those is starting to land; thanks to efforts from vendors such as Dell, things are looking pretty good here. But the memory of these issues is long, so a lot of people, especially those who have only heard about Linux rather than used it themselves, still assume hardware support is very much hit or miss.

What does the future hold?

So anyone who has read my blog posts probably knows I am an optimist by nature. This isn't just some kind of genetic predisposition towards optimism, but also a philosophical belief that optimism breeds opportunity while pessimism breeds failure. Just because we haven't gotten the Linux desktop to 10% market share so far doesn't mean it will not happen going forward; it just means we haven't achieved it yet. One of the defining traits of open source is that it is incredibly hard to kill: unlike proprietary software, just because a company goes out of business or shuts down part of its business, the software doesn't go away or stop getting developed. As long as there is a strong community interested in pushing it forward, it remains and evolves, and thus when opportunity comes knocking again it is ready to try again. And that is definitely true of desktop Linux, which from a technical perspective is better than it has ever been: the level of polish is higher than ever before, hardware support is better than ever before, and the range of software available is better than ever before.

And the important thing to remember is that we don't exist in a vacuum; the world around us constantly changes too, which means that the things or companies that blocked us in the past might not be around, or able to block us, tomorrow. Apple and Microsoft are very different companies today than they were 10 or 20 years ago, and their focus and who they compete with are very different. The dynamics of the desktop software market are changing with new technologies and paradigms all the time, such as how online media consumption has moved from laptops to phones and tablets. Five years ago I would have considered iTunes a big competitive problem; today the move to streaming services like Spotify, Hulu, Amazon or Netflix has made iTunes feel archaic, a symbol of bygone times.

And many of the problems we faced before, like odd Windows applications without a Linux counterpart, have been washed away by the switch to browser-based applications. And while Valve's SteamOS effort didn't take off, it has provided Linux users with access to a huge catalog of games, removing a reason that I know caused a few of my friends to mostly abandon using Linux on their computers. And as a consumer you can actually buy Linux from a range of vendors who try to properly support it on their hardware, including a major player like Dell and smaller outfits like System76 and Purism.

And since I do work for Red Hat managing our desktop engineering team, I should address the question of whether Red Hat will be a major driver in taking desktop Linux to that 10%. Red Hat will continue to support and evolve our current RHEL Workstation product, and we are seeing steady growth of new customers for it. So if you are looking for a solid developer workstation for your company you should absolutely talk to Red Hat sales about RHEL Workstation, but Red Hat is not looking at aggressively targeting general consumer computers anytime soon. Caveat: I am not a C-level executive at Red Hat, so I guess there is always a chance Jim Whitehurst or someone else in the top brass is mulling over a gigantic new desktop effort that I simply don't know about, but I don't think it is likely, and I would not advise anyone to hold their breath waiting for such an announcement :). That said, Red Hat, like any company, reacts to market opportunities as they arise, so who knows what will happen down the road. And we will definitely keep pushing Fedora Workstation forward as the place to experience the leading edge of the desktop Linux experience and a great portal into the world of Linux on servers and in the cloud.

So to summarize: there are a lot of things happening in the market that could provide the right set of people the opportunity they need to finally take Linux to critical mass. Whether anyone has the timing and skills to pull it off is of course always an open question, and it is a question that will only be answered the day someone does it. The only thing I am sure of is that the Linux community is providing a stronger technical foundation for someone to succeed with than ever before, so the question is just whether someone can come up with the business model and the marketing skills to take it to the next level. There is also the chance that it will come in a shape we don't appreciate today: maybe ChromeOS evolves into a more full-fledged operating system as it grows in popularity and ends up being the Linux-on-the-desktop endgame? Or maybe Valve relaunches their SteamOS effort and it provides the foundation for major general desktop growth? Or maybe market opportunities arise that cause us at Red Hat to go after the desktop market in a wider sense than we do today? Or maybe Endless succeeds with their vision for a Linux desktop operating system? Or maybe the idea of a desktop operating system gets supplanted to the degree that we end up just saying 'Alexa, please open the IDE and take dictation of this new graphics driver I am writing' (ok, probably not that last one ;)

And to be fair, there are a lot of people saying that Linux already made it on the desktop in the form of things like Android tablets. That is technically correct, as Android does run on the Linux kernel, but I think for many of us it feels more like a distant cousin than a close family member, both in the use cases it targets and in its technological pedigree.

As a sidenote, I am heading off on Yuletide vacation tomorrow evening, taking my wife and kids to Norway to spend time with our family there. So don't expect a lot of new blog posts from me until I am back from DevConf in early February. I hope to see many of you at DevConf though; it is a great conference, and Brno is a great town even in freezing winter. As we say in Norway, there is no such thing as bad weather, only bad clothing.

Builder 3.27 Progress (Again)

As usual, I've been busy since our last update. Here are a few feature highlights, in addition to all those bug fixes.

Recursive Directory Monitors

Builder now creates a recursive directory monitor so that your project’s source tree can be updated in case of external modification, such as from a terminal. If you need a recursive directory monitor, the implementation can be found in libdazzle.
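If you want to try the monitor outside of Builder, here is a minimal sketch in Python via GObject Introspection. It assumes libdazzle's typelib exposes DzlRecursiveFileMonitor as Dazzle.RecursiveFileMonitor with a start_async() method and a "changed" signal; those names are inferred from the C API, so treat them as assumptions rather than a documented recipe.

#!/usr/bin/env python3
# Hypothetical sketch: watch a directory tree recursively with libdazzle.
import gi
gi.require_version('Dazzle', '1.0')
from gi.repository import Dazzle, Gio, GLib

def on_changed(monitor, file, other_file, event):
    # Fires for changes anywhere below the monitored root.
    print(event.value_nick, file.get_path())

def on_started(monitor, result, *user_data):
    monitor.start_finish(result)

monitor = Dazzle.RecursiveFileMonitor.new(Gio.File.new_for_path('.'))
monitor.connect('changed', on_changed)
monitor.start_async(None, on_started)

GLib.MainLoop().run()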

Project Tree Drag-n-Drop

The project tree now supports basic drag-n-drop. You can drag within the tree as well as from external programs supporting text/uri-list into the project tree. Nautilus is one such example.

VCS Status in Project Tree

The project tree can now query the VCS backend (git) to show the status of files that have been added to or changed in your project.

vcs status for project tree

Editor Grid Drag-n-Drop

You can drag text/uri-list drag sources onto the editor grid and place them as you like. Drag to the top or bottom to create new above/below splits. Drag to the edges for left/right splits.

Build Pipeline Stages

Sometimes you might want to peek into the build pipeline to get a bit more insight. Expand the “Build Details” to see the pipeline stages. They’ll update as the build progresses.

displaying build pipeline entries

Updating Dependencies

We want to ensure we're doing less work when Builder starts up. That means that, before long, we won't auto-update dependencies at startup; instead, you'll choose to update your dependencies when it makes sense. We might as well make that easy, so here is a button to do just that. It currently supports Flatpak and Cargo.

update dependencies button

Hamburger menu has gone away

We are focusing more on "contextual" menus rather than stashing things in the window menu, so much so that we've been able to remove the "hamburger" menu by default. It will automatically be displayed should any enabled plugin use it.

hamburger menu is gone

Lots of bug fixes too, but those don’t have pretty pictures. So that’s it for now!

December 17, 2017

Epiphany Stable Flatpak Releases

The latest stable version of Epiphany is now available on Flathub. Download it here. You should be able to double-click the flatpakref to install it in GNOME Software, if you use any modern GNOME operating system not named Ubuntu. But in my experience GNOME Software is extremely buggy, and as often as not it does not work for me. If you have trouble, you can use the command line:

flatpak install --from https://flathub.org/repo/appstream/org.gnome.Epiphany.flatpakref

This has actually been available for quite a while now, but I’ve delayed announcing it because some things needed to be fixed to work well under Flatpak. It’s good now.

I’ve also added a download link to Epiphany’s webpage, so that you can actually, you know, download and install the software. That’s a useful thing to be able to do!

Benefits

The obvious benefit of Flatpak is that you get the latest stable version of Epiphany (currently 3.26.5) and WebKitGTK+ (currently 2.18.3), no matter which version is shipped in your operating system.

The other major benefit of Flatpak is that the browser is protected by Flatpak’s top-class bubblewrap sandbox. This is, of course, a UI process sandbox, which is different from the sandboxing model used in other browsers, where individual browser tabs are sandboxed from each other. In theory, the bubblewrap sandbox should be harder to escape than the sandboxes used in other major browsers, because the attack surface is much smaller: other browsers are vulnerable to attack whenever IPC messages are sent between the web process and the UI process. Such vulnerabilities are mitigated by a UI process sandbox. The disadvantage of this approach is that tabs are not sandboxed from each other, as they would be with a web process sandbox, so it’s easier for a compromised tab to do bad things to your other tabs. I’m not sure which approach is better, but clearly either way is much better than having no sandbox at all. (I still hope to have a web process sandbox working for use when WebKit is used outside of Flatpak, but that’s not close to being ready yet.)

Problems

Now, there are a couple of loose ends. We do not yet have desktop notifications working under Flatpak, and we also don’t block the screen from turning off when you’re watching fullscreen video, so you’ll have to wiggle your mouse every five minutes or so when you’re watching YouTube to keep the lights on. These should not be too hard to fix; I’ll try to get them both working soon. Also, drag and drop does not work. I’m not nearly brave enough to try fixing that, though, so you’ll just have to live without drag and drop if you use the Flatpak version.

Also, unfortunately the stable GNOME runtimes do not receive regular updates. So while you get the latest version of Epiphany, almost everything else will be older. This is not good. I try to make sure that WebKit gets updated, so you'll have all the latest security updates there, but everything else is generally stuck at older versions. For example, the 3.26 runtime uses, for the most part, whatever software versions were current at the time of the 3.26.1 release, and any updates newer than that are just not included. That's a shame, but the GNOME release team does not maintain GNOME's Flatpak runtimes: we already have three other redundant places to store the same build information (JHBuild, GNOME Continuous, BuildStream) that we need to take care of, and adding yet another is not going to fly. Hopefully this situation will change soon, though, since we should be able to use BuildStream to replace the current JSON manifest that's used to generate the Flatpak runtimes and keep everything up to date automatically. In the meantime, this is a problem to be aware of.

#PeruRumboGSoC2018 – Session 5

Today we held another session of the #PeruRumboGSoC2018 program at CCPP UNI. It was one of the longest sessions we have had so far.

We managed to sort out the different packages and versions needed to work with WebKit and GTK on Fedora 26 (one of the students has a 32-bit arch) and on Fedora 27. One of the accomplishments of the day was coding a language selector using GtkListBox and GtkLinkButton with GTK and Python; the details are here.
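For reference, here is a minimal sketch of that kind of language selector in Python and GTK 3: a GtkListBox whose rows each carry a GtkLinkButton. The language names and URLs below are placeholders, not the ones used in the session.

#!/usr/bin/env python3
# Minimal language selector sketch: a GtkListBox of GtkLinkButton rows.
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

LANGUAGES = {
    'Español': 'https://www.gnome.org/',
    'English': 'https://www.gnome.org/',
    'Quechua': 'https://www.gnome.org/',
}

win = Gtk.Window(title='Language Selector')
win.connect('destroy', Gtk.main_quit)

listbox = Gtk.ListBox()
for name, url in LANGUAGES.items():
    row = Gtk.ListBoxRow()
    row.add(Gtk.LinkButton(uri=url, label=name))
    listbox.add(row)

# Report which language the user picked.
listbox.connect('row-activated',
                lambda box, row: print('Selected:', row.get_child().get_label()))

win.add(listbox)
win.show_all()
Gtk.main()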

The newcomers bug list on GitLab was also reviewed today, especially for the applications gnome-todo and gnome-music. In addition, Fedora Docs Dev, system-config-language and an Elasticsearch implementation were evaluated and discussed as possible GSoC 2018 proposals for Fedora; thanks @zodiacfirework. This is the final chart of the participants' effort. In the picture we have Cristian (18) as @pystudent1913 and Fiorella (21) as @aweba; they are the top two! 🙂 We shared lunch and some food in the afternoon. Thanks again to our sponsors GNOME, Fedora and the Linux Foundation for supporting this challenge! Go go go PeruRumboGSoC2018!



December 16, 2017

Librsvg 2.40.20 is released

Today I released librsvg 2.40.20. This will be the last release in the 2.40.x series, which is deprecated effective immediately.

People and distros are strongly encouraged to switch to librsvg 2.41.x as soon as possible. This is the version that is implemented in a mixture of C and Rust. It is 100% API and ABI compatible with 2.40.x, so it is a drop-in replacement for it. If you or your distro can compile Firefox 57, you can probably build librsvg-2.41.x without problems.

Some statistics

Here are a few runs of loc, a tool to count lines of code, on librsvg. The output is trimmed by hand to include only the C and Rust files.

This is 2.40.20:
-------------------------------------------------------
 Language      Files   Lines   Blank   Comment    Code
-------------------------------------------------------
 C                41   20972    3438      2100   15434
 C/C++ Header     27    2377     452       625    1300

This is 2.41.latest (the master branch):
-------------------------------------------------------
 Language      Files   Lines   Blank   Comment    Code
-------------------------------------------------------
 C                34   17253    3024      1892   12337
 C/C++ Header     23    2327     501       624    1202
 Rust             38   11254    1873       675    8706

And this is 2.41.latest *without unit tests*, just "real source code":
-------------------------------------------------------
 Language      Files   Lines   Blank   Comment    Code
-------------------------------------------------------
 C                34   17253    3024      1892   12337
 C/C++ Header     23    2327     501       624    1202
 Rust             38    9340    1513       610    7217

Summary

Not counting blank lines nor comments:

  • The C-only version has 16734 lines of C code.

  • The C-only version has no unit tests, just some integration tests.

  • The Rust-and-C version has 13539 lines of C code, 7217 lines of Rust code, and 1489 lines of unit tests in Rust.

As for the integration tests:

  • The C-only version has 64 integration tests.

  • The Rust-and-C version has 130 integration tests.

The Rust-and-C version supports a few more SVG features, and it is A LOT more robust and spec-compliant with the SVG features that were supported in the C-only version.

The C sources in librsvg are shrinking steadily. It would be incredibly awesome if someone could run some git filter-branch magic with the loc tool and generate some pretty graphs of source lines vs. commits over time.
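In lieu of the filter-branch magic, here is a rough Python sketch of the idea: sample commits along the history, run loc at each one, and record the Code column for C and Rust so the numbers can be graphed. It assumes a throwaway clone of librsvg with loc in the PATH; the parsing simply matches the table format shown above, and the sampling rate and branch name are arbitrary choices of mine.

#!/usr/bin/env python3
# Rough sketch: lines of C and Rust code over librsvg's git history.
import csv
import subprocess

def run(*args):
    return subprocess.run(args, check=True, stdout=subprocess.PIPE,
                          universal_newlines=True).stdout

# Oldest-to-newest commits on the main line of development.
commits = run('git', 'rev-list', '--reverse', '--first-parent', 'master').split()

with open('loc-history.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['commit', 'c_code', 'rust_code'])
    for sha in commits[::50]:            # sample every 50th commit to keep it quick
        run('git', 'checkout', '--quiet', sha)
        counts = {'C': 0, 'Rust': 0}
        for line in run('loc').splitlines():
            cols = line.split()
            # Data rows look like: "C   41   20972   3438   2100   15434"
            if len(cols) == 6 and cols[0] in counts and cols[-1].isdigit():
                counts[cols[0]] += int(cols[-1])
        writer.writerow([sha, counts['C'], counts['Rust']])

run('git', 'checkout', '--quiet', 'master')

The resulting CSV can then be fed to whatever plotting tool you like to get the source-lines-versus-time graph.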

December 15, 2017

Some predictions for 2018

So I spent a few hours polishing my crystal ball today, and here are some predictions for Linux on the desktop in 2018. The advantage, of course, of publishing these now is that I can later selectively quote the ones I got right to prove my brilliance, and the internet can selectively quote the ones I got wrong to prove my stupidity :)

Prediction 1: Meson becomes the defacto build system of the Linux community

Meson has been going from strength to strength this year, and a lot of projects
which passed on earlier attempts to replace autotools have adopted it. I predict this
trend will continue in 2018 and that by the end of the year everyone will agree that Meson
has replaced autotools as the Linux community's build system of choice. That said, I am not
convinced the Linux kernel itself will adopt Meson in 2018.

Prediction 2: Rust puts itself on a clear trajectory to replace C and C++ for low level programming

Another rising star of 2017 is the programming language Rust. And while its pace of adoption
will be slower than Meson's, I do believe that by the time 2018 comes to a close the general opinion will be
that Rust is the future of low-level programming, replacing old favorites like C and C++. Major projects
like GNOME and GStreamer are already adopting Rust at a rapid pace, and I believe even more projects will
join them in 2018.

Prediction 3: Apple's decline as a PC vendor becomes obvious

Ever since Steve Jobs died it has become quite clear, in my opinion, that the emphasis
on the traditional desktop is fading at Apple. The pace of hardware refreshes seems
to be slowing and MacOS X seems to be getting more and more stale. Some pundits have already
started pointing this out, and I predict that in 2018 Apple will no longer be considered the
cool kid on the block by people looking for laptops, especially among the tech-savvy crowd.
Hopefully a good opportunity for Linux on the desktop to assert itself more.

Prediction 4: Traditional distro packaging for desktop applications
will start fading away in favour of Flatpak

From where I am standing, I think 2018 will be the breakout year for Flatpak as a replacement
for getting your desktop applications as RPMs or debs. I predict that by the end of 2018 more or
less every Linux desktop user will be running at least one Flatpak on their system.

Prediction 5: Linux Graphics competitive across the board

I think 2018 will be a breakout year for Linux graphics support. I think our GPU drivers and APIs will be competitive with any other platform, both in completeness and in performance. By the end of 2018 I predict that you will see Linux game ports by major porting houses
like Aspyr and Feral that perform just as well as their Windows counterparts. What is more, I also predict that by the end of 2018 discrete graphics will be considered a solved problem on Linux.

Prediction 6: H265 will be considered a failure

I predict that by the end of 2018 H265 will be considered a failed codec effort and the era of royalty-bearing media codecs will effectively start coming to an end. H264 will be considered the last successful royalty-bearing codec, and the new codecs coming out will
all be open source and royalty free.
