GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

March 28, 2017

Emacs as C IDE and JHBuild

Although Builder clearly is The Future as the GNOME IDE, I still do all my coding in Emacs, mostly because I have been using it for such a long time that my brain is used to all the shortcuts and workflows. But Emacs can be a good IDE too. The most obvious everyday features that I want from an IDE are good source code navigation and active assistance while editing. In the first category are tasks like jumping to a symbol's definition, finding all callers of a function, and such things. For editing, auto-completion, immediate warning and error reporting, and semantics-aware refactoring are a must. Specifically for GNOME-related development, I need all of this to also work with JHBuild.

Auto-completion via irony-mode

Emacs can do all of this with a combination of packages: RTags indexes C/C++ source code and provides all sorts of functionality on top of that, like follow-symbol, find-references, rename-symbol, and it can even fix obvious errors for you (via clang's "Fix-It Hints"). Auto-completion and "online" diagnostics are currently handled via irony-mode (through company-irony and flycheck-irony); RTags could do both too, but I use irony-mode mostly for historic reasons and it works quite well. Additionally, irony-mode has integration with eldoc-mode, which will show function signatures in the mode-line.

Eldoc integration

Compile database compiler wrapper

Both of these packages use (lib)clang internally, which in turn needs to know the right compile flags for all source files. CMake and other modern build systems can generate a compile_commands.json file that contains that information. For automake/autoconf based projects, a wrapper around gcc can in theory be used to generate the file. But for JHBuild that approach does not work, because the build directory is not the source directory, so the wrapper generates the files in the wrong place. To work around this I wrote a special gcc wrapper and a small set of tools, called cdcc (for compile database cc). The cdcc-gcc wrapper will store the compile flags in an SQLite database, and the cdcc-gen command can then be called with a path to the source directory to generate the corresponding compile_commands.json. The easiest way to use cdcc with JHBuild is to put the following in your ~/.config/jhbuildrc (this includes a fallback if it is not installed):

if spawn.find_executable('cdcc-gcc') is not None:
    os.environ['CC'] = 'cdcc-gcc'
    os.environ['CXX'] = 'cdcc-g++'

After the build is done, the compile_commands.json for each module can then be generated by running cdcc-gen * in the JHBuild checkout root. The Emacs package cmake-ide is used to automatically set up RTags and friends with the correct settings if it detects a compile_commands.json file.
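
As an aside, the idea behind a compile-database wrapper like cdcc-gcc can be sketched in a few lines of Python. This is only an illustration of the approach, not the actual cdcc code; the database location, table layout and the hard-coded "gcc" are all made up for the example. It records the working directory, source file and argument list in an SQLite table and then hands off to the real compiler:

#!/usr/bin/env python3
# Illustrative sketch of a "compile database" compiler wrapper, in the spirit
# of cdcc-gcc (NOT the real cdcc code): record each compile command in an
# SQLite database, then run the real compiler with the original arguments.
import os
import sqlite3
import subprocess
import sys

DB_PATH = os.path.expanduser("~/.cache/compile-commands.sqlite")  # assumed location

args = sys.argv[1:]
sources = [a for a in args if a.endswith((".c", ".cc", ".cpp"))]

os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
db = sqlite3.connect(DB_PATH)
db.execute("CREATE TABLE IF NOT EXISTS commands (directory TEXT, file TEXT, command TEXT)")
for src in sources:
    db.execute("INSERT INTO commands VALUES (?, ?, ?)",
               (os.getcwd(), os.path.abspath(src), "gcc " + " ".join(args)))
db.commit()

# Hand off to the real compiler so the build behaves exactly as before.
sys.exit(subprocess.call(["gcc"] + args))

A companion tool can then query that table and write out the compile_commands.json entries whose files live under a given source directory, which is roughly what cdcc-gen does.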

Installation

Fedora RPMs of RTags and cdcc can be found in my copr. The rdm server for RTags can easily be started via systemd socket activation; instructions are here. I install the irony-server binary locally in my ~/.emacs.d folder via a custom script.
I load all Emacs packages with use-package, you can look at my init.el for details.

GdMainBox — the new content-view widget in libgd

Now that I have written at length about the new fluid overview grids in GNOME Photos, it is time to talk a bit about the underlying widgets doing the heavy lifting. Hopefully some of my fellow GNOME developers will find this interesting.

Background

Ever since its incubation inside Documents, libgd has had a widget called GdMainView. It is the one which shows the grid or list of items in the new GNOME applications — Boxes, Photos, Videos, etc. It is where drag-n-drop, rubber band selection and the selection mode pattern are implemented.

However, as an application developer, I think its greatest value is in making it trivial to switch the main content view from a grid to a list and back. No need to worry about the differences in how the data will be modelled or rendered. No need to worry about all the dozens of little details that arise when the main UI of an application is switched like that. For example, this is all that the JavaScript code in Documents does:

  let view = new Gd.MainView({ shadow_type: Gtk.ShadowType.NONE });
  …
  view.view_type = Gd.MainViewType.LIST; // use a list
  …
  view.view_type = Gd.MainViewType.ICON; // use a grid


Unfortunately, GdMainView is based on GtkIconView and GtkTreeView. By this time we all know that GtkIconView has various performance and visual problems. While GtkTreeView might not be slow, the fact that it uses an entirely separate class of visual elements that are not GtkWidgets limits what one can render using it. That’s where GdMainBox comes in.

GdMainBox

GdMainBox is a replacement for GdMainView that is meant to use GtkFlowBox and GtkListBox instead.

GListModel *model;
GtkWidget *view;

model = /* a GListModel containing GdMainBoxItems */
view = gd_main_box_new (GD_MAIN_BOX_ICON);
gd_main_box_set_model (GD_MAIN_BOX (view), model);
g_signal_connect (view,
                  "item-activated",
                  G_CALLBACK (item_activated_cb),
                  data);
g_signal_connect (view,
                  "selection-mode-request",
                  G_CALLBACK (selection_mode_request_cb),
                  data);
g_signal_connect (view,
                  "selection-changed", /* not view-selection-changed */
                  G_CALLBACK (selection_changed_cb),
                  data);


If you are familiar with the old GdMainView widget, you will notice the striking similarity with it. Except one thing. The data model.

GdMainView expected applications to offer a GtkTreeModel with a certain number of columns arranged in a certain order with certain type of values in them. Nothing surprising since both GtkIconView and GtkTreeView rely on the existence of a GtkTreeModel.

In the world of GtkListBoxes and GtkFlowBoxes, the data model is GListModel, a list-like collection of GObjects [*]. Therefore, instead of columns in a table, they need objects with certain properties, and methods to access them. These are codified in the GdMainBoxItem interface which every rendered object needs to implement. You can look at this commit for an example. A nice side-effect is that an interface is inherently more type-safe than a GtkTreeModel whose expected layout is expressed as enumerated types. The compiler cannot assert that a certain column has the expected data type, which left us vulnerable to bugs caused by inadvertent changes to either libgd or an application.

But why a new widget?

You can definitely use a GtkFlowBox or GtkListBox directly in an application, if that’s what you prefer. However, the vanilla GTK+ widgets don’t offer all the necessary features. I think there is value in consolidating the implementation of those features in a single place that can be shared across modules. It serves as a staging area for prototyping those features in a reasonably generic way so that they can eventually be moved to GTK+ itself. If nothing else, I didn’t want to duplicate the same code across the two applications that I am responsible for — Documents and Photos.

One particularly hairy thing that I encountered was the difference between how selections are handled by the stock GtkFlowBox and the intended behaviour of the content-view. Other niceties on offer are expanding thumbnails, selection mode, and drag-n-drop.

If you do decide to directly use the GTK+ widgets, then I would suggest that you at least use the same CSS style classes as GdMainBox — “content-view” for the entire view and “tile” for each child.

The future

I mentioned changing lists to grids and vice versa. Currently, GdMainBox only offers a grid of icons because Photos is the only user and it doesn’t offer a list view. That’s going to change when I port Documents to it. When that happens, changing the view is going to be just as easy as it used to be.

gd_main_box_set_view_type (GD_MAIN_BOX (view), GD_MAIN_BOX_LIST);



[*] Yes, it’s possible to use them without a model, but having a GListModel affords important future performance optimizations, so we will ignore that possibility.


2017-03-28 Tuesday.

  • Practice with the babes, mail. Built ESC stats, speculative crashreporter fix while doing it - a crash fix per week keeps the doctor at bay (or something). Sync with Andars, poked at Azorpid's proposal; nice. Lunch.
  • Mail chew; toilet flush broke: stainless pin had managed to wear itself through the cheap aluminium handle casting, drilled another hole a little further back, re-assembled & hoped. Poked at another mysterious post-dispose crasher related to context menus for Aron.
  • If someone endorses you on LinkedIn for the skill of 'subversion' - is that a good thing ?

Happy Document Freedom Day

It is with great pleasure that we are announcing the Document Freedom Day celebration. As usual the event is celebrated on the last Wednesday of March (March 29th), and a few teams, such as the people in Taiwan, have already celebrated earlier!

Due to the late announcement we are thinking of giving people one more month to prepare for the event and running it on Wednesday April 26th!

Make sure you mention the date you’re running the event on your wiki page.

We’ve migrated most of the content from the FSFE website to ours but plan on filling up the wikis as we have done with other events to provide tips & tricks for the years to come.

We would also like to thank our sponsors Google and Freiheit, who have always backed us up and deserve a big thank you for that!

So without further ado celebrate Document Freedom Day fully and enjoy every moment of it!

GUADEC accommodation

At this year’s GUADEC in Manchester we have rooms available for you right at the venue in lovely modern student townhouses. As I write this there are still some available to book along with your registration. In a couple of days we have to give the University final numbers for how many rooms we want, so it would help us out if all the folks who want a room there could register and book one now, if you haven’t already done so! We’ll have some available for later booking, but we have to pay up front for them now, so we can’t reserve too many.

Rooms for sponsored attendees are reserved separately so you don’t need to book now if your attendance depends on travel sponsorship.

If you are looking for a hotel, we have a hotel booking service run by Visit Manchester where you can get the best rates from various hotels right up until June 2017. (If you need to arrive before Thursday 27th July then you can contact Visit Manchester directly for your booking at abs@visitmanchester.com.)

We have had some great talk submissions already but there is room for plenty more, so make sure you also submit your idea for a talk before 23rd April!


March 27, 2017

Testing LibreOffice 5.3 Notebookbar

I teach an online CSCI class about usability. The course is "The Usability of Open Source Software" and provides a background on free software and open source software, and uses that as a basis to teach usability. The rest of the class is a pretty standard CSCI usability class. We explore a few interesting cases in open source software as part of our discussion. And using open source software makes it really easy for the students to pick a program to study for their usability test final project.

I structured the class so that we learn about usability in the first half of the semester, then we practice usability in the second half. And now we are just past the halfway point.

Last week, my students worked on a usability test "mini-project." This is a usability test with one tester. By itself, that's not very useful. But the intention is for the students to experience what it's like to moderate their own usability test before they work on their usability test final project. In this way, the one-person usability test is intended to be a "dry run."

For the one-person usability test, every student moderates the same usability test on the same program. We are using LibreOffice 5.3 in Notebookbar View in Contextual Groups mode. (And LibreOffice released version 5.3.1 just before we started the usability test, but fortunately the user interface didn't change, at least in Notebookbar-Contextual Groups.) Students worked together to write scenario tasks for the usability test, and I selected eight of those scenario tasks.

By using the same scenario tasks on the same program, with one tester each, we can combine results to build an overall picture of LibreOffice's usability with the new user interface. Because the test was run by different moderators, this isn't statistically useful if you are writing an academic paper, and it's of questionable value as a qualitative measure. But I thought it would be interesting to share the results.

First, let's look at the scenario tasks. We started with one persona: an undergraduate student at a liberal arts university. Each student in my class contributed two use scenarios for LibreOffice 5.3, and three scenario tasks for each scenario. That gave a wide field of scenario tasks. There was quite a bit of overlap. And there was some variation in quality, with some great scenario tasks and some not-so-great scenario tasks.

I grouped the scenario tasks into themes, and selected eight scenario tasks that suited a "story" of a student working on a paper: a simple lab write-up for an Introduction to Physics class. I did minimal editing of the scenario tasks; I tried to leave them as-is. Most of the scenario tasks were of high quality. I included a few not-great scenario tasks so students could see how the quality of the scenario task can impact the quality of your results. So keep that in mind.

These are the scenario tasks we used. In addition to these tasks, students provided a sample lab report (every tester started with the same document) and a sample image. Every test was run in LibreOffice 5.3 or 5.3.1, which was already set to use Notebookbar View in Contextual Groups mode:
1. You’re writing a lab report for your Introduction to Physics class, but you need to change it to meet your professors formatting requirements. Change your text to use Times New Roman 12 pt. and center your title

2. There is a requirement of double spaced lines in MLA. The paper defaults to single spaced and needs to be adjusted. Change paper to double spaced.

3. After going through the paragraphs, you would like to add your drawn image at the top of your paper. Add the image stored at velocitydiagram.jpg to the top of the paper.

4. Proper header in the Document. Name, class, and date are needed to receive a grade for the week.

5. You've just finished a physics lab and have all of your data written out in a table in your notebook. The data measures the final velocity of a car going down a 1 meter ramp at 5, 10, 15, 20, and 25 degrees. Your professor wants your lab report to consist of a table of this data rather than hand-written notes. There’s a note in the document that says where to add the table.

[task also provided a 2×5 table of sample lab data]

6. You are reviewing your paper one last time before turning it into your professor. You notice some spelling errors which should not be in a professional paper. Correct the multiple spelling errors.

7. You want to save your notes so that you can look back on them when studying for the upcoming test. Save the document.

8. The report is all done! It is time to turn it in. However, the professor won’t accept Word documents and requires a PDF. Export the document as a PDF.
If those don't seem very groundbreaking, remember the point of the usability test "mini-project" was for the students to experience moderating their own usability test. I'd rather they make mistakes here, so they can learn from them before their final project.

Since each usability test was run with one tester, and we all used the same scenario tasks on the same version of LibreOffice, we can collate the results. I prefer to use a heat map to display the results of a usability test. The heat map doesn't replace the prose description of the usability test (what worked vs. what were the challenges) but the heat map does provide a quick overview that allows focused discussion of the results.

In a heat map, each scenario task is on a separate row, and each tester is in a separate column. At each cell, if the tester was able to complete the task with little or no difficulty, you add a green block. Use yellow for some difficulty, and orange for greater difficulty. If the tester really struggled to complete the task, use a red block. Use black if the task was so difficult the tester was unable to complete the task.
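
If you want to produce such a chart programmatically, the underlying data is just a tasks-by-testers matrix of difficulty ratings. The sketch below uses made-up scores and matplotlib purely for illustration; it is not how the heat map shown here was made, and the task and tester labels are placeholders:

# Minimal heat-map sketch (made-up scores, not the real study data):
# rows are scenario tasks, columns are testers, and each cell holds a
# difficulty score from 0 (little or no difficulty) to 4 (not completed).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# 0=green, 1=yellow, 2=orange, 3=red, 4=black
difficulty = np.array([
    [0, 1, 0, 0],   # task 1: change font and center the title
    [2, 3, 1, 4],   # task 2: set double spacing
    [0, 0, 1, 0],   # task 3: insert an image
])

cmap = ListedColormap(["green", "yellow", "orange", "red", "black"])
fig, ax = plt.subplots()
ax.imshow(difficulty, cmap=cmap, vmin=0, vmax=4, aspect="auto")
ax.set_xlabel("Tester")
ax.set_ylabel("Scenario task")
ax.set_xticks(range(difficulty.shape[1]))
ax.set_xticklabels([f"T{i + 1}" for i in range(difficulty.shape[1])])
ax.set_yticks(range(difficulty.shape[0]))
ax.set_yticklabels([f"Task {i + 1}" for i in range(difficulty.shape[0])])
plt.show()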

Here's our heat map, based on fourteen students each moderating a one-person usability test (a "dry run" test) using the same scenario tasks for LibreOffice 5.3 or 5.3.1:


A few things about this heat map:

Hot rows show you where to focus

Since scenario tasks are on rows, and testers are on columns, you read a heat map by looking across each row and looking for lots of "hot" items. Look for lots of black, red, or orange. Those are your "hot" rows. And rows that have a lot of green and maybe a little yellow are "cool" rows.

In this heat map, I'm seeing the most "hot" items in setting double space (#2), adding a table (#5) and checking spelling (#6). Maybe there's something in adding a header (#4) but this scenario task wasn't worded very well, so the problems here might be because of the scenario task.

So if I were a LibreOffice developer, and I did this usability test to examine the usability of MUFFIN, I would probably put most of my focus on making it easier to set double spacing, add tables, and check spelling. I wouldn't worry too much about adding an image, since that's mostly green. Same for saving, and saving as PDF.

The heat map doesn't replace prose description of themes

What's behind the "hot" rows? What were the testers trying to do, when they were working on these tasks? The heat map doesn't tell you that. The heat map isn't a replacement for prose text. Most usability results need to include a section about "What worked well" and "What needs improvement." The heat map doesn't replace that prose section. But it does help you to identify the areas that worked well vs the areas that need further refinement.

That discussion of themes is where you would identify that task 4 (Add a header) wasn't really a "hot" row. It looks interesting on the heat map, but this wasn't a problem area for LibreOffice. Instead, testers had problems understanding the scenario task. "Did the task want me to just put the text at the start of the document, or at the top of each page?" So results were inconsistent here. (That was expected, as this "dry run" test was a learning experience for my students. I intentionally included some scenario tasks that weren't great, so they would see for themselves how the quality of their scenario tasks can influence their test.)

Different versions are grouped together

LibreOffice released version 5.3.1 right before we started our usability test. Some students had already downloaded 5.3, and some ended up with 5.3.1. I didn't notice any user interface changes for the UI paths exercised by our scenario tasks, but did the new version have an impact?

I've sorted the results based on 5.3.1 off to the right. See the headers to see which columns represent LibreOffice 5.3 and which are 5.3.1. I don't see any substantial difference between them. The "hot" rows from 5.3 are still "hot" in 5.3.1, and the "cool" rows are still "cool."

You might use a similar method to compare different iterations of a user interface. As your program progresses from 1.0 to 1.1 to 1.2, etc, you can compare the same scenario tasks by organizing your data in this way.

You could also group different testers together

The heat map also lets you discuss testers. What happened with tester #7? There's a lot of orange and yellow in that column, even for tasks (rows) that fared well with other testers. In this case, the interview revealed that tester was having a bad day, and came into the test feeling "grumpy" and likely was impatient about any problems encountered in the test.

You can use these columns to your advantage. In this test, all testers were drawn from the same demographic: a university student around 18-22 years old, who had some to "moderate" experience with Word or Google Docs, but not LibreOffice.

But if your usability test intentionally included a variety of experience levels (a group of "beginner" users, "moderate" users, and "experienced" users) you might group these columns appropriately in the heat map. So rather than grouping by version (as above) you could have one set of columns for "beginner" testers, another set of columns for "moderate" testers and a third group for "experienced" testers.

2017-03-27 Monday.

  • Mail chew; consultancy call, bit of patch review. Lunch. Plodded through more admin in the afternoon, ESC call, contract bits.

summing up 85

summing up is a recurring series on topics & insights on how we can make sense of computers, which compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

Admiral Shovel and the Toilet Roll (transcript), by James Burke

In order to rectify the future I want to spend most of my time looking at the past because there’s nowhere else to look: (a) because the future hasn’t happened yet and never will, and (b) because almost all the time in any case the future is not really much more than the past with extra bits attached.

To predict you extrapolate on what’s there already. We predict the future from the past, working within the local context from within the well-known box, which may be why the future has so often in the past been a surprise. I mean, James Watt’s steam engine was just supposed to drain mines. The printing press was just supposed to print a couple of Bibles. The telephone was invented by Alexander Graham Bell just to teach deaf people to talk. The computer was made specifically to calculate artillery shell trajectories. Viagra was just supposed to be for angina. I mean; what else?

current technology is on a path to fundamentally change how our society operates. nevertheless we fail to predict the impact of technology in our society and culture. an excellent argument for the importance of an interdisciplinary approach to innovation in technology.

Thought as a Technology, by Michael Nielsen

It requires extraordinary imagination to conceive new forms of visual meaning. Many of our best-known artists and visual explorers are famous in part because they discovered such forms. When exposed to that work, other people can internalize those new cognitive technologies, and so expand the range of their own visual thinking.

Images such as these are not natural or obvious. No-one would ever have these visual thoughts without the cognitive technologies developed by Picasso, Edgerton, Beck, and many other pioneers. Of course, only a small fraction of people really internalize these ways of visual thinking. But in principle, once the technologies have been invented, most of us can learn to think in these new ways.

a marvellous article on how user interfaces impact new ways of thinking of the world. technological progress always happens in a fixed context and is almost always a form of optimization. a technological innovation however, would have to happen outside of this given, fixed context and existing rules.

The Long Web, by Jeremy Keith

Next time somebody says to you, “The internet never forgets”, just call bullshit on that. It’s absolute bollocks! Look at the data. The internet forgets all the time. The average lifespan of a web page is months, and yet people are like, “Oh, you’ve got to be careful what you put online, it’ll be there forever: Facebook never forgets, Google never forgets.” No, I would not entrust our collective culture, our society’s memory to some third party servers we don’t even know.

What we need is thinking about our culture, about our society, about our preserving what we’re putting online, and that’s kind of all I ask of you, is to think about The Long Web, to think about the long term consequences of what we’re doing because I don’t think we do it enough.

It isn’t just about what we’re doing today. We are building something greater than the Library of Alexandria could ever have been and that is an awesome—in the true sense of the word—responsibility.

with the web we're building something greater than the library of alexandria. to do this well we have to build our sites for the long haul. it’s something we don’t think about enough in the rush to create the next thing on the web.

Arranging Install Fest 2017

Next Thursday, at the auditorium of the School of Computer Science, we are going to install FEDORA + GNOME for more than 200 new students of the university, since during the first year they study algorithms, C programming and GNU/Linux in general.

Thanks to the authorities of the National University of Engineering for arranging all the proper permissions, and also to the company Softbutterfly, which will provide us with a website to document all the Linux events that we have held in universities during the last years.

Special thanks to our designer Leyla Marcelo, who has designed some new stickers for GNOME and Fedora. Balloons and t-shirts have also been prepared for this new event! 🙂



March 26, 2017

Ten years of Codethink

Spring is here and it is the 10th anniversary celebration of Codethink.  Nobody could have orchestrated it this way but we also have GUADEC happening here in Manchester in a few months and it’s the 20th anniversary of GNOME.  All roads lead to Manchester in 2017!

The company is celebrating its anniversary in various ways: cool new green-on-black T-shirts, a 10 years mug for everyone, and perhaps more significantly a big sophisticated party with a complicated cake.

The party was fun, with a lot of old faces, some of whom had travelled quite far to be there. The company was and still is a mix of very interesting and weird people, and although we spend most of our time in the same room studiously not talking to each other, we do know how to celebrate things sometimes!

It was odd in a way being at a corporate party with fancy food and a function band and 150 guests in an enormous monastery, given that back when I joined the entire Manchester staff could go for lunch together and all sit at the same table. The first company party I went to was in Paul Sherwood’s conservatory; in fact the first few of them were there. It’s a good sign for sure that the company has quadrupled (or more) in size in the ensuing 6 years.

In hindsight I was quite lucky to have a world-class open source software house more or less on my doorstep. I spent a long time trying to avoid working in software (and trying to avoid working at all), but I did do a Summer of Code project back in 2009 or 2010 mentored by Allison Lortie, who then worked for Codethink and noted that I lived about 5000 miles closer to her office than she did. It was an obvious choice to apply there when I graduated from university, and luckily it was just at a time when they were hiring, so I didn’t have to spend too long living on 50p a week and eating shoes for dinner. It was very surreal for the first few months of working there, as a world which I’d previously only been involved in via a computer turned into a world of real people (plus lots of computers); in fact the whole first year was pretty surreal, what with also adapting to Manchester life and discovering how much craziness there is underneath the surface of the technology industry.

I had no idea what the company did beforehand, and even now the Codethink website doesn’t give too much away. I saw contributions to Free Software projects such as Tracker and dconf (and various other things that were happening 7 years ago) but I didn’t know what kind of business model came out of that activity. It turned out that neither did anyone else at that point; the company grew out of consulting work from Nokia, but the Elopcalypse had just happened, and so on starting I got involved in all sorts of different things as we looked for work in different areas: everything from boot speed optimizations and hardware control, to compiler testing and bugfixing, build tools, various automated testing setups, and more build tools, to Python and Ruby webapps, data visualisations, OpenStack, systems administration, report writing and more. Just before Christmas 2011 I was offered the chance to go and work in Korea, the catch being that I had to go in 2 days’ time, and the following year I spent another memorable month there (again with about 2 days’ notice). I also had month-long stints in Bulgaria and Berlin, although these were actually planned in advance, plus all sorts of conferences as the company started to sponsor attendance and a couple of days off for such things. Most importantly of course I got involved in rock climbing, which is now pretty much my favourite thing.

For a long time now it’s felt like the company has a solid business model, and while the work we do is still spread over different sectors I think I can sum it up as bridging the gap between the worlds of corporate software projects and open-source software projects. We have some great customers who engage us to do work upstream on Free Software projects, which is ideal, but far from everything we work on is Free Software, and we also work in various fields that I’m pretty unexcited about such as automotive and finance. It’s very hard to make money though if you spend all your time working on something that you then give away, so it’s a necessary compromise in my eyes.

And even in entirely closed source projects, having knowledge of all the great Free Software that is available gives us an advantage. There are borderline-unusable proprietary tools still being sold by major vendors to do things like version control, there are unreliable proprietary hardware drivers being sold for hardware that has a functional and better open source driver, and there are countless projects using medieval kernels, obsolete operating systems and all sorts of other craziness. Working for a company that trusts its employees is also pretty important: I meet operating systems engineers who are working on Linux-based devices whose corporate IT departments force them to use Windows, so they are trusted to maintain the operating system used in millions of cars but not to maintain the operating system on their laptops.

One thing Codethink still lacks is a model for providing engineer time to help with the ongoing maintenance and development of different free software projects. There have been attempts at doing so within the company, and I acknowledge it’s very difficult because the drop-in, drop-out nature of a consultant engineer’s time isn’t compatible with the ongoing time commitment required to be a reliable maintainer. Plus good maintenance skills take years to develop, and either require someone experienced with a lot of free time to teach them to you, or they require you to maintain a real-world project which you mess up continually and learn every lesson the hard way. Of course open source work that comes out of customer projects is highly regarded, and if you’re lucky enough to have unallocated time it can sometimes be used to work through the backlog of bug fixes and feature additions that one inevitably develops, as a full-time software engineer, for the different tools one uses. Again, it amazes me how many companies manage to actively prevent their developers from pushing things upstream.

We have been maintaining Baserock for years now (and many people have learned lots of lessons the hard way from it :-); BuildStream development is ongoing and I’m even still hopeful we can achieve the original goal of making it an order of magnitude easier to produce a high quality Free Software operating system. I should note that Codethink also contributes financially to conferences and projects in various ways.

I should also point out that we are still hiring. This wasn’t intended to be a marketing essay in which I talked up how great the company is, but it kinda has turned out that way. I guess you can take that as a good sign. My real underlying goal was to make it a bit clearer what it’s like to work here, which I hope I’ve done a little.

I am quite proud of the company’s approach to hiring, we take in many graduates who show promise but never got involved in community-driven software projects or never really even got into programming except as a module in a science degree or whatever. Of course we also welcome people who do have relevant experience but they can be hard to find and focusing on them can also have an undesired effect of selecting based on certain privileges. I was debating with Tristan last week whether a consultancy is actually a good place for inexperienced developers to be, there is the problem that you don’t get to see the results of your work very often, you often move between projects fairly frequently and so you might not develop the intuition needed for being a good software maintainer, which is a complex topic but boils down to something like: “Is this going to cause problems in 5 years time?” There’s no real way around this, all we can do is give people a chance in an environment with a strong Free Software culture and that is pretty much what we do.

Ideally here I’d end with some photos from the party, but I’m terrible at taking photos so it’s just all the backs of people’s heads and lurid green lighting. Instead here’s a photo of a stranger taking a photo of me while I was out biking round the river Mersey this afternoon.


Cake photo by Robert Marshall


Debian package dependency browser

The Debian package browser is great, but it is somewhat limited. As an example, you can get dependencies and build dependencies for any package, but not reverse dependencies (i.e. a list of all packages that depend or build-depend on a given package).

I wrote a simple program that downloads the package repository file, parses it and creates an SQLite DB with all the link information and a simple Flask web app to browse it. This is what it looks like when showing the information page on Flex.
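
The core of that kind of analysis fits in a short script. The sketch below is only an illustration, not the actual program behind the numbers that follow; the mirror URL is an assumption, and it only looks at the Depends field, ignoring version constraints beyond splitting on ',' and '|':

# Rough sketch: parse a Debian Packages index and count reverse dependencies.
import collections
import gzip
import re
import urllib.request

PACKAGES_URL = "http://deb.debian.org/debian/dists/stable/main/binary-amd64/Packages.gz"  # assumed mirror

def parse_packages(text):
    """Yield (package, [dependencies]) for each stanza in a Packages file."""
    for stanza in text.split("\n\n"):
        fields = dict(re.findall(r"^(\S+): (.*)$", stanza, re.MULTILINE))
        if "Package" not in fields:
            continue
        deps = []
        for clause in fields.get("Depends", "").split(","):
            for alternative in clause.split("|"):
                name = alternative.strip().split(" ")[0]
                if name:
                    deps.append(name)
        yield fields["Package"], deps

raw = urllib.request.urlopen(PACKAGES_URL).read()
text = gzip.decompress(raw).decode("utf-8", errors="replace")

reverse = collections.Counter()
for package, deps in parse_packages(text):
    for dep in set(deps):
        reverse[dep] += 1

for name, count in reverse.most_common(20):
    print(name, count)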


In addition you can do all sorts of analysis on the data. As an example, here are the 20 packages that are depended on the most, followed by the number of packages that depend on them:

libc6 19759
libstdc++6 6362
libgcc1 5814
libglib2.0-0 2906
python 2802
zlib1g 2620
libcairo2 1410
libgdk-pixbuf2.0-0 1385
libpango-1.0-0 1303
libpangocairo-1.0-0 1139
libqt5core5a 1122
libatk1.0-0 1075
libgtk2.0-0 1010
libxml2 979
libfreetype6 976
perl 880
libqt5gui5 845
libfontconfig1 834
python3 825
libqtcore4 810

And here is the same for build-dependencies:

debhelper 27189
cdbs 3130
dh-autoreconf 3098
dh-python 2959
pkg-config 2698
autotools-dev 2331
python-setuptools 2193
python-all 1888
cmake 1643
python3-setuptools 1539
dpkg-dev 1446
python3-all 1412
perl 1372
zlib1g-dev 1317
dh-buildinfo 1303
python 1302
libglib2.0-dev 1133
gem2deb 1104
default-jdk 1078
libx11-dev 973

The numbers are not exact because the parser is not perfect but as rough estimates these should be reliable. Just don't use them for anything requiring actual accuracy, ok?

The code is here. I probably won't work on it any more due to other obligations, but feel free to play around with it if you wish. 

March 24, 2017

C++ Cheat Sheet

I spend most of my time writing and reading C code, but every once in a while I get to play with a C++ project and find myself doing frequent reference checks to cppreference.com. I wrote myself the most concise cheat sheet I could that still shaved off the majority of those quick checks. Maybe it helps other fellow programmers who occasionally dabble with C++.

class ClassName {
  int priv_member;  // private by default
protected:
  int protect_member;
public:
  ClassName();  // constructor
  int get_priv_mem();  // just prototype of func
  virtual ~ClassName() {} // destructor
};

int ClassName::get_priv_mem() {  // define via scope
  return priv_member;
}

class ChildName : public ClassName, public CanDoMult {
public:
  ChildName() {
    protect_member = 0;
  } ...
};

class Square {
  friend class Rectangle; ... // can access private members
};


Containers: container_type<int>
 list -> linked list
  front(), back(), begin(), end(), {push/pop}_{front/back}(), insert(), erase()
 deque -> double ended queue
  [], {push/pop}_{front/back}(), insert(), erase(), front(), back(), begin()
 queue/stack -> adaptors over deque
  push(), pop(), size(), empty()
  front(), back() <- queue
  top() <- stack
 unordered_map -> hashtable
  [], at(), begin(), end(), insert(), erase(), count(), empty(), size()
 vector -> dynamic array
  [], at(), front(), back(), {push/pop}_back, insert(), erase(), size()
 map -> tree
  [], at(), insert(), erase(), begin(), end(), size(), empty(), find(), count()

 unordered_set -> hashtable just keys
 set -> tree just keys

Gtef 2.0 – GTK+ Text Editor Framework

Gtef is now hosted on gnome.org, and the 2.0 version has been released alongside GNOME 3.24. So it’s a good time for a new blog post on this new library.

The main goal of Gtef is to ease the development of text editors and IDEs based on GTK+ and GtkSourceView, by providing a higher-level API.

Some background information is written on the wiki.

In this blog post I’ll explain in more detail some aspects of Gtef: why a new library was needed, why it is called a framework, and one feature that I worked on during this cycle (a new file loader). There is more stuff already in the pipeline that will maybe be covered by future blog posts, so stay tuned (and see the roadmap) ;)

Iterative API design + stability guarantees

In Gtef, I want to be able to break the API at any time. Because API design is hard, it needs an iterative process. Sometimes we see possible improvements several years later. But application developers want a stable API. So the solution is simple: bumping the major version each time an API break is desirable, every 6 months if needed! Gtef 1.0 and Gtef 2.0 are parallel-installable, so an application depending on Gtef 1.0 still compiles fine.

Gtef is a small library, so it’s not a problem if there are e.g. 5 different gtef *.so loaded in memory at the same time. For a library like GTK+, releasing a new major version every 6 months would be more problematic for memory consumption and application startup time.

A concrete benefit of being able to break the API at any time: a contributor (David Rabel) wanted to implement code folding. In GtkSourceView there are several old branches for code folding, but nothing was merged because it was incomplete. In Gtef it is not a problem to merge the first iteration of a class. So even if the code folding API is not finished, there has been at least some progress: two classes have been merged in Gtef. The code will be maintained instead of bit-rotting in a branch. Unfortunately David Rabel doesn’t have the time anymore to continue contributing, but in the future if someone wants to implement code folding, the first steps are already done!

Framework

Gtef is the acronym for “GTK+ Text Editor Framework”, but the framework part is not yet finished. The idea is to provide the main application architecture for text editors and IDEs: a GtkApplication on top, containing GtkApplicationWindow’s, containing a GtkNotebook, containing tabs (GtkGrid’s), with each tab containing a GtkSourceView widget. If you look at the current Gtef API, there is only one missing subclass: GtkNotebook. So the core of the framework is almost done; I hope to finish it for GNOME 3.26. I’ll probably make the GtkNotebook part optional (if a text editor prefers only one GtkSourceView per window) or replaceable by something else (e.g. a GtkStack plus GtkStackSwitcher). Let’s see what I’ll come up with.

Of course once the core of the framework is finished, to be more useful it’ll need an implementation for common features: file loading and saving, search and replace, etc. With the framework in place, it’ll be possible to offer a much higher-level API for those features than what is currently available in GtkSourceView.

Also, it’s interesting to note that there is a (somewhat) clear boundary between GtkSourceView and Gtef: the top level object in GtkSourceView is the GtkSourceView widget, while the GtkSourceView widget is at the bottom of the containment hierarchy in Gtef. I said “somewhat” because there is also GtkSourceBuffer and GtefBuffer, and both libraries have other classes for peripheral, self-contained features.

New file loader based on uchardet

The file loading and saving API in GtkSourceView is quite low-level, it contains only the backend part. In case of error, the application needs to display the error (preferably in a GtkInfoBar) and for some errors provide actions like choosing another character encoding manually. One goal of Gtef will be to provide a simpler API, taking care of all kinds of errors, showing GtkInfoBars etc.

But how the backend works has an impact on the GUI. The file loading and saving classes in GtkSourceView come from gedit, and I’m not entirely happy with the gedit UI for file loading and saving. There are several problems, one of them is that GtkFileChooserNative cannot be used with the current gedit UI so it’s problematic to sandbox the application with Flatpak.

With gedit, when we open a file from a GtkFileChooserDialog, there is a combobox for the encoding: by default the encoding is auto-detected from a configurable list of encodings, and it is possible to choose manually an encoding from that same list. I want to get rid of that combobox, to always auto-detect the encoding (it’s simpler for the user), and to be able to use GtkFileChooserNative (because custom widgets like the combobox cannot be added to a GtkFileChooserNative).

The problem with the file loader implementation in GtkSourceView is that the encoding auto-detection is not that good, hence the need for the combobox in the GtkFileChooserDialog in gedit. But to detect the encoding, there is now a simple to use library called uchardet, maintained by Jehan Pagès, and based on the Mozilla universal charset detection code. Since the encoding auto-detection is much better with uchardet, it will be possible to remove the combobox and use GtkFileChooserNative!

Jehan started to modify GtkSourceFileLoader (or, more precisely, the internal class GtkSourceBufferOutputStream) to use uchardet, but as a comment in GtkSourceBufferOutputStream explains, that code is a big headache… And the encoding detection is based only on the first 8KB of the file, which results in bugs if for example the first 8KB are only ASCII characters and a strange character appears later. Changing that implementation to take into account the whole content of the file was not easily possible, so instead, I decided to write a new implementation from scratch, in Gtef, called GtefFileLoader. It was done in Gtef and not in GtkSourceView, to not break the GtkSourceView API, and to have the time in Gtef to write the implementation and API incrementally (trying to keep the API as close as possible to the GtkSourceView API).

The new GtefFileLoader takes a simpler approach, doing things sequentially instead of doing everything at the same time (the reason for the headache). 1) Loading the content in memory, 2) determining the encoding, 3) converting the content to UTF-8 and inserting the result into the GtkTextBuffer.

Note that for step 2, determining the encoding, it would have been entirely possible without uchardet, by counting the number of invalid characters and taking the first encoding for which there are no errors (or taking the one with the fewest errors, escaping the invalid characters). And when uchardet is used, that method can serve as a nice fallback. Since all the content is in memory, it should be fast enough even if it is done on the whole content (GtkTextView doesn’t support very big files anyway, 50MB is the default maximum in GtefFileLoader).
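
That counting idea can be sketched as follows, in Python purely for illustration (GtefFileLoader itself is written in C, and the candidate list and file name here are just examples): walk an ordered list of candidate encodings, take the first one that decodes the whole content without errors, and otherwise fall back to the candidate with the fewest invalid sequences.

# Illustration only: pick the encoding that decodes the content with the
# fewest errors, preferring the first candidate that decodes cleanly.
CANDIDATES = ["utf-8", "utf-16", "iso-8859-15"]  # example list, order matters

def count_invalid(data: bytes, encoding: str) -> int:
    """Count replacement characters produced when decoding with `encoding`."""
    return data.decode(encoding, errors="replace").count("\ufffd")

def guess_encoding(data: bytes) -> str:
    best, best_errors = CANDIDATES[0], None
    for encoding in CANDIDATES:
        errors = count_invalid(data, encoding)
        if errors == 0:
            return encoding
        if best_errors is None or errors < best_errors:
            best, best_errors = encoding, errors
    return best

with open("some-file.txt", "rb") as f:  # hypothetical input file
    print(guess_encoding(f.read()))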

GtefFileLoader is usable and works well, but it is still missing quite a few features compared to GtkSourceFileLoader: escaping invalid characters, loading from a GInputStream (e.g. stdin) and gzip uncompression support. And I would like to add more features: refuse to load very long lines (it is not well supported by GtkTextView) and possibly ask to split the line, and detect binary files.

The higher-level API is not yet created, GtefFileLoader is still “just” the backend part.

A Web Browser for Awesome People (Epiphany 3.24)

Are you using a sad web browser that integrates poorly with GNOME or elementary OS? Was your sad browser’s GNOME integration theme broken for most of the past year? Does that make you feel sad? Do you wish you were using an awesome web browser that feels right at home in your chosen desktop instead? If so, Epiphany 3.24 might be right for you. It will make you awesome. (Ask your doctor before switching to a new web browser. Results not guaranteed. May cause severe Internet addiction. Some content unsuitable for minors.)

Epiphany was already awesome before, but it just keeps getting better. Let’s look at some of the most-noticeable new features in Epiphany 3.24.

You Can Load Webpages!

Yeah that’s a great start, right? But seriously: some people had trouble with this before, because it was not at all clear how to get to Epiphany’s address bar. If you were in the know, you knew all you had to do was click on the title box, then the address bar would appear. But if you weren’t in the know, you could be stuck. I made the executive decision that the title box would have to go unless we could find a way to solve the discoverability problem, and wound up following through on removing it. Now the address bar is always there at the top of the screen, just like in all those sad browsers. This is without a doubt our biggest user interface change:

Discover GNOME 3! Discover the address bar!

You Can Set a Homepage!

A very small subset of users have complained that Epiphany did not allow setting a homepage, something we removed several years back since it felt pretty outdated. While I’m confident that not many people want this, there’s not really any good reason not to allow it — it’s not like it’s a huge amount of code to maintain or anything — so you can now set a homepage in the preferences dialog, thanks to some work by Carlos García Campos and myself. Retro! Carlos has even added a home icon to the header bar, which appears when you have a homepage set. I honestly still don’t understand why having a homepage is useful, but I hope this allows a wider audience to enjoy Epiphany.

New Bookmarks Interface

There is now a new star icon in the address bar for bookmarking pages, and another new icon for viewing bookmarks. Iulian Radu gutted our old bookmarks system as part of his Google Summer of Code project last year, replacing our old and seriously-broken bookmarks dialog with something much, much nicer. (He also successfully completed a major refactoring of non-bookmarks code as part of his project. Thanks Iulian!) Take a look:

Manage Tons of Tabs

One of our biggest complaints was that it’s hard to manage a large number of tabs. I spent a few hours throwing together the cheapest-possible solution, and the result is actually pretty decent:

Firefox has an equivalent feature, but Chrome does not. Ours is not perfect, since unfortunately the menu is not scrollable, so it still fails if there is a sufficiently-huge number of tabs. (This is actually surprisingly-difficult to fix while keeping the menu a popover, so I’m considering switching it to a traditional non-popover menu as a workaround. Help welcome.) But it works great up until the point where the popover is too big to fit on your monitor.

Note that the New Tab button has been moved to the right side of the header bar when there is only one tab open, so it has less distance to travel to appear in the tab bar when there are multiple open tabs.

Improved Tracking Protection

I modified our adblocker — which has been enabled by default for years — to subscribe to the EasyPrivacy filters provided by EasyList. You can disable it in preferences if you need to, but I haven’t noticed any problems caused by it, so it’s enabled by default, not just in incognito mode. The goal is to compete with Firefox’s Disconnect feature. How well does it work compared to Disconnect? I have no clue! But EasyPrivacy felt like the natural solution, since we already have an adblocker that supports EasyList filters.

Disclaimer: tracking protection on the Web is probably a losing battle, and you absolutely must use the Tor Browser Bundle if you really need anonymity. (And no, configuring Epiphany to use Tor is not clever, it’s very dumb.) But EasyPrivacy will at least make life harder for trackers.

Insecure Password Form Warning

Recently, Firefox and Chrome have started displaying security warnings  on webpages that contain password forms but do not use HTTPS. Now, we do too:

I had a hard time selecting the text to use for the warning. I wanted to convey the near-certainty that the insecure communication is being intercepted, but I wound up using the word “cybercriminal” when it’s probably more likely that your password is being gobbled up by various  governments. Feel free to suggest changes for 3.26 in the comments.

New Search Engine Manager

Cedric Le Moigne spent a huge amount of time gutting our smart bookmarks code — which allowed adding custom search engines to the address bar dropdown in a convoluted manner that involved creating a bookmark and manually adding %s into its URL — and replacing it with an actual real search engine manager that’s much nicer than trying to add a search engine via bookmarks. Even better, you no longer have to drop down to the command line in order to change the default search engine to something other than DuckDuckGo, Google, or Bing. Yay!

New Icon

Jakub Steiner and Lapo Calamandrei created a great new high-resolution app icon for Epiphany, which makes its debut in 3.24. Take a look.

WebKitGTK+ 2.16

WebKitGTK+ 2.16 improvements are not really an Epiphany 3.24 feature, since users of older versions of Epiphany can and must upgrade to WebKitGTK+ 2.16 as well, but it contains some big improvements that affect Epiphany. (For example, Žan Doberšek landed an important fix for JavaScript garbage collection that has resulted in massive memory reductions in long-running web processes.) But sometimes WebKit improvements are necessary for implementing new Epiphany features. That was true this cycle more than ever. For example:

  • Carlos García added a new ephemeral mode API to WebKitGTK+, and modified Epiphany to use it in order to make incognito mode much more stable and robust, avoiding corner cases where your browsing data could be leaked on disk.
  • Carlos García also added a new website data API to WebKitGTK+, and modified Epiphany to use it in the clear data dialog and cookies dialog. There are no user-visible changes in the cookies dialog, but the clear data dialog now exposes HTTP disk cache, HTML local storage, WebSQL, IndexedDB, and offline web application cache. In particular, local storage and the two databases can be thought of as “supercookies”: methods of storing arbitrary data on your computer for tracking purposes, which persist even when you clear your cookies. Unfortunately it’s still not possible to protect against this tracking, but at least you can view and delete it all now, which is not possible in Chrome or Firefox.
  • Sergio Villar Senin added new API to WebKitGTK+ to improve form detection, and modified Epiphany to use it so that it can now remember passwords on more websites. There’s still room for improvement here, but it’s a big step forward.
  • I added new API to WebKitGTK+ to improve how we handle giving websites permission to display notifications, and hooked it up in Epiphany. This fixes notification requests appearing inappropriately on websites like https://riot.im/app/.

Notice the pattern? When there’s something we need to do in Epiphany that requires changes in WebKit, we make it happen. This is a lot more work, but it’s better for both Epiphany and WebKit in the long run. Read more about WebKitGTK+ 2.16 on Carlos García’s blog.

Future Features

Unfortunately, a couple exciting Epiphany features we were working on did not make the cut for Epiphany 3.24. The first is Firefox Sync support. This was developed by Gabriel Ivașcu during his Google Summer of Code project last year, and it’s working fairly well, but there are still a few problems. First, our current Firefox Sync code is only able to sync bookmarks, but we really want it to sync much more before releasing the feature: history and open tabs at the least. Also, although it uses Mozilla’s sync server (please thank Mozilla for their quite liberal terms of service allowing this!), it’s not actually compatible with Firefox. You can sync your Epiphany bookmarks between different Epiphany browser instances using your Firefox account, which is great, but we expect users will be quite confused that they do not sync with your Firefox bookmarks, which are stored separately. Some things, like preferences, will never be possible to sync with Firefox, but we can surely share bookmarks. Gabriel is currently working to address these issues while participating in the Igalia Coding Experience program, and we’re hopeful that sync support will be ready for prime time in Epiphany 3.26.

Also missing is HTTPS Everywhere support. It’s mostly working properly, thanks to lots of hard work from Daniel Brendle (grindhold) who created the libhttpseverywhere library we use, but it breaks a few websites and is not really robust yet, so we need more time to get this properly integrated into Epiphany. The goal is to make sure outdated HTTPS Everywhere rulesets do not break websites by falling back automatically to use of plain, insecure HTTP when a load fails. This will be much less secure than upstream HTTPS Everywhere, but websites that care about security ought to be redirecting users to HTTPS automatically (and also enabling HSTS). Our use of HTTPS Everywhere will just be to gain a quick layer of protection against passive attackers. Otherwise, we would not be able to enable it by default, since the HTTPS Everywhere rulesets are just not reliable enough. Expect HTTPS Everywhere to land for Epiphany 3.26.

Help Out

Are you a computer programmer? Found something less-than-perfect about Epiphany? We’re open for contributions, and would really appreciate it if you would try to fix that bug or add that feature instead of slinking back to using a less-awesome web browser. One frequently-requested feature is support for extensions. This is probably not going to happen anytime soon — we’d like to support WebExtensions, but that would be a huge effort — but if there’s some extension you miss from a sadder browser, ask if we’d allow building it into Epiphany as a regular feature. Replacements for popular extensions like NoScript and Greasemonkey would certainly be welcome.

Not a computer programmer? You can still help by reporting bugs on GNOME Bugzilla. If you have a crash to report, learn how to generate a good-quality stack trace so that we can try to fix it. I’ve credited many programmers for their work on Epiphany 3.24 up above, but programming work only gets us so far if we don’t know about bugs. I want to give a shout-out here to Hussam Al-Tayeb, who regularly built the latest code over the course of the 3.24 development cycle and found lots of problems for us to fix. This release would be much less awesome if not for his testing.

OK, I’m done typing stuff now. Onwards to 3.26!

March 23, 2017

Applying to Outreachy and GSoC for Fedora and GNOME

The next March 30th is going to be the deadline to apply to the Outreachy program, and April 3rd is the deadline for the GSoC program. Lately in Peru a group of students has been very interested in applying, since they have heard about the programs at FLOSS events such as LinuxPlaya 2017 and HackCamp2016, among other local events.

The organizations participating in these programs publish all the relevant information, but crucial questions still remain in the air. That is what made me write this little post about the programs. So far, these are the Outreachy offers for Fedora and GNOME:

Fedora

Fedora is a Linux-based operating system, which offers versions focused on three possible uses: workstation, server, and cloud.

Internship projects:

  • Improve Bodhi, the web-system that publishes updates for Fedora (looking for applicants as of March 17) Required: Python; Optional: JavaScript, HTML, CSS

GNOME

GNOME is an innovative, design-driven and easy-to-use desktop for GNU/Linux.

Internship projects:

  • Improve GTK+ with pre-compiled GtkBuilder XML files in resource data (looking for applicants as of March 12) Required: C, ability to quickly pick up GTK+

  • Photos: Make it perfect — every detail matters (looking for applicants as of March 12) Required: C, ability to quickly pick up GLib and GTK+

  • Make mapbox-gl-native usable from GTK+

    Required: C, GTK+; optional: C++, interest in maps

  • Unit Testing Integration for GNOME Builder (looking for applicants as of March 7) Required: GTK+, and experience with either C, Python, or Vala

  • Documentation Cards for GNOME Builder

    Required: GTK+, and experience with either C, Python, or Vala

First of all, it is essential to know what the programs are. Both GSoC and Outreachy ask you to complete a free and open-source software coding project online over a period of 3 months, with a stipend of $5500. You are not required to complete the tasks before you apply or to travel abroad for the internship, and neither program is a Google recruiting program. To apply you must fulfill the requirements, plus these five other points that I also consider important:

1.- Be familiar with the project

In the case of GSoC for Fedora, please see the published wiki of ideas; for GSoC with GNOME, a wiki of ideas is also posted with the list of projects; and finally, the Outreachy ideas pages are happy to receive contributors with very different programming skills.

I think at least a year of experience as a user and as a developer is worth mentioning. For example, if you decide to participate in the GNOME Games project, it is important to show that you have looked at or interacted with the code it uses. This can be done by posting about it on a blog or through your GitHub account. Fixing newcomer bugs related to the GNOME Games application is also an important plus to consider, as are the newcomer bugs for Fedora.

2.- Read the requirements and provide proof

Before submitting the proposal, it is important to attach a document proving that you are currently enrolled in a university or institute.
Another important requirement is age: you must be 18 or older, and eligibility to work in the country where you live is also considered. Tax forms will be requested once you are selected.
You can submit more than one proposal, but only one will be accepted. You can also apply to both programs, Outreachy and GSoC, but again only one will be accepted.

3.- Think about your strengths and weaknesses

During the application you are asked to provide evidence of any other projects you have participated in. Maybe it is a coincidence in my case, but all the students I have found interested in Linux and IT are also leaders in their communities or universities, and have participated in other interesting projects. Documented proof of those activities is also part of the process; if you do not have videos or posts, ask an authority (e.g. the dean of your faculty) for a letter that vouches for your commitment to the community.

4.- Be in contact with your mentor

When you are about to finish the proposal, you are asked to add a calendar of tasks and deadlines. It is better to set up this schedule of duties with your mentor's approval. I suggest writing an email to introduce yourself, with an attached tentative calendar for achieving what is requested in the project's ideas wiki.

Each project has a published list of mentors; the GSoC GNOME wiki shows them in parentheses next to the name of the project, as does the GSoC wiki for Fedora. The Outreachy mentor list for GNOME and the mentor list for Fedora are also available online.

5.- Be responsible and organize your schedule

Be sure that you will accomplish the tasks you have planned in time.

Some students enroll in more than six courses at university, which demands more time than a regular course load; others have a part-time job while they are studying. Those factors must be taken into account before applying. Succeeding in the GSoC program while passing your university courses with great grades is an effort that will open many doors, locally and overseas, in both the academic and professional fields.

  • You can find examples of previous years' proposals on the Web for the Outreachy program, GNOME GSoC and Fedora. If you have further questions, please review the official website FAQs, and if you think something is missing here, you are more than welcome to comment with additional tips.

Best wishes for students in Peru and around the world! 🙂


Filed under: FEDORA, GNOME Tagged: 2017, apply GSoC, fedora, FLOSS programs, GNOME, Google Summer of Code, GSoC, GSoC Fedora, GSoC GNOME, Julita Inca, Julita Inca Chiroque, Outreachy, Perú

Celebrating Release Day

Last March, the Toronto area GNOME 3.20 release party happened to fall on release day. This release day saw Nancy and me at the hospital for the birth of our second grandson, Gord and Maggie’s second boy. Name suggestions honouring the occasion (GNOME with any manner of capitalization, Portland, or Three Two Four) were politely rejected in favour of the original plan: Finnegan Walter “Finn” Hill. Welcome to Finn and the new GNOME.

GTK hackfest 2017: D-Bus communication with containers

At the GTK hackfest in London (which accidentally became mostly a Flatpak hackfest) I've mainly been looking into how to make D-Bus work better for app container technologies like Flatpak and Snap.

The initial motivating use cases are:

  • Portals: Portal authors need to be able to identify whether the container is being contacted by an uncontained process (running with the user's full privileges), or whether it is being contacted by a contained process (in a container created by Flatpak or Snap).

  • dconf: Currently, a contained app either has full read/write access to dconf, or no access. It should have read/write access to its own subtree of dconf configuration space, and no access to the rest.

At the moment, Flatpak runs a D-Bus proxy for each app instance that has access to D-Bus, connects to the appropriate bus on the app's behalf, and passes messages through. That proxy is in a container similar to the actual app instance, but not actually the same container; it is trusted to not pass messages through that it shouldn't pass through. The app-identification mechanism works in practice, but is Flatpak-specific, and has a known race condition due to process ID reuse and limitations in the metadata that the Linux kernel maintains for AF_UNIX sockets. In practice the use of X11 rather than Wayland in current systems is a much larger loophole in the container than this race condition, but we want to do better in future.

Meanwhile, Snap does its sandboxing with AppArmor, on kernels where it is enabled both at compile-time (Ubuntu, openSUSE, Debian, Debian derivatives like Tails) and at runtime (Ubuntu, openSUSE and Tails, but not Debian by default). Ubuntu's kernel has extra AppArmor features that haven't yet gone upstream, some of which provide reliable app identification via LSM labels, which dbus-daemon can learn by querying its AF_UNIX socket. However, other kernels like the ones in openSUSE and Debian don't have those. The access-control (AppArmor mediation) is implemented in upstream dbus-daemon, but again doesn't work portably, and is not sufficiently fine-grained or flexible to do some of the things we'll likely want to do, particularly in dconf.

After a lot of discussion with dconf maintainer Allison Lortie and Flatpak maintainer Alexander Larsson, I think I have a plan for fixing this.

This is all subject to change: see fd.o #100344 for the latest ideas.

Identity model

Each user (uid) has some uncontained processes, plus 0 or more containers.

The uncontained processes include dbus-daemon itself, desktop environment components such as gnome-session and gnome-shell, the container managers like Flatpak and Snap, and so on. They have the user's full privileges, and in particular they are allowed to do privileged things on the user's session bus (like running dbus-monitor), and act with the user's full privileges on the system bus. In generic information security jargon, they are the trusted computing base; in AppArmor jargon, they are unconfined.

The containers are Flatpak apps, or Snap apps, or other app-container technologies like Firejail and AppImage (if they adopt this mechanism, which I hope they will), or even a mixture (different app-container technologies can coexist on a single system). They are containers (or container instances) and not "apps", because in principle, you could install com.example.MyApp 1.0, run it, and while it's still running, upgrade to com.example.MyApp 2.0 and run that; you'd have two containers for the same app, perhaps with different permissions.

Each container has a container type, which is a reversed DNS name like org.flatpak or io.snapcraft representing the container technology, and an app identifier, an arbitrary non-empty string whose meaning is defined by the container technology. For Flatpak, that string would be another reversed DNS name like com.example.MyGreatApp; for Snap, as far as I can tell it would look like example-my-great-app.

The container technology can also put arbitrary metadata on the D-Bus representation of a container, again defined and namespaced by the container technology. For instance, Flatpak would use some serialization of the same fields that go in the Flatpak metadata file at the moment.

Finally, the container has an opaque container identifier identifying a particular container instance. For example, launching com.example.MyApp twice (maybe different versions or with different command-line options to flatpak run) might result in two containers with different privileges, so they need to have different container identifiers.
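
To make the identity model above a little more concrete, here is a purely illustrative sketch (not actual dbus-daemon code; the struct and field names are invented for this post) of the pieces of identity a container instance would carry:

/* Illustrative only: names invented for this sketch. */
typedef struct {
  char *container_type;    /* reversed DNS name of the technology, e.g. "org.flatpak" */
  char *app_identifier;    /* technology-defined string, e.g. "com.example.MyGreatApp" */
  char *instance_id;       /* opaque identifier for this particular container instance */
  /* plus arbitrary metadata, defined and namespaced by the container technology */
} ContainerInstance;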

Contained server sockets

App-container managers like Flatpak and Snap would create an AF_UNIX socket inside the container, bind() it to an address that will be made available to the contained processes, and listen(), but not accept() any new connections. Instead, they would fd-pass the new socket to the dbus-daemon by calling a new method, and the dbus-daemon would proceed to accept() connections after the app-container manager has signalled that it has called both bind() and listen(). (See fd.o #100344 for full details.)
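
As a rough sketch of the container manager's side of this, and under the assumption that the new dbus-daemon method is still being designed (so only the socket setup is shown here), the code might look something like the following:

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Minimal sketch: create the contained server socket, bind() and listen(),
 * but never accept() — accepting connections is left to the dbus-daemon
 * once the listening fd has been passed to it via the proposed new method
 * (see fd.o #100344; the method is deliberately not named here because it
 * is not final). */
static int
create_contained_socket (const char *path)
{
  struct sockaddr_un addr = { .sun_family = AF_UNIX };
  int fd = socket (AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0);

  if (fd < 0)
    return -1;

  strncpy (addr.sun_path, path, sizeof addr.sun_path - 1);

  if (bind (fd, (struct sockaddr *) &addr, sizeof addr) < 0 ||
      listen (fd, SOMAXCONN) < 0)
    {
      close (fd);
      return -1;
    }

  return fd;   /* fd-pass this to the dbus-daemon, along with the identity */
}

The path used here would be whatever the container manager makes visible inside the container, for instance by bind-mounting it into the container's XDG_RUNTIME_DIR as described in the next paragraph.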

Processes inside the container must not be allowed to contact the AF_UNIX socket used by the wider, uncontained system - if they could, the dbus-daemon wouldn't be able to distinguish between them and uncontained processes and we'd be back where we started. Instead, they should have the new socket bind-mounted into their container's XDG_RUNTIME_DIR and connect to that, or have the new socket set as their DBUS_SESSION_BUS_ADDRESS and be prevented from connecting to the uncontained socket in some other way. Those familiar with the kdbus proposals a while ago might recognise this as being quite similar to kdbus' concept of endpoints, and I'm considering reusing that name.

Along with the socket, the container manager would pass in the container's identity and metadata, and the method would return a unique, opaque identifier for this particular container instance. The basic fields (container technology, technology-specific app ID, container ID) should probably be added to the result of GetConnectionCredentials(), and there should be a new API call to get all of those plus the arbitrary technology-specific metadata.

When a process from a container connects to the contained server socket, every message that it sends should also have the container instance ID in a new header field. This is OK even though dbus-daemon does not (in general) forbid sender-specified future header fields, because any dbus-daemon that supported this new feature would guarantee to set that header field correctly, the existing Flatpak D-Bus proxy already filters out unknown header fields, and adding this header field is only ever a reduction in privilege.

The reasoning for using the sender's container instance ID (as opposed to the sender's unique name) is for services like dconf to be able to treat multiple unique bus names as belonging to the same equivalence class of contained processes: instead of having to look up the container metadata once per unique name, dconf can look it up once per container instance the first time it sees a new identifier in a header field. For the second and subsequent unique names in the container, dconf can know that the container metadata and permissions are identical to the one it already saw.
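
A hypothetical sketch of that caching, using GLib (the hash table and its contents are my own illustration, not dconf code; an empty a{sv} is cached as a stand-in for whatever the proposed metadata query would return):

#include <glib.h>

/* container instance ID (from the new header field) → cached metadata */
static GHashTable *container_cache = NULL;

static GVariant *
lookup_container_metadata (const gchar *instance_id)
{
  GVariant *metadata;

  if (container_cache == NULL)
    container_cache = g_hash_table_new_full (g_str_hash, g_str_equal,
                                             g_free,
                                             (GDestroyNotify) g_variant_unref);

  metadata = g_hash_table_lookup (container_cache, instance_id);

  if (metadata == NULL)
    {
      /* First time this instance ID is seen: this is where the service
       * would query the dbus-daemon once for the container's metadata.
       * An empty a{sv} is cached here as a placeholder. */
      metadata = g_variant_ref_sink (g_variant_new ("a{sv}", NULL));
      g_hash_table_insert (container_cache, g_strdup (instance_id), metadata);
    }

  return metadata;
}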

Access control

In principle, we could have the new identification feature without adding any new access control, by keeping Flatpak's proxies. However, in the short term that would mean we'd be adding new API to set up a socket for a container without any access control, and having to keep the proxies anyway, which doesn't seem great; in the longer term, I think we'd find ourselves adding a second new API to set up a socket for a container with new access control. So we might as well bite the bullet and go for the version with access control immediately.

In principle, we could also avoid the need for new access control by ensuring that each service that will serve contained clients does its own. However, that makes it really hard to send broadcasts and not have them unintentionally leak information to contained clients - we would need to do something more like kdbus' approach to multicast, where services know who has subscribed to their multicast signals, and that is just not how dbus-daemon works at the moment. If we're going to have access control for broadcasts, it might as well also cover unicast.

The plan is that messages from containers to the outside world will be mediated by a new access control mechanism, in parallel with dbus-daemon's current support for firewall-style rules in the XML bus configuration, AppArmor mediation, and SELinux mediation. A message would only be allowed through if the XML configuration, the new container access control mechanism, and the LSM (if any) all agree it should be allowed.

By default, processes in a container can send broadcast signals, and send method calls and unicast signals to other processes in the same container. They can also receive method calls from outside the container (so that interfaces like org.freedesktop.Application can work), and send exactly one reply to each of those method calls. They cannot own bus names, communicate with other containers, or send file descriptors (which reduces the scope for denial of service).

Obviously, that's not going to be enough for a lot of contained apps, so we need a way to add more access. I'm intending this to be purely additive (start by denying everything except what is always allowed, then add new rules), not a mixture of adding and removing access like the current XML policy language.

There are two ways we've identified for rules to be added:

  • The container manager can pass a list of rules into the dbus-daemon at the time it attaches the contained server socket, and they'll be allowed. The obvious example is that an org.freedesktop.Application needs to be allowed to own its own bus name. Flatpak apps' implicit permission to talk to portals, and Flatpak metadata like org.gnome.SessionManager=talk, could also be added this way.

  • System or session services that are specifically designed to be used by untrusted clients, like the version of dconf that Allison is working on, could opt-in to having contained apps allowed to talk to them (effectively making them a generalization of Flatpak portals). The simplest such request, for something like a portal, is "allow connections from any container to contact this service"; but for dconf, we want to go a bit finer-grained, with all containers allowed to contact a single well-known rendezvous object path, and each container allowed to contact an additional object path subtree that is allocated by dconf on-demand for that app.

Initially, many contained apps would work in the first way (and in particular sockets=session-bus would add a rule that allows almost everything), while over time we'll probably want to head towards recommending more use of the second.

Related topics

Access control on the system bus

We talked about the possibility of using a very similar ruleset to control access to the system bus, as an alternative to the XML rules found in /etc/dbus-1/system.d and /usr/share/dbus-1/system.d. We didn't really come to a conclusion here.

Allison had the useful insight that the XML rules are acting like a firewall: they're something that is placed in front of potentially-broken services, and not part of the services themselves (which, as with firewalls like ufw, makes it seem rather odd when the services themselves install rules). D-Bus system services already have total control over what requests they will accept from D-Bus peers, and if they rely on the XML rules to mediate that access, they're essentially rejecting that responsibility and hoping the dbus-daemon will protect them. The D-Bus maintainers would much prefer it if system services took responsibility for their own access control (with or without using polkit), because fundamentally the system service is always going to understand its domain and its intended security model better than the dbus-daemon can.

Analogously, when a network service listens on all addresses and accepts requests from elsewhere on the LAN, we sometimes work around that by protecting it with a firewall, but the optimal resolution is to get that network service fixed to do proper authentication and access control instead.

For system services, we continue to recommend essentially this "firewall" configuration, filling in the ${} variables as appropriate:

<busconfig>
    <policy user="${the daemon uid under which the service runs}">
        <allow own="${the service's bus name}"/>
    </policy>
    <policy context="default">
        <allow send_destination="${the service's bus name}"/>
    </policy>
</busconfig>

We discussed the possibility of moving towards a model where the daemon uid to be allowed is written in the .service file, together with an opt-in to "modern D-Bus access control" that makes the "firewall" unnecessary; after some flag day when all significant system services follow that pattern, dbus-daemon would even have the option of no longer applying the "firewall" (moving to an allow-by-default model) and just refusing to activate system services that have not opted in to being safe to use without it. However, the "firewall" also protects system bus clients, and services like Avahi that are not bus-activatable, against unintended access, which is harder to solve via that approach; so this is going to take more thought.

For system services' clients that follow the "agent" pattern (BlueZ, polkit, NetworkManager, Geoclue), the correct "firewall" configuration is more complicated. At some point I'll try to write up a best-practice for these.

New header fields for the system bus

At the moment, it's harder than it needs to be to provide non-trivial access control on the system bus, because on receiving a method call, a service has to remember what was in the method call, then call GetConnectionCredentials() to find out who sent it, then only process the actual request when it has the information necessary to do access control.
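
For reference, this is roughly what that looks like today with GDBus, fetching just the sender's uid from the existing GetConnectionCredentials() call (a sketch, with error handling kept minimal):

#include <gio/gio.h>

static guint32
get_sender_uid (GDBusConnection *connection,
                const gchar     *sender,
                GError         **error)
{
  GVariant *reply, *credentials;
  guint32 uid = G_MAXUINT32;   /* returned unchanged on failure */

  reply = g_dbus_connection_call_sync (connection,
                                       "org.freedesktop.DBus",
                                       "/org/freedesktop/DBus",
                                       "org.freedesktop.DBus",
                                       "GetConnectionCredentials",
                                       g_variant_new ("(s)", sender),
                                       G_VARIANT_TYPE ("(a{sv})"),
                                       G_DBUS_CALL_FLAGS_NONE,
                                       -1, NULL, error);
  if (reply == NULL)
    return uid;

  credentials = g_variant_get_child_value (reply, 0);
  /* "UnixUserID" is one of the documented keys in the reply */
  g_variant_lookup (credentials, "UnixUserID", "u", &uid);

  g_variant_unref (credentials);
  g_variant_unref (reply);

  return uid;
}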

Allison and I had hoped to resolve this by adding new D-Bus message header fields with the user ID, the LSM label, and other interesting facts for access control. These could be "opt-in" to avoid increasing message sizes for no reason: in particular, it is not typically useful for session services to receive the user ID, because only one user ID is allowed to connect to the session bus anyway.

Unfortunately, the dbus-daemon currently lets unknown fields through without modification. With hindsight this seems an unwise design choice, because header fields are a finite resource (there are 255 possible header fields) and are defined by the D-Bus Specification. The only field that can currently be trusted is the sender's unique name, because the dbus-daemon sets that field, overwriting the value in the original message (if any).

To make it safe to rely on the new fields, we would have to make the dbus-daemon filter out all unknown header fields, and introduce a mechanism for the service to check (during connection to the bus) whether the dbus-daemon is sufficiently new that it does so. If connected to an older dbus-daemon, the service would not be able to rely on the new fields being true, so it would have to ignore the new fields and treat them as unset. The specification is sufficiently vague that making new dbus-daemons filter out unknown header fields is a valid change (it just says that "Header fields with an unknown or unexpected field code must be ignored", without specifying who must ignore them, so having the dbus-daemon delete those fields seems spec-compliant).

This all seemed fine when we discussed it in person; but GDBus already has accessors for arbitrary header fields by numeric ID, and I'm concerned that this might mean it's too easy for a system service to be accidentally insecure: It would be natural (but wrong!) for an implementor to assume that if g_dbus_message_get_header (message, G_DBUS_MESSAGE_HEADER_FIELD_SENDER_UID) returned non-NULL, then that was guaranteed to be the correct, valid sender uid. As a result, fd.o #100317 might have to be abandoned. I think more thought is needed on that one.

Unrelated topics

As happens at any good meeting, we took the opportunity of high-bandwidth discussion to cover many useful things and several useless ones. Other discussions that I got into during the hackfest included, in no particular order:

  • .desktop file categories and how to adapt them for AppStream, perhaps involving using the .desktop vocabulary but relaxing some of the hierarchy restrictions so they behave more like "tags"
  • how to build a recommended/reference "app store" around Flatpak, aiming to host upstream-supported builds of major projects like LibreOffice
  • how Endless do their content-presenting and content-consuming apps in GTK, with a lot of "tile"-based UIs with automatic resizing and reflowing (similar to responsive design), and the applicability of similar widgets to GNOME and upstream GTK
  • whether and how to switch GNOME developer documentation to Hotdoc
  • whether pies, fish and chips or scotch eggs were the most British lunch available from Borough Market
  • the distinction between stout, mild and porter

More notes are available from the GNOME wiki.

Acknowledgements

The GTK hackfest was organised by GNOME and hosted by Red Hat and Endless. My attendance was sponsored by Collabora. Thanks to all the sponsors and organisers, and the developers and organisations who attended.

GNOME ED Update – Week 12

New release!

In case you haven’t seen it yet, there’s a new GNOME release – 3.24! The release is the result of 6 months’ work by the GNOME community.

The new release is a major step forward for us, with new features and improvements, and some exciting developments in how we build applications. You can read more about it in the announcement and release notes.

As always, this release was made possible partially thanks to the Friends of GNOME project. In particular, it helped us provide a Core apps hackfest in Berlin last November, which had a direct impact on this release.

Conferences

GTK+ hackfest

I’ve just come back from the GTK+ hackfest in London – thanks to Red Hat and Endless for sponsoring the venues! It was great to meet a load of people who are involved with GNOME and GTK, and some great discussions were had about Flatpak and the creation of a “FlatHub” – somewhere that people can get all their latest Flatpaks from.

LibrePlanet

As I’m writing this, I’m sitting on a train to Heathrow for my flight to LibrePlanet 2017! If you’re going to be there, come and say hi. I’ve also had a load of new stickers produced, so these can brighten up your laptop.

March 22, 2017

Another media codec on the way!

One of the things we are currently working hard on is ensuring you have the codecs you need available in Fedora Workstation. Our main avenue for doing this is looking at the various codecs out there and trying to determine if the intellectual property situation allows us to start shipping all or parts of the technologies involved. This is how we were able to start shipping mp3 playback support in Fedora Workstation 25. Of course, in cases where that is obviously not possible, we have things like the agreement with our friends at Cisco allowing us to offer H264 support using their licensed codec, which is how OpenH264 started being available in Fedora Workstation 24.

As you might imagine clearing a codec for shipping is a slow and labour intensive process with lawyers and engineers spending a lot of time reviewing stuff to figure out what can be shipped when and how. I am hoping to have more announcements like this coming out during the course of the year.

So I am very happy to announce today that we are now working on packaging the codec known as AC3 (also known as A52) for Fedora Workstation 26. The name AC3 might not be very well known to you, but AC3 is part of a set of technologies developed by Dolby and marketed as Dolby Surround. This means that if you have video files with surround sound audio, it is most likely something we can play back with an AC3 decoder. AC3/A52 is also used for surround sound TV broadcasts in the US, and it is the audio format used by some Sony and Panasonic video cameras.

We will be offering AC3 playback in Fedora Workstation 26, and we are looking into options for offering an encoder. To be clear, there is nothing stopping us from offering an encoder apart from finding an implementation that can be packaged and shipped with Fedora with a reasonable amount of effort. The most well-known open source implementation we know about is the one found in ffmpeg/libav, but extracting a single codec to ship from ffmpeg or libav is a lot of work and not something we currently have the resources to do. We found another implementation called aften, but that seems to have been unmaintained for years; still, we will look at it to see if it could be used.
But if you are interested in AC3 encoding support, we would love it if someone started working on a standalone AC3 encoder we could ship, be that by picking up maintainership of Aften, splitting out AC3 encoding from libav or ffmpeg, or writing something new.

If you want to learn more about AC3 the best place to look is probably the Wikipedia page for Dolby Digital or the a52 ATSC audio standard document for more of a technical deep dive.

your business in munich podcast

a few days ago i was invited to be a guest on the your business in munich podcast. it was a rather freewheeling but very enjoyable conversation on topics like my consultancy, interaction with humans & computers, open source and ux. i was especially impressed with the detailed preparation and unerring questions by my host, ryan l. sink.

feel free to listen to our conversation below, on the your business in munich website or read the transcript below.


Welcome to Your Business In Munich. The podcast where entrepreneurs throughout the Munich area share their stories. And now your host, an experienced coach and entrepreneur from the United States, Ryan L. Sink.

Ryan L. Sink: Welcome to Your Business In Munich. Today I'm here with Daniel Siegel, he is a digital product architect in Munich, Germany, but originally from Merano, Italy, which I have learned is a small town in northern Italy. Could you tell us a little more about where you're from? Give us a picture of what it's like to be in Merano, Italy.

Daniel G. Siegel: Merano's actually a really small city close to the northern border of Italy, to Austria and Switzerland. It's located in between the Alps, so in winter we have a lot of snow, in summer we have a lot of sunshine. It has kind of a Mediterranean feeling to it. We have some palm trees in there as well. It's quite funny, especially maybe in April, May where you can see the palm trees there and you still have the mountain with some snow on top.

Ryan: Palm trees in the Alps.

Daniel: Yeah, probably one of the only places you where you can find that setting.

Ryan: I can't even picture that.

Daniel: It's kind of a melange between Italian, a bit Swiss, Austrian and German stuff. If you go back into the history, you can see these people meeting there. It was kind of always a border region, shall we say.

Ryan: Did you grow up with these other cultures as well or languages?

Daniel: Yeah. You normally grow up speaking Italian and German. German is more like a dialect so it's not the proper high German but you learn that as well in school. You normally learn the three languages German, Italian and English of course. There are a few smaller valleys where you have some additional languages, kind of like mixtures between Italian, Latin and German or something like that.

Ryan: Is this the Romance?

Daniel: Exactly.

Ryan: Okay. Do you speak this as well?

Daniel: No, not at all. I'm from a different valley. [laughs]

Ryan: So it depends on which valley you're from.

Daniel: Exactly.

Ryan: Which valley you're in between.

Daniel: It's actually quite hard to understand people because every valley has a different dialect. Of course if you go up there, you can tell who's from where but if you're an outsider you probably have a hard time to learn each individual dialect.

Ryan: I bet. You can tell by the type of palm trees they have growing. [laughs]

Daniel: No, those are only in Merano. [laughs]

Ryan: Growing up in Merano, how did you first get interested in digital products?

Daniel: Well I think I was always fascinated by the computer. I can't remember exactly but we had computers back home quite early. My father was building up his own business with computers. He did some consulting, bookkeeping with computers back in the 70s, 80s, which were the first real computers you could bring home and work on them. Well, sometimes he brought his computers back home and we could actually play the early games. I remember friends coming over and we were playing a game together in front of the computer, and that kind of thing. I think there's where my fascination comes from.

Ryan: Even as a kid do you think you were already looking at the usability of it and then thinking of these things?

Daniel: I think I always broke stuff. [laughs] That's where I'm coming from because I was kind of fascinated to see what's behind the scene and how is it working. I tried to get a glimpse behind the scenes. Of course, breaking all the stuff when I was young.

Ryan: I'm sure your dad was happy with...

Daniel: Yeah he was happy all the time. [laughs] But I think that lead me to programming at some point. Actually tried to get into deep stuff and into open source as well. I started to create my first own programs and it slowly moved with that, actually.

Ryan: What kinds of programs have you created yourself?

Daniel: Actually a few. I think the first open source program I put out there was a small driver for a modem which was working with USB I think and there was no driver for Linux but actually I had to get on the internet. It was a small thing and we had a really slow connection. Nevertheless I was working on that thing and actually created the first driver for that one.

Ryan: Cool.

Daniel: I put it out there and lots of people were sending me emails and so on. That was actually the first thing I did. Then later on I think a more prominent project was a program for the GNOME Desktop which is one of the most used desktop interfaces for Linux. If you ever heard of Ubuntu for example, they actually used GNOME and a lot of GNOME infrastructure. There I created a small webcam tool similar to PhotoBooth on Mac where you have some filters and you can apply them to your face, and that kind of thing. That was actually a lot of fun.

Ryan: It sounds like it. Obviously there's not a lot of money in open source if you're giving it all away.

Daniel: Actually I was paid by Google at some point for that.

Ryan: Yeah? Great.

Daniel: That was part of the Summer of Code program back in the days. I think I got a whopping $5000 dollars for like three months working in the summer, which of course, being a student, was just like, I've got my whole year settled just by that.

Ryan: Just the opportunity alone, doing it for free or paying them to do it.

Daniel: I also was invited to lots of conferences and so on, it was a win-win experience for me because I learned a lot of stuff, like how to work remotely with people. I learned a lot working with designers. I learned a lot how to create open source programs and how to lead them and how to do marketing for your open source projects as well. I think that was one of my biggest experiences I could actually use when doing actual work later on.

Ryan: When did you start working for yourself?

Daniel: That was actually two years ago. I always did some smaller projects next to what I was doing at that time or point. But for two years I have a small boutique consultancy here in Munich, where I help my clients create effective websites that are able to tell their story perfectly and convert visitors to customers. It's actually a bit more than websites because I always describe myself as bringing sales strategies and sales processes to the digital world. That's also where the term digital product architect comes from because a website is a product as well, as a newsletter, as well as your actual product or service you‘re selling. Especially in these days, you have to bridge the gap between the online-offline world and you have to think about how these things work together and how you can actually reach out to your customers and talk to them and stay in contact and so on.

Ryan: What were you doing before you started doing this with the boutique consultancy?

Daniel: I think my first real job, you can say, ignoring all the stuff I did as a student was creating my own startup back in 2007, which is now the world's biggest platform for young fashion designers.

Ryan: Cool.

Daniel: It's called Not Just A Label and I co-founded it with my brother and two friends of ours back in 2007. I was working there as a CTO and building that from the ground up.

Ryan: What's going on with that platform at the moment?

Daniel: It's still the biggest one of the world. [laughs] I did a small exit in 2012-2013. Since then I'm consulting them, we have semi-regular calls and I help them with some issues they're struggling with and so on.

Ryan: Nice, and still with the website as well?

Daniel: Yeah, sometimes. Of course they need a lot of strategy help as well, especially because I built it from the ground up, I know a lot of things which went wrong which the current people don't have an idea about so I can usually help them out with some things. They actually expanded to L.A. last year.

Ryan: It's still based in Italy?

Daniel: No, actually it was based in London, we founded it in London.

Ryan: In London. Okay.

Daniel: Lots of traveling back and forth at that time. Now they have an office in London and L.A. as well.

Ryan: What made you decide to exit because growing and..?

Daniel: Well, there were a few things in there. I'm really interested in the interaction between humans and computers. That's my main vision and my main goal. The Fashion world was just something I saw and I could help the people, but it was not my main interest. I was kind of bored at some point because we scaled up the platform quite big – we had like, 20,000 designers and a lot more daily and weekly visitors. It was a struggle to scale it up but we had a really well laid out structure and strategy. So basically by doing that, I unemployed myself. [laughs] I didn't have much stuff to do anymore at that point.

Ryan: So, it's your fault. [laughs]

Daniel: Yeah, probably. Then actually the same time a friend of mine came up to me and told me – he was working for Accenture at that time – that they are building up a small team in Germany. They already had a team world-wide, called Emerging Technology Innovation and they wanted to build up the same team in Germany as well. The team focused on bringing emerging technologies to their clients. Basically each member of that team had to have a few technologies and they were playing around with them, doing small prototypes, light-house projects and so on for the clients. And then we could actually see if something is going to get big or not, and if it would get big, then we integrate it into Accenture and build it up inside there. There I did a lot of HTML5 stuff, JavaScript stuff, and of course also that interaction between humans and websites, humans and web applications, basically a lot of user experience and so on.

Ryan: So, this job is what brought you to Munich?

Daniel: No, actually I was – the first time I came to Munich was in 2006 for studying. I studied computer science and psychology. You see, the same things are repeating themselves in my life. [laughs] Of course you can‘t see it at that point, but looking back, you can always see that computers and humans – that these are two topics which are repeating themselves in my life. And then of course I scaled my studies down while working on my startup, and then I did some exams, and back to the startup. So it was a hassle back and fourth until at some point I was able to finish my studies as well.

Ryan: And with connecting these two worlds, you know the psychology, the human side, and the computer technology side, what do you do to connect these two worlds?

Daniel: I often tell people I do websites, but that's not the entire truth, because of course the end result is most often a website and a let's say marketing funnel, be it a newsletter, email course, or something like that. But in the end, that's not what my clients actually need. My clients need a way to be able to talk to new clients, to new leads, to be in contact with them, and to actually help them to solve their own problems. And that's what I'm enabling them to do. I see my skills as a tool set, and I pick an individual tool to solve exactly that problem. You know, there's a saying that, "You don't need a drilling machine, you need like a picture on the wall." And that's what I actually do with my clients.

I see this trend where people just throw technology at the wall and see what sticks, you know you need like a website, and you need to do SEO, then you need to Facebook Ads, LinkedIn Ads, and social media ads, and then you need to buy traffic from there, and then you have to be active on Twitter, and then you have to have a newsletter, and then you have to talk, and then you have to have a YouTube Channel, and then you have to... and so on, and so on. But by doing everything you're actually doing nothing. They struggle, and then they come to me and tell me, "Uh it's so hard to find clients online." And I'm always telling them, "Yeah okay, if you’re doing everything, how do you wanna reach your clients? How do you want to have a meaningful conversation with a person?" You're not able to.

So, niche down, focus on one or two ways to communicate with your clients, and do it really, really well. People are really grateful for that, like if you actually take the time to talk with them, that's a really good thing. I mean we were talking before about calling people on their birthday, and how surprised some people act just by showing them that you take time yourself to talk to them. And that's something you can actually integrate in your business, you know. For me automation for example is a really important thing. I see the website and everything you do online as a digital version of yourself, which works while you are sleeping. So, for example, you're in a meeting with clients, you go back home, go to bed, the client might still think about the meeting, goes on your website, looks at stuff you do, stuff you did, and then makes a decision on whether to go ahead or not.

In that sense, the website has to really talk to your client as if it would be you, right? And that's something which is actually missing. And as far as automation I don't see it as a thing which makes you obsolete, it's more the other way round. It should should enable yourself to scale up more efficiently, but still have a meaningful conversation with each of your contacts. By using all the range of tools today for example – let's talk about a newsletter. You can actually get really, really specific about who you are talking to.

For example if you go to a conference, and you come back with lets say 10 business cards, just take the time to write some notes on them about what you did talk about, what was she interested in, and so on. Put it in there, and then with that data, you can actually craft meaningful conversations. And even if you send it out to all 10 people, but just by adding like one or two lines on like, "Hey it was great to meet you. I especially liked the discussion about online marketing“, or whatever you were talking about – that alone is enough to get some gratefulness out of a person.

Ryan: To show that you were listening there right?

Daniel: Yeah, exactly. And that's something which is missing.

Ryan: Yeah, this human touch, this personal touch on everything you do. Not just the standard, automated emails. I mean, even if they're clever, I've seen good examples. You've probably seen the story of like Derek Sivers first company with the customer's story that he told when they were getting everything ready, sending them their CVs. Even that, it's clever, but it's not personal, right. So you just need these little things that connect you to that other person. Could you tell us about the business card example as well? I thought that was great what you were saying about personalized – you don't have to share if you don't want to.

Daniel: No, I gladfully will, because my old business card had a really cool design on the back side. I had my email address on there, and actually in my email address, you can find my website, and my Twitter handle. And so I made a graphic layout to actually show which part is which part, and so on. It was really looking great, and people were telling me like, "It looks awesome, and so on." But if I look at results, like nobody was following me on Twitter, just because of my card. Nobody was visiting my website, just because of my card. No one was writing me an email, just because of my card. Then again I was thinking like, that's a thing I do with my clients, like bridging the gap between offline and online, so what could I do to create that interaction from a meeting to maybe talk to them about their problems or something like that.

What I added to my business cards was a custom URL for each business card, and if I give you one, you can visit that URL. I obviously will tell you that I prepared a small gift for you, and you can have a look there. And that's a custom crafted landing page, which actually greets you, and tells you like, "Hey it was nice to meet you", tells you some other stuff, what I'm doing, where you can find my stuff and so on. But it also gives you like a small gift, which is individualized to you, to that person.

Ryan: So do you also have to tell them at the same time, "Hey please wait three days, so that I can get the page ready before you use this URL?" [laughs]

Daniel: Well, you can start with the easiest example. So, for example, just take your URL/hello. You don't have to publicize that page, but just craft it to like, "Hey it was nice to meet you." And maybe say something like, "Not many people take action, but you are like one of the few, so this is why I want to give you something in return." And then announce it while talking to that person. That's already enough. Of course you can get much more fancy with that, but just in doing like that little thing. Just that 1% more than other people do, which will propel you in front of let's say 100% of other people, because nobody's doing it. If you go to an event, people just like throw out their business cards, and kind of like playing "I have to throw away 100 business cards today, let's see who's faster" or something like that. [laughs]

Ryan: Really? Yeah, it would almost be the same for some people to just put them all in the trash can away in the door that day.

Daniel: I mean of course everybody's busy today, so if you go home, you have like 10 business cards, what should I do with them? Like okay, I'll probably add them to LinkedIn, or XING, or whatever, and that's it. Then two days later you forget about them, and so on. But what if you could have a process starting there. Let's say, we meet at a meetup, and you tell me, "Yeah I have problems with my website. I don't get enough customers." And I tell you, "Look, I wrote an email course for that, or an ebook, or something like that, exactly about your problem. If you want, just give me your business card, I'll add you to it, and I will send it to you automatically." You probably would say "Yes", right?

Ryan: Yeah, sure. [laughs]

Daniel: Exactly. So, it's easy as that right. Just give your people something. Something that can help them.

Ryan: Yeah, doing something extra, or even creative. Like I have to say again, I love your contact page. [laughs] I think it's the only contact page on any website ever, where I actually laughed out loud while I was reading it, because you have all these great email addresses, if you want to buy you a beer, if you want to I think donate money, or give you April Fools tips, or know your shoe size, all these great things. But another question to that, have people actually used these? Besides just maybe your close friends screwing around with you, have people actually used these, and written you April Fools jokes ideas, or asked for your shoe size?

Daniel: Actually, a few people did.

Ryan: Yeah? [laughs]

Daniel: Yeah, so I think I had a movie request email address at some point. Some people were writing there, beer and coffee is used often, especially by people who – you know those people where you set up a date, and it fails because of some reason and they reconnect a few months later on? Those are the people who will use the beer or coffee email address. [laughs]

Ryan: As a way to say, "Sorry for the first meeting. Not working out, they get you a beer."

Daniel: Actually I had a few clients who contacted me with the "Give me money" address. I thought this was really funny. [laughs]

Ryan: That's really great. Yeah, you should hook up your invoicing systems...

Daniel: Oh yeah definitely.

Ryan: Money@... [laughs]

Daniel: Well you see, it's these small things you can add, to make interactions more... everything's so boring, kind of strict in some way, and I enjoy these little things. Just like having an Easter egg somewhere. I don't know if you're familiar with that?

Ryan: Yeah.

Daniel: It's basically some small, I don't know, how would you explain Easter egg?

Ryan: Yeah, something hidden, you know, like you're playing a game. It's something that only you could know, or the only way to get it, is really either tripping over it, or actually maybe hearing from someone that it's there. Like the invisible blocks from the old Mario game. I remember a friend told me, I knew where the block was, I jump at the right place, there was my Easter egg, sort of.

Daniel: So, actually when I was working on that webcam tool, it was called Cheese, we had several Easter eggs in there, just to make it fun. Actually, it was so funny because we didn't tell anybody, but then someone probably got an email about people who discovered it. And I remember one Easter egg was, if you made a photo, it was making a photo sound, like a click, or something like that. And if you pressed a certain combination, it would change that sound to – we actually recorded a few voices, making some comments about the people. Like there was somebody laughing at the person, or "Oh you look so sweet." Stuff like that. People were freaking out over that.

Ryan: And they just happened upon these combinations?

Daniel: Yeah, it was not too hard to figure it out, but you actually had to find it. We had several things like these in there. Coming back to your other question about the human side in computers, I think we lost a lot of that, because if you go back to 60s and 70s, when the computer was a really new tool, and people were still discovering what you could actually do with a computer. A lot of people were coming from biology, or literature, or physics, or mathematics and so on. They always thought, "Okay, this is like a machine, and I want to use it for... to make my life easier." We had a lot of progress in that time.

And there were actually, in that time, there were two camps, the one was the AI camp and the other was the IA camp. The Artificial Intelligence camp was thinking about, "Okay lets make computers really intelligent, and we don't have to do anything." And the other camp was Intelligence Augmentation, which was coming from the other way, "No let's make humans smarter, let's evolve computers into tools humans can use. Make humans smarter, and then when humans are smarter, we can actually make even better tools for us." And so have like a continual co-evolution. I actually gave a talk last year in September in Belgrade about that topic called the Lost Medium.

Ryan: Nice.

Daniel: Where I was talking about how we lost that thing, and that we actually have to see the computer as a tool to make our lives better. We can see some small examples nowadays, like for example, you're in a foreign city, and you have Google Maps with you. You're safe. You just like look at your phone, and it will guide you to where we need to go.

Ryan: As long as you have a data plan.

Daniel: Do you remember when you were like going on holidays with your mom and pap, like I don't know, 20 years ago, and they were fighting over where you have to go, and like, "You have to tell me earlier...," like it was really stressful. And now look, we have a GPS there, and it‘s like, "Oh okay, we‘re arriving in 20 minutes, and everything's fine and so on."

Ryan: There's no question, just who's app is maybe more correct than the other one.

Daniel: Yeah, and that's a perfect example of like augmenting humans. There are lots of other examples, but we fail in many areas. Like for example, if I want to share an article, and add a comment to why it's important for you, because we were talking about the thing, it's really hard to do. I mean your probably write an email but then it's not really searchable, and you forget about it, and these are not really difficult things. Or the other thing, like before we met I wanted to call you, my phone didn't know your number. Why not? I mean we had a conversation over email before, and over LinkedIn–

Ryan: So it would make sense that we could–

Daniel: It would make sense that my computer's smart enough to figure out we‘re connected, and I mean you don't have to know my bank account number, but you can know my, I don't know, street address maybe, and my phone number for example. And then, let's say we have a drink afterwards, and you don't have any money there, it would be easier to just like give you access to my bank account, you just like transfer money there, and that's it. You could easily create an app for that, or a tool or something like that, and there were many examples of startups doing exactly that. But I think that's not the entire solution, because we have to think...

We have to go a step back, and think about what's the actual problem here. The actual problem is not that I don't have your contact information, it's more about how can I stay in contact with you? That's the main thing. Like for example, if you change your number, email address and so on. And by looking at that you see – you start to notice these small issues all the time. Like for example a few days ago I wanted to send an email to my brother-in-law, and I wrote an email, and just before sending it, I remembered myself that he actually changed his company, so his email address was not the correct one anymore. And I mean it's my brother-in-law, so of course we connected, but why don't I have his new work email?

Ryan: Yeah, and that could have been your only way to get in touch with him, and then there would have broken an entire–

Daniel: Now I have to actually send him a text asking him like, "Hey, what's your email address?" And then he was just like, "Yeah, you can call me as well and we'll talk." It‘s just struggle all the time, like with these small things.

Ryan: Not smooth.

Daniel: Exactly.

Ryan: So, what are you doing at the moment to fix that problem? Are you creating more apps, or tools like that?

Daniel: The thing that we were talking about is the vision behind my business, which is fueling it, and of course, I'm doing websites, web processes, so that's what I help my clients with. On the side, in my spare time I prepare some talks, some prototypes, the Lost Medium is an example of that one. Just to figure out what's going on, like are there different solutions to that? Then I have a more or less monthly series on my blog and my newsletter called summing up where I try to collect puzzle pieces in that area, and try to put them together, and see what's missing there. Because I don't think that there's a single solution to the problem.

I think the main thing is we kind of lost the idea that – let me put it in another way: We think the computer is on the top of it's game. And if you look back, each year, the computer gets better. A phone gets better, it gets faster, it gets bigger, or smaller, depending on what you want to do, we have more devices and so on. People have the impression that the computer's state of the art, which of course it's not. And I think it's... It's about like sharing that idea that we're not at the end yet.

Ryan: Yeah, maybe just at the beginning actually.

Daniel: Oh definitely at the beginning. I mean the computer is a completely new medium, and if you look at what happened to of course TV and radio, but also the printing press, or cars, they completely changed the whole world. And there is a great saying by Marshall McLuhan, he said, "We shape our tools and thereafter our tools shape us." And if you look at cars, it's the perfect example. You see like we have streets everywhere. A city is made up of streets right, 50% is streets, if not more. And this wasn't the case before.

Ryan: Yeah well, it definitely seems like you're on the right path.

Daniel: I hope so. [laughs]

Ryan: Yeah, I mean I've seen a lot of things recently for example showing you know, that the best solution is not a human working alone, or a machine working alone, it's them working together. Especially like in chess, some of these things that they've done, so it seems like in the future hopefully that'll continue to be the best solution. And as you said before, keep making ourselves better, so we can create even better tools and then we'll see how they shape us though. At the moment you got, the entire world with their heads down, 15°, looking into their phones. That definitely has helped out chiropractors for sure. [laughs]

We'll see what other effects it has. So, for someone interested in contacting you, getting to know more about your business, where should they go? What's the best place to check your business out.

Daniel: Definitely my website, it's dgsiegel.net. We probably put it in the show notes.

Ryan: Yeah, it'll definitely be in the show notes.

Daniel: You can find out what I'm doing there. I also got a blog and a newsletter, but for your listeners, I prepared a small cheat sheet, where you can learn how to create the perfect landing page which actually converts visitors to customers, and you can find it at dgsiegel.net/ybim.

Ryan: Thank you.

Daniel: Of course.

Ryan: Yeah, I'll definitely be downloading that myself. [laughs] At least you'll get one from that.

Daniel: There's everything there.

Ryan: Perfect, along with any email address you want.

Daniel: Exactly, choose any. [laughs]

Ryan: Very cool. Well thanks again for taking the time to share your experience, and your business with us.

Daniel: Sure, thank you for your time.

Thank you for listening to Your Business in Munich. For more information on maximizing your personal and professional potential, go to ryanlsink.com. Have a great rest of your day.

March 21, 2017

Announcing the Shim review process

Shim has been hugely successful, to the point of being used by the majority of significant Linux distributions and many other third party products (even, apparently, Solaris). The aim was to ensure that it would remain possible to install free operating systems on UEFI Secure Boot platforms while still allowing machine owners to replace their bootloaders and kernels, and it's achieved this goal.

However, a legitimate criticism has been that there's very little transparency in Microsoft's signing process. Some people have waited for significant periods of time before receiving a response. A large part of this is simply that demand has been greater than expected, and Microsoft aren't in the best position to review code that they didn't write in the first place.

To that end, we're adopting a new model. A mailing list has been created at shim-review@lists.freedesktop.org, and members of this list will review submissions and provide a recommendation to Microsoft on whether these should be signed or not. The current set of expectations around binaries to be signed is documented here, and the current process here - it is expected that this will evolve slightly as we get used to the process, and we'll provide a more formal set of documentation once things have settled down.

This is a new initiative and one that will probably take a little while to get working smoothly, but we hope it'll make it much easier to get signed releases of Shim out without compromising security in the process.


The time of the year…

Springtime is releasetime!

Monday saw a couple of new releases:

Shotwell

Shotwell 0.26.0 “Aachen” was released. No “grand” new features, more slashing of papercuts and internal reworks. I removed a big chunk of deprecated functions from it, with more to come for 0.28 on our way to GTK+4, and laid the groundwork for better integration into desktop online account systems such as UOA and GOA.

GExiv2 also received a bugfix release with its main highlight of proper documentation generation.
 

Rygel

In Rygel, things are more quiet. Version 0.34.0 moved some helpful classes for configuration handling to librygel-core and a couple of bugs were fixed. GSSDP and GUPnP also saw a small bugfix release.

LibreOffice 5.3.1 is out

Last week, LibreOffice released version 5.3.1. This seems to be an incremental release over 5.3 and doesn't seem to change the new user interface in any noticeable way.

This is both good and bad news for me. As you know, I have been experimenting with LibreOffice 5.3 since LibreOffice updated the user interface. Version 5.3 introduced the "MUFFIN" interface. MUFFIN stands for My User Friendly Flexible INterface. Because someone clearly wanted that acronym to spell "MUFFIN." The new interface is still experimental, so you'll need to activate it through Settings→Advanced. When you restart LibreOffice, you can use the View menu to change modes.

So on the one hand, I'm very excited for the new release!

But on the other hand, the timing is not great. Next week would have been better. Clearly, LibreOffice did not have my interests in mind when they made this release.

You see, I teach an online CSCI class about the Usability of Open Source Software. Really, it's just a standard CSCI usability class. The topic is open source software because there are some interesting usability cases there that bear discussion. And it allows students to pick their own favorite open source software project that they use in a real usability test for their final project.

This week, we are doing a usability test "mini-project." This is a "dry run" for the students to do their own usability test for the first time. Each student is doing the test with one participant, but using the same program. We're testing the new user interface in LibreOffice 5.3, using Notebookbar in Contextual Groups mode.

So we did all this work to prep for the usability test "mini-project" using LibreOffice 5.3, only for the project to release version 5.3.1 right before we do the test. So that's great timing, there.

But I kid. And the new version 5.3.1 seems to have the same user interface path in Notebookbar-Contextual Groups, so our test should yield the same results in 5.3 or 5.3.1.

This is an undergraduate class project, and will not generate statistically significant results like a formal usability test in academic research. But the results of our test may be useful, nonetheless. I'll share an overview of our results next week.

GNOME Photos 3.24.0

After exploring new territory with sharing and non-destructive editing over the last two releases, it was time for some introspection. We looked at some of the long-standing problems within our existing feature set and tried to iron out a few of them.

Overview Grids

It was high time that we overhauled our old GtkIconView-based overview grids. Their inability to reflow the thumbnails leads to an ugly vertical gutter of empty space unless the window is just the right size. The other problem was performance. GtkIconView gets extremely slow when the icons are updated, which usually happens when content is detected for the first time and starts getting thumbnailed.

gnome-photos-flowbox-1

Fixing this has been a recurrent theme in Photos since the middle of the previous development cycle. The end goal was to use a GtkFlowBox-based grid, but it involved a lot more work than replacing one user interface component with another.
Too many things relied on the existence of a GtkTreeModel, and had to be ported to our custom GListModel implementation before we could achieve any user-visible improvement. Once all those yaks had been shaved, we finally started working on the widget at the Core Apps Hackfest last year.
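
To give an idea of what the GtkFlowBox side of this looks like, here is a minimal, hypothetical sketch of binding a GListModel to a GtkFlowBox — not the actual Photos code; the item type and its "name" property are placeholders for whatever the model really holds:

#include <gtk/gtk.h>

/* Placeholder create-widget callback: builds a child widget for each item in
 * the model. The "name" property is an assumption, not the real Photos API. */
static GtkWidget *
create_item_widget (gpointer item, gpointer user_data)
{
  gchar *name = NULL;
  GtkWidget *label;

  g_object_get (item, "name", &name, NULL);
  label = gtk_label_new (name);
  g_free (name);

  return label;
}

static GtkWidget *
create_overview_grid (GListModel *model)
{
  GtkWidget *flow_box = gtk_flow_box_new ();

  /* Unlike GtkIconView, the flow box reflows its children when the window is
   * resized, and each child is created on demand via the callback above. */
  gtk_flow_box_bind_model (GTK_FLOW_BOX (flow_box), model,
                           create_item_widget, NULL, NULL);

  return flow_box;
}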

Anyway, I am happy that all that effort has come to fruition now.

Thumbnails

Closely related to our overview grids are the thumbnails inside them. Photos has perpetually suffered from GIO’s inability to let an application specifically request a high resolution thumbnail. While that is definitely a fixable problem, the fact that we store our edits non-destructively as serialized GEGL graphs makes it very hard to use the desktop-wide infrastructure for thumbnails. One cannot expect a generic thumbnailer to interpret the edits and apply them to the original image because their representation will vary greatly from one application to another. That led to the other problem where the thumbnails wouldn’t reflect the edited state of an image.

Therefore, starting from version 3.24.0, Photos has its own out-of-process thumbnailer and a separate thumbnail cache. They ensure that the thumbnails are of a suitably high resolution, and the edited state of an image is never ignored.

Exposure and Blacks

Personally, I have been a heavy user of Darktable’s exposure and blacks adjustment tool, and I really missed something like that in GNOME Photos. Ultimately, at this year’s WilberWeek I fixed gegl:exposure to imitate its Darktable counterpart, and exposed it as a tool in Photos. I am happy with the outcome and I have so far enjoyed dogfooding this little addition.


March 20, 2017

Buying a Utah teapot

The Utah teapot was one of the early 3D reference objects. It's canonically a Melitta but hasn't been part of their range in a long time, so I'd been watching Ebay in the hope of one turning up. Until last week, when I discovered that a company called Friesland had apparently bought a chunk of Melitta's range some years ago and sell the original teapot[1]. I've just ordered one, and am utterly unreasonably excited about this.

Update: Friesland have apparently always produced the Utah teapot, but were part of the Melitta group for some time - they didn't buy the range from Melitta.

[1] They have them in 0.35, 0.85 and 1.4 litre sizes. I believe (based on the measurements here) that the 1.4 litre one matches the Utah teapot.


WebKitGTK+ 2.16

The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically reduces memory consumption, adds new API required by applications, includes new debugging tools, and of course fixes a lot of bugs.

Memory consumption

After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which meant hardware acceleration was always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the biggest improvement in memory consumption, but not the only one. Even when in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. In Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

CSS Grid Layout

Yes, the future is here and now available by default in all WebKitGTK+ based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.

New API

The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

Hardware acceleration policy

Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior, WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, but only for developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view, like for example the Evolution composer, while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources on the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.
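
For example, an embedded-style application that always wants compositing could set the new policy roughly like this (a simplified sketch, not code from any real application):

#include <webkit2/webkit2.h>

/* Sketch: force accelerated compositing for a web view that is known to need
 * it, instead of relying on the WEBKIT_FORCE_COMPOSITING_MODE variable. */
static void
force_hardware_acceleration (WebKitWebView *web_view)
{
  WebKitSettings *settings = webkit_web_view_get_settings (web_view);

  webkit_settings_set_hardware_acceleration_policy (settings,
      WEBKIT_HARDWARE_ACCELERATION_POLICY_ALWAYS);
}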

Network proxy settings

Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases WebKitGTK+ 2.16 includes a new UI process API to configure all proxy settings available in the GProxyResolver API.
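
A minimal sketch of what using the new API could look like; the proxy URI and ignore list below are made-up values:

#include <webkit2/webkit2.h>

/* Sketch: route the network traffic of a web context through a fixed HTTP
 * proxy, except for localhost. */
static void
set_custom_proxy (WebKitWebContext *context)
{
  const gchar * const ignore_hosts[] = { "localhost", NULL };
  WebKitNetworkProxySettings *proxy =
    webkit_network_proxy_settings_new ("http://proxy.example.com:8080",
                                       ignore_hosts);

  webkit_web_context_set_network_proxy_settings (context,
      WEBKIT_NETWORK_PROXY_MODE_CUSTOM, proxy);
  webkit_network_proxy_settings_free (proxy);
}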

Private browsing

WebKitGTK+ has always had a WebKitSetting to enable or disable private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode, just by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data there. WebKitGTK+ 2.16 adds a new API that allows creating ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all web views associated with them will automatically be ephemeral.
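
For example, a browser could build its private-browsing windows roughly like this (a sketch, not Epiphany’s actual code):

#include <webkit2/webkit2.h>

/* Sketch: create a web view whose context never writes persistent data. */
static GtkWidget *
create_private_browsing_view (void)
{
  WebKitWebContext *context = webkit_web_context_new_ephemeral ();
  GtkWidget *view = webkit_web_view_new_with_context (context);

  /* The view keeps its own reference to the ephemeral context. */
  g_assert (webkit_web_view_is_ephemeral (WEBKIT_WEB_VIEW (view)));
  g_object_unref (context);

  return view;
}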

Website data

WebKitWebsiteDataManager was added in 2.10 to configure the default paths on which website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.
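
As a rough sketch (the exact data types an application passes will vary), removing all client-side data could look like this:

#include <webkit2/webkit2.h>

/* Sketch: asynchronously clear every kind of website data; a timespan of 0
 * means data is removed regardless of when it was modified. */
static void
clear_all_website_data (WebKitWebContext *context)
{
  WebKitWebsiteDataManager *manager =
    webkit_web_context_get_website_data_manager (context);

  webkit_website_data_manager_clear (manager, WEBKIT_WEBSITE_DATA_ALL,
                                     0, NULL, NULL, NULL);
}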

Dynamically added forms

Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded. In those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.

Custom print settings

The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 an API similar to the GTK+ one has been added to recover that functionality in Evolution.

Notification improvements

Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.
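
A small, hypothetical sketch of pre-seeding a permission (the origin below is just an example):

#include <webkit2/webkit2.h>

/* Sketch: tell the web context up front that example.org may show
 * notifications, so the user is not prompted again. */
static void
allow_notifications_for_known_site (WebKitWebContext *context)
{
  WebKitSecurityOrigin *origin =
    webkit_security_origin_new_for_uri ("https://example.org");
  GList *allowed = g_list_prepend (NULL, origin);

  webkit_web_context_initialize_notification_permissions (context,
                                                          allowed, NULL);

  g_list_free_full (allowed, (GDestroyNotify) webkit_security_origin_unref);
}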

Debugging tools

Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

Memory sampler

This tool allows you to monitor the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes will automatically take samples of memory usage every second. For every sample a detailed report of the memory used by the process is generated and written to a file in the temp directory.

$ WEBKIT_SAMPLE_MEMORY=1 MiniBrowser 
Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036

The files contain a list of sample reports like this one:

Timestamp                          1490004807
Total Program Bytes                1960214528
Resident Set Bytes                 84127744
Resident Shared Bytes              68661248
Text Bytes                         4096
Library Bytes                      0
Data + Stack Bytes                 87068672
Dirty Bytes                        0
Fast Malloc In Use                 86466560
Fast Malloc Committed Memory       86466560
JavaScript Heap In Use             0
JavaScript Heap Committed Memory   49152
JavaScript Stack Bytes             2472
JavaScript JIT Bytes               8192
Total Memory In Use                86477224
Total Committed Memory             86526376
System Total Bytes                 16729788416
Available Bytes                    5788946432
Shared Bytes                       1037447168
Buffer Bytes                       844214272
Total Swap Bytes                   1996484608
Available Swap Bytes               1991532544

Resource usage overlay

The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory and JavaScript garbage collector timers information. The overlay can be shown/hidden by pressing CTRL+Shift+G.

We plan to add more information to the overlay in the future like memory cache status.

Blender Constraints

Last time I wrote about artistic constraints being useful for staying focused and pushing yourself to the max. In the near future I plan to dive into the new constraint-based layout of gtk4, Emeus. Today I’ll briefly touch on another type of constraint, the Blender object constraint!

So what are they and how are they useful in the context of a GNOME designer? We make quite a few prototypes, and one of the ways to decide whether a behavior is clear and comprehensible is motion design, particularly transitions. And while we do not use tools directly linked to our stack, it helps to build simple rigs to lower the manual labor required to make sometimes similar motion designs and to limit the number of mistakes that can be made. Even simple animations usually consist of many keyframes (defined, non-computed states in time). Defining relationships between objects and creating setups, “rigs”, is a way to create a sort of working model of the object we are trying to mock up.

Blender Constraints Blender Constraints

Constraints in Blender allow you to define certain behaviors of objects in relation to others. They let you limit the movement of an object to specific ranges (a scrollbar not being able to be dragged outside of its gutter), or convert a certain motion of one object into a different transformation of another (a slider adjusting the horizon of an image, i.e. rotating it).

The simplest method of defining a relation is through a hierarchy. An object can become a parent of another, and thus all children will inherit the movements/transforms of the parent. However, there are cases — like interactions of a cursor with other objects — where this relationship is only temporary. Again, constraints help here, in particular the copy location constraint. This is because you can define the influence strength of a constraint. Like everything in Blender, this can also be keyframed, so at some point you can follow the cursor and later disengage this tight relationship. Btw, if you ever thought you could manually keyframe two animations so they do not slide, think again.

Inverse transform in Blender Inverse transform in Blender

The GIF screencasts have been created using Peek, which is available to download as a flatpak.

Peek, a GIF screencasting app. Peek, a GIF screencasting app.

Builder 3.24

I’m excited to announce that Builder 3.24 is here and ready for you to play with!

It should look familiar because most of the work this cycle was under the hood. I’m pretty happy with all the stabilization efforts from the past couple of weeks. I’d like to give a special thanks to everyone who took the time to file bugs, some of whom also filed patches.

With Outreachy and GSoC starting soon, I’m hoping that this will help reduce any difficulty for newcomers to start contributing to GNOME applications. I expect we’ll continue to polish that experience for our next couple of patch releases.

March 18, 2017

Builder on the Lunduke Hour

In case you missed it, I was on the Lunduke Hour last week talking about Builder. In reality it turned into a discussion about everything from why Gtk 4, to efficient text editor design, creating UI designers, Flatpak, security implications of the base OS, and more.

Vala 0.36 Released

This cycle Vala has received a lot of love from its users and maintainers. Users and maintainers have pushed hard to get a lot of bug fixes in place, thanks to the many patches attached to bug reports.

The list of new features and bug fixes is in the NEWS file in the repository. Bindings have received a lot of fixes too; check them out and see if you still need a workaround.

Many thanks to all contributors and maintainers for making this release a big one.

Highlights

  • Update manual using DocBook from wiki.gnome.org as source [#779090]
  • Add support for array-parameters with rank > 1 in signals [#778632]
  • Use GTask instead of GSimpleAsyncResult with GLib 2.36/2.44 target [#763345]
  • Deny access to protected constructors [#760031]
  • Support [DBus (signature = …)] for properties [#744595]
  • Add [CCode (“finish_instance = …”)] attribute [#710103]
  • Support [HasEmitter] for vala sources [#681356]
  • Add support for the \v escape character [#664689]
  • Add explicit copy method for arrays [#650663]
  • Allow underscores in type parameter names [#644938]
  • Support [FormatArg] attribute for parameters
  • Ignore --thread commandline option and drop gthread-2.0 references
  • Check inferred generic-types of MemberAccess [#775466]
  • Check generic-types count of DelegateType [#772204]
  • Fix type checking when using generics in combination with subtype [#615830]
  • Fix type parameter check for overriding generic methods
  • Use g_signal_emit where possible [#641828]
  • Only emit notify of properties if value actually changed [#631267] [#779955]
  • Mark chained relational expressions as stable [#677022]
  • Perform more thorough compatibility check of inherited properties [#779038]
  • Handle nullable ValueTypes in signals delegates properly [#758816]

Defence against the Dark Arts involves controlling your hardware

In light of the Vault 7 documents leak (and the rise to power of Lord Voldemort this year), it might make sense to rethink just how paranoid we need to be.  Jarrod Carmichael puts it quite vividly:

I find the general surprise… surprising. After all, this is in line with what Snowden told us years ago, which was already in line with what many computer geeks thought deep down inside for years prior. In the good words of monsieur Crête circa 2013, the CIA (and to an extent the NSA, FBI, etc.) is a spy agency. They are spies. Spying is what they’re supposed to do! 😁

Well, if these agencies are really on to you, you’re already in quite a bit of trouble to begin with. Good luck escaping them, other than living in an embassy or airport for the next decade or so. But that doesn’t mean the repercussions of their technological recklessness—effectively poisoning the whole world’s security well—are not something you should ward against.

It’s not enough to just run FLOSS apps. When you don’t control the underlying OS and hardware, you are inherently compromised. It’s like driving over a minefield with a consumer-grade Hummer while dodging rockets (at least use a hovercraft or something!) and thinking “Well, I’m not driving a Ford Pinto!” (but see this post where Todd Weaver explains the implications much more eloquently—and seriously—than I do).

Considering the political context we now find ourselves in, pushing for privacy and software freedom has never been more relevant, as Karen Sandler pointed out at the end of the year. This is why I’m excited that Purism’s work on coreboot is coming to fruition and that it will be neutralizing the Intel Management Engine on its laptops, because this is finally providing an option for security-concerned people other than running exotic or technologically obsolete hardware.

March 17, 2017

Recipes 1.0

Recipes 1.0 is here, in time for GNOME 3.24 next week. You can get it here:

https://download.gnome.org/sources/gnome-recipes/1.0/

A flatpak is available here:

https://matthiasclasen.github.io/recipes-releases/gnome-recipes.flatpakref

and can be installed with

flatpak install https://matthiasclasen.github.io/recipes-releases/gnome-recipes.flatpakref

Thanks to everybody who helped us to reach this point by contributing recipes, sending patches, translations or bug reports!

Documentation

Recipes looks pretty good in GNOME Software already, but one thing is missing: the lack of documentation costs us a perfect rating. Thankfully, Paul Cutler has shown up and started to fill this gap, so we can get the last icon turned blue with the next release.

Since one of the goals of Recipes is to be an exemplary Flatpak app, I took this opportunity to investigate how we can handle documentation for sandboxed applications.

One option is to just put all the docs on the web and launch a web browser, but that feels a bit like cheating. Another option is to export all the documentation files from the sandbox and launch the host help browser on it. But this would require us to recursively export an entire directory full of possibly malicious content – so far, we’ve been careful to only export individual, known files like the desktop file or the app icon.

Therefore, we decided that we should instead ship a help browser in the GNOME runtime and launch it inside the sandbox. This turns out to work reasonably well, and will be used by more GNOME apps in the near future.

Interns

Apart from this ongoing work on documentation, a number of bug fixes and small improvements have found their way into the 1.0 release. For example, you can now find recipes by searching for the chef by name. And we ask for confirmation if you are about to close the window with unsaved changes.

Some of these changes were contributed by prospective Outreachy interns.

Roadmap

I have mentioned it before: you can find some information about our future plans for Recipes here:

https://wiki.gnome.org/Apps/Recipes/Development

Your help is more than welcome!

GUADEC 2017 on the cheap

I’ve just booked flight and hotel for GUADEC 2017, which will be held in Manchester. André suggested that I should decide this time. We’ll be staying in a wheelchair-accessible room (the room is slightly bigger :P) with Easyhotel. It’s 184 GBP for 5 nights and NOT close to the venue (but not bad via public transport). Easyhotel works like a budget airline: you’ll have to pay more for WiFi, cleaning, breakfast, a remote, etc. I ignored all of these essential things, which means André has to do without them as well. The paid WiFi might even be iffy, so I’d rather use my mobile data, and as of mid-June that shouldn’t cost anything extra thanks to new EU regulations. Before GUADEC I might switch to another mobile phone company to get 4-5GB/month for 18 EUR/month. André will probably want to work remotely. Let’s see closer to the date what’s a good solution (share my data?).

Flight-wise, for me Easyjet is cheapest (70 EUR) and it’s the fastest method. Funny to combine Easyjet with Easyhotel. I usually use a combination of Google Flights and Skyscanner to see the cheapest options. However, rome2rio works as well; the latter will also check alternative methods to get to Manchester, e.g. via Liverpool and so on. For Skyscanner, somehow the Dutch version often gives me cheaper options than Skyscanner in other languages. Google Flights usually is much more expensive. I only use Google Flights to determine the cheapest days, then switch to Skyscanner to get the lowest price.

 

A simple house-moving tip: use tape to mark empty cupboards

When you've emptied a cupboard, put masking tape across it, ideally in a colour that's highly visible. This way you immediately know which ones are finished and which ones still need attention. You won't keep opening the cupboard a million times to check, and after the move it takes merely seconds to undo.

March 16, 2017

Is static linking the solution to all of our problems?

Almost all programming languages designed in the last couple of years have a strong emphasis on static linking. Their approach to dependencies is to have them all in source which is compiled for each project separately. This provides many benefits, such as binaries that can be deployed everywhere and not needing to have or maintain a stable ABI in the language. Since everything is always recompiled and linked from scratch (apart from the standard library), ABI is not an issue.

The proponents of static linking often claim that shared libraries are unnecessary. Recompiling is fast and disks are big, thus it makes more sense to link statically than define and maintain ABI for shared libraries, which is a whole lot of ungrateful and hard work.

To see if this is the case, let's do an approximation experiment.

Enter the Dost!

Let's assume a new programming language called Dost. This language is special in that it provides code that is just as performant as the equivalent C code and takes the same amount of space (which is no small feat). It has every functionality anyone would ever need, does not require a garbage collector and whose syntax is loved by all. The only thing it does not do is dynamic linking. Let us further imagine that, by magic, all open source projects in the world get rewritten in Dost overnight. How will this affect a typical Linux distro?

Take for example the executables in /usr/bin. They are all implemented in Dost, and thus are linked statically. They are probably a bit larger than their original C versions which were linked dynamically. But by how much? How would we find out?

Science to the rescue

Getting a rough ballpark estimate is simple. Running ldd /usr/bin/executable gives a list of all libraries the given executable links against. If it were linked statically, the executable would have a duplicate copy of all these libraries. Said in another way, each executable grows by the size of its dependencies. Then it is a matter of writing a script that goes through all the executables, looks up their dependencies, removes language standard libraries (libc, stdlibc++, a few others) and adds up how much extra space these duplicated libraries would take.

The script to do this can be downloaded from this Github repo. Feel free to run it on your own machines to verify the results.
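
For illustration, here is a rough, simplified C sketch of the same idea — parse ldd output, skip the language standard libraries, and add up the on-disk sizes of everything else. The real script linked above is the authoritative version; this one cuts corners on error handling and on the library filter:

#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Very rough filter for "language standard libraries" that would not be
 * duplicated by static linking. */
static int
is_std_lib (const char *path)
{
  return strstr (path, "libc.so") || strstr (path, "libstdc++.so") ||
         strstr (path, "libm.so") || strstr (path, "ld-linux");
}

/* Sum the sizes of the shared libraries an executable links against, i.e.
 * the bytes it would carry around as duplicates if linked statically. */
static long long
duplicated_bytes (const char *exe)
{
  char cmd[512], line[1024], path[1024];
  long long total = 0;
  FILE *p;

  snprintf (cmd, sizeof cmd, "ldd %s 2>/dev/null", exe);
  p = popen (cmd, "r");
  if (!p)
    return 0;

  while (fgets (line, sizeof line, p))
    {
      /* ldd lines look like "libfoo.so.1 => /lib/libfoo.so.1 (0x...)". */
      char *arrow = strstr (line, "=> /");
      struct stat st;

      if (arrow && sscanf (arrow + 3, "%1023s", path) == 1 &&
          !is_std_lib (path) && stat (path, &st) == 0)
        total += st.st_size;
    }

  pclose (p);
  return total;
}

int
main (int argc, char **argv)
{
  long long total = 0;

  /* Usage: ./measure /usr/bin/* */
  for (int i = 1; i < argc; i++)
    total += duplicated_bytes (argv[i]);

  printf ("Extra bytes if everything were statically linked: %lld\n", total);
  return 0;
}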

Measurement results

Running that script on a Raspberry Pi with Raspbian used for running an IRC client and random compile tests says that statically linked binaries would take an extra 4 gigabytes of space.

Yes, really.

Four gigabytes is more space than many people have on their Raspi SD card. Wasting all that on duplicates of the exact same data does not seem like the best use of those bits. The original shared libraries take only about 5% of this; static linking expands them 20-fold. Running the measurement script on a VirtualBox Ubuntu install says that on that machine the duplicates would take over 10 gigabytes. You can fit an entire Ubuntu install in that space. Twice. Even if this were not an issue for disk space, it would be catastrophic for instruction caches.

A counterargument people often make is that static linking is more efficient than dynamic linking because the linker can throw away those parts of dependencies that are not used. If we assume that the linker did this perfectly, executables would need to use only 5% of the code in their dependencies for static linking to take less space than dynamic linking. This seems unlikely to be the case in practice.

In conclusion

Static linking is great for many use cases. These include embedded software, firmwares and end user applications. If your use case is running a single application in a container or VM, static linking is a great solution that simplifies deployment and increases performance.

On the other hand claiming that a systems programming language that does not provide a stable ABI and shared libraries can be used to build the entire userland of a Linux distribution is delusional. 

Bosch Connected Experience: Eclipse Hono and MsgFlo

I’ve been attending the Bosch Connected Experience IoT hackathon this week at Station Berlin. Bosch brought a lot of different devices to the event, all connected to send telemetry to Eclipse Hono. To make them more discoverable, and enable rapid prototyping I decided to expose them all to Flowhub via the MsgFlo distributed FBP runtime.

The result is msgflo-hono, a tool that discovers devices from the Hono backend and exposes them as foreign participants in a MsgFlo network.

BCX Open Hack

This means that when you connect Flowhub to your MsgFlo coordinator, you have all connected devices appear there, with a port for each sensor they expose. And since this is MsgFlo, you can easily pipe their telemetry data to any Node.js, Python, Rust, or other program.

Hackathon project

Since this is a hackathon, there is a competition for projects made at this event. To make the Hono-to-MsgFlo connectivity and Flowhub's visual programming capabilities more demoable, I ended up hacking together a quick example project — a Bosch XDK-controlled air theremin.

This comes in three parts. First of all, we have the XDK exposed as a MsgFlo participant, and connected to a NoFlo graph running on Node.js.

Hono telemetry on MsgFlo

The NoFlo graph starts a web server and forwards the telemetry data to a WebSocket client.

NoFlo websocket server

Then we have a forked version of Vilson’s webaudio theremin that uses the telemetry received via WebSockets to make sound.

NoFlo air theremin

The whole setup seems to work pretty well. The XDK is connected to WiFi here, transmits its telemetry to a Hono instance running on AWS. This data gets forwarded to the MsgFlo MQTT network, and from there via WebSocket to a browser. And all of these steps can be debugged and experimented with in a visual way.

Relevant links:

Update: we won the Open Hack Challenge award for technical brilliance with this project.

BCX entrance

March 15, 2017

guile 2.2 omg!!!

Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

high fives all around

As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from (err, working with) them in the future.

Of course the ideas for the contification and closure optimization passes are indebted to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

a digression on the nature of seeking and knowledge

I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

getting the goods

It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

(If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)

Karton – running Linux programs on macOS, a different Linux distro, or a different architecture

At work I use Linux, but my personal laptop is a Mac (due to my previous job developing for macOS).

A few months ago, I decided I want to be able to do some work from home without carrying my work laptop home every day.
I considered using a VM, but I don’t like the experience of mixing two operating systems. On Mac I want to use the native key bindings and applications, not a confusing mix of Linux and Mac UI applications.

In the end, I wrote Karton, a program which, using Docker, manages semi-persistent containers with easy to use automatic folder sharing and lots of small details which make the experience smooth. You shouldn’t notice you are using command line programs from a different OS.

Karton logo

After defining which distro and packages you need (this is called an “image”), you can just execute Linux programs by prefixing them with karton run IMAGE-NAME LINUX-COMMAND. For example:

$ uname -a # Running on macOS.
Darwin my-hostname 16.4.0 Darwin Kernel Version 16.4.0 [...]

$ # Run the compiler in the Ubuntu image we use for work
$ # (which we called "ubuntu-work"):
$ karton run ubuntu-work gcc -o test_linux test.c

$ # Verify that the program is actually a Linux one.
$ # The files are shared and available both on your
$ # system and in the image:
$ file test_linux
test_linux: ELF 64-bit LSB executable, x86-64, [...]

Karton runs on Linux as well, so you can do development targeting a different distro or a different architecture (for instance ARMv7 while using an x86_64 computer).

For more examples, installation instructions, etc. see the Karton website.

March 14, 2017

Approaching 3.24

So, we have just entered code freeze approaching the GNOME 3.24 release, which is scheduled for next week.
In Maps, I just released the final beta (3.23.92) tarball yesterday, but since I made a git mistake (shit happens), I had to push an additional tag, so if you want to check out the release using the tag (and not using a tarball), go for “v3.23.92-real”.
Right after the release I got some reports of Maps not working when running sandboxed from a Flatpak package. It seems that the GOA (GNOME Online Accounts) service is not reachable via D-Bus from inside the sandbox, and while this is something that should be fixed (for applications that need this privilege when running sandboxed) we still shouldn't crash. Actually, I think this might also affect people wanting to run Maps in a more minimalistic non-GNOME environment. So I have proposed a patch that should fix this issue, and hopefully we can get time to review it and get a freeze exception for this as a last-minute fix.

Another thing I thought I should point out after reading the article on Phoronix after my last blog post about landing the transit routing feature, and I was perhaps a bit unclear on this, is that using the debug environment variable to specify the URL for an OpenTripPlanner server requires that you have an instance to point it to, such as running your own server or using a third-party server. We will not have the infrastructure in place for a GNOME-hosted server in time for the 3.24 release.

Looking ahead towards 3.26, I have some ideas both as proof-of-concept implementation and some thoughts, but more on that later!

New GtkTester project

In my recent private development work, I have needed to create GTK+ widget libraries and test them before using them in applications.

There are plenty of efforts to provide automated GUI testing; this is another one that works in my case, and I would like to share it. It is written in Vala and is a GTK+ library with just one top-level window: you can attach the widget you want to test, add test cases, check status and finish by calling asserts. Feel free to ask for anything you need or file issues, in order to improve it.

Sorry if the name is too GTKish and someone would like to change it to avoid implying any “official backing from GNOME”, which is not the case.

I hope to improve this library, adding more documentation in order to help others use it, if they find it useful.

Enjoy it at GitHub.
