GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

March 24, 2017

C++ Cheat Sheet

I spend most of my time writing and reading C code, but every once in a while I get to play with a C++ project and find myself doing frequent reference checks to cppreference.com. I wrote myself the most concise cheat sheet I could that still shaved off the majority of those quick checks. Maybe it helps other fellow programmers who occasionally dabble with C++.

class ClassName {
  int priv_member;  // private by default
protected:
  int protect_member;
public:
  ClassName();  // constructor
  int get_priv_mem();  // just prototype of func
  virtual ~ClassName() {} // destructor
};

int ClassName::get_priv_mem() {  // define via scope
  return priv_member;
}

class ChildName : public ClassName, public CanDoMult {
public:
  ChildName() {
    protect_member = 0;
  } ...
};

class Square {
  friend class Rectangle; ... // Rectangle can access Square's private members
};
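
For completeness, here is a tiny self-contained usage example of the pieces above (constructor with an initializer list, inheritance, and why the destructor should be virtual); the Shape/Circle names are only illustrative and not part of the cheat sheet:

#include <iostream>
#include <memory>

class Shape {
  int id_;                                        // private by default
public:
  Shape(int id) : id_(id) {}                      // constructor
  int id() const { return id_; }
  virtual ~Shape() { std::cout << "~Shape\n"; }   // virtual destructor
};

class Circle : public Shape {
public:
  Circle(int id) : Shape(id) {}
  ~Circle() override { std::cout << "~Circle\n"; }
};

int main() {
  std::unique_ptr<Shape> s = std::make_unique<Circle>(1);
  std::cout << "id = " << s->id() << "\n";
  // ~Circle runs, then ~Shape: deleting through a base pointer is safe
  // only because ~Shape is virtual.
}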


Containers: container_type<int>  (see the short usage example after this list)
 list -> linked list
  front(), back(), begin(), end(), {push/pop}_{front/back}(), insert(), erase()
 deque -> double-ended queue
  [], {push/pop}_{front/back}(), insert(), erase(), front(), back(), begin()
 queue/stack -> adaptors over deque
  push(), pop(), size(), empty()
  front(), back() <- queue
  top() <- stack
 unordered_map -> hashtable
  [], at(), begin(), end(), insert(), erase(), count(), empty(), size()
 vector -> dynamic array
  [], at(), front(), back(), {push/pop}_back(), insert(), erase(), size()
 map -> tree
  [], at(), insert(), erase(), begin(), end(), size(), empty(), find(), count()

 unordered_set -> hashtable, keys only
 set -> tree, keys only
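
A short, self-contained usage sketch of a few of the containers above (standard C++ only, nothing beyond what the list mentions):

#include <iostream>
#include <map>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
  std::vector<int> v = {3, 1, 4};
  v.push_back(1);                            // dynamic array: amortized O(1) append
  std::cout << v.front() << " " << v.back() << " size=" << v.size() << "\n";

  std::unordered_map<std::string, int> h;    // hashtable
  h["apples"] = 3;
  h.insert({"pears", 2});
  if (h.count("apples")) std::cout << h.at("apples") << "\n";

  std::map<std::string, int> m(h.begin(), h.end());  // tree: keys come out sorted
  for (const auto& kv : m)
    std::cout << kv.first << "=" << kv.second << "\n";
}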

Gtef 2.0 – GTK+ Text Editor Framework

Gtef is now hosted on gnome.org, and the 2.0 version has been released alongside GNOME 3.24. So it’s a good time for a new blog post on this new library.

The main goal of Gtef is to ease the development of text editors and IDEs based on GTK+ and GtkSourceView, by providing a higher-level API.

Some background information is available on the wiki.

In this blog post I’ll explain in more detail some aspects of Gtef: why a new library was needed, why it is called a framework, and one feature that I worked on during this cycle (a new file loader). There is more stuff already in the pipeline that may be covered by future blog posts, so stay tuned (and see the roadmap) ;)

Iterative API design + stability guarantees

In Gtef, I want to be able to break the API at any time. Because API design is hard, it needs an iterative process. Sometimes we see possible improvements several years later. But application developers want a stable API. So the solution is simple: bumping the major version each time an API break is desirable, every 6 months if needed! Gtef 1.0 and Gtef 2.0 are parallel-installable, so an application depending on Gtef 1.0 still compiles fine.

Gtef is a small library, so it’s not a problem if there are e.g. 5 different gtef *.so loaded in memory at the same time. For a library like GTK+, releasing a new major version every 6 months would be more problematic for memory consumption and application startup time.

A concrete benefit of being able to break the API at any time: a contributor (David Rabel) wanted to implement code folding. In GtkSourceView there are several old branches for code folding, but nothing was merged because it was incomplete. In Gtef it is not a problem to merge the first iteration of a class. So even if the code folding API is not finished, there has been at least some progress: two classes have been merged in Gtef. The code will be maintained instead of bit-rotting in a branch. Unfortunately David Rabel doesn’t have the time anymore to continue contributing, but in the future if someone wants to implement code folding, the first steps are already done!

Framework

Gtef is the acronym for “GTK+ Text Editor Framework”, but the framework part is not yet finished. The idea is to provide the main application architecture for text editors and IDEs: a GtkApplication on top, containing GtkApplicationWindows, containing a GtkNotebook, containing tabs (GtkGrids), with each tab containing a GtkSourceView widget. If you look at the current Gtef API, there is only one missing subclass: GtkNotebook. So the core of the framework is almost done, and I hope to finish it for GNOME 3.26. I’ll probably make the GtkNotebook part optional (if a text editor prefers only one GtkSourceView per window) or replaceable by something else (e.g. a GtkStack plus GtkStackSwitcher). Let’s see what I’ll come up with.

Of course once the core of the framework is finished, to be more useful it’ll need an implementation for common features: file loading and saving, search and replace, etc. With the framework in place, it’ll be possible to offer a much higher-level API for those features than what is currently available in GtkSourceView.

Also, it’s interesting to note that there is a (somewhat) clear boundary between GtkSourceView and Gtef: the top level object in GtkSourceView is the GtkSourceView widget, while the GtkSourceView widget is at the bottom of the containment hierarchy in Gtef. I said “somewhat” because there is also GtkSourceBuffer and GtefBuffer, and both libraries have other classes for peripheral, self-contained features.

New file loader based on uchardet

The file loading and saving API in GtkSourceView is quite low-level; it contains only the backend part. In case of error, the application needs to display the error (preferably in a GtkInfoBar) and, for some errors, provide actions like manually choosing another character encoding. One goal of Gtef will be to provide a simpler API, taking care of all kinds of errors, showing GtkInfoBars, etc.

But how the backend works has an impact on the GUI. The file loading and saving classes in GtkSourceView come from gedit, and I’m not entirely happy with the gedit UI for file loading and saving. There are several problems; one of them is that GtkFileChooserNative cannot be used with the current gedit UI, so it’s problematic to sandbox the application with Flatpak.

With gedit, when we open a file from a GtkFileChooserDialog, there is a combobox for the encoding: by default the encoding is auto-detected from a configurable list of encodings, and it is possible to manually choose an encoding from that same list. I want to get rid of that combobox, to always auto-detect the encoding (it’s simpler for the user), and to be able to use GtkFileChooserNative (because custom widgets like the combobox cannot be added to a GtkFileChooserNative).

The problem with the file loader implementation in GtkSourceView is that the encoding auto-detection is not that good, hence the need for the combobox in the GtkFileChooserDialog in gedit. But to detect the encoding, there is now a simple-to-use library called uchardet, maintained by Jehan Pagès and based on the Mozilla universal charset detection code. Since the encoding auto-detection is much better with uchardet, it will be possible to remove the combobox and use GtkFileChooserNative!

Jehan started to modify GtkSourceFileLoader (or, more precisely, the internal class GtkSourceBufferOutputStream) to use uchardet, but as a comment in GtkSourceBufferOutputStream explains, that code is a big headache… And the encoding detection is based only on the first 8KB of the file, which results in bugs if for example the first 8KB are only ASCII characters and a strange character appears later. Changing that implementation to take into account the whole content of the file was not easily possible, so instead, I decided to write a new implementation from scratch, in Gtef, called GtefFileLoader. It was done in Gtef and not in GtkSourceView, to not break the GtkSourceView API, and to have the time in Gtef to write the implementation and API incrementally (trying to keep the API as close as possible to the GtkSourceView API).

The new GtefFileLoader takes a simpler approach, doing things sequentially instead of doing everything at the same time (which was the reason for the headache): 1) loading the content in memory, 2) determining the encoding, and 3) converting the content to UTF-8 and inserting the result into the GtkTextBuffer.
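
To make the sequential idea concrete, here is a small standalone sketch of steps 1 and 2, not the actual GtefFileLoader code: it reads the whole file with plain C++ and asks uchardet for the charset. The uchardet calls are the library’s public C API; the helper names, the fallback comment and the command-line wrapper are purely illustrative.

#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <uchardet/uchardet.h>  // header location may vary depending on the distro

// Step 1: load the whole content in memory (GtefFileLoader caps this, 50MB by default).
static std::string load_file(const char *path) {
  std::ifstream in(path, std::ios::binary);
  return std::string((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());
}

// Step 2: determine the encoding from the whole content, not just the first 8KB.
static std::string detect_encoding(const std::string &content) {
  uchardet_t ud = uchardet_new();
  uchardet_handle_data(ud, content.data(), content.size());
  uchardet_data_end(ud);
  std::string charset = uchardet_get_charset(ud);  // empty string if detection failed
  uchardet_delete(ud);
  // Fallback idea (not shown): try each encoding from a configurable list and keep
  // the one producing the fewest invalid sequences.
  return charset;
}

int main(int argc, char **argv) {
  if (argc < 2) return 1;
  const std::string content = load_file(argv[1]);
  std::cout << "Detected charset: " << detect_encoding(content) << "\n";
  // Step 3 (not shown): convert the content to UTF-8 (e.g. with iconv/g_convert())
  // and insert the result into the GtkTextBuffer.
  return 0;
}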

Note that for step 2, determining the encoding, it would have been entirely possible to do it without uchardet, by counting the number of invalid characters and taking the first encoding for which there are no errors (or taking the one with the fewest errors, escaping the invalid characters). And when uchardet is used, that method can serve as a nice fallback. Since all the content is in memory, it should be fast enough even if it is done on the whole content (GtkTextView doesn’t support very big files anyway; 50MB is the default maximum in GtefFileLoader).

GtefFileLoader is usable and works well, but it is still missing quite a few features compared to GtkSourceFileLoader: escaping invalid characters, loading from a GInputStream (e.g. stdin) and gzip decompression support. And I would like to add more features: refuse to load very long lines (they are not well supported by GtkTextView), possibly asking to split them, and detect binary files.

The higher-level API is not yet created, GtefFileLoader is still “just” the backend part.

A Web Browser for Awesome People (Epiphany 3.24)

Are you using a sad web browser that integrates poorly with GNOME or elementary OS? Was your sad browser’s GNOME integration theme broken for most of the past year? Does that make you feel sad? Do you wish you were using an awesome web browser that feels right at home in your chosen desktop instead? If so, Epiphany 3.24 might be right for you. It will make you awesome. (Ask your doctor before switching to a new web browser. Results not guaranteed. May cause severe Internet addiction. Some content unsuitable for minors.)

Epiphany was already awesome before, but it just keeps getting better. Let’s look at some of the most-noticeable new features in Epiphany 3.24.

You Can Load Webpages!

Yeah that’s a great start, right? But seriously: some people had trouble with this before, because it was not at all clear how to get to Epiphany’s address bar. If you were in the know, you knew all you had to do was click on the title box, then the address bar would appear. But if you weren’t in the know, you could be stuck. I made the executive decision that the title box would have to go unless we could find a way to solve the discoverability problem, and wound up following through on removing it. Now the address bar is always there at the top of the screen, just like in all those sad browsers. This is without a doubt our biggest user interface change:

[Screenshot: the address bar is now always visible.] Discover GNOME 3! Discover the address bar!

You Can Set a Homepage!

A very small subset of users have complained that Epiphany did not allow setting a homepage, something we removed several years back since it felt pretty outdated. While I’m confident that not many people want this, there’s not really any good reason not to allow it — it’s not like it’s a huge amount of code to maintain or anything — so you can now set a homepage in the preferences dialog, thanks to some work by Carlos García Campos and myself. Retro! Carlos has even added a home icon to the header bar, which appears when you have a homepage set. I honestly still don’t understand why having a homepage is useful, but I hope this allows a wider audience to enjoy Epiphany.

New Bookmarks Interface

There is now a new star icon in the address bar for bookmarking pages, and another new icon for viewing bookmarks. Iulian Radu gutted our old bookmarks system as part of his Google Summer of Code project last year, replacing our old and seriously-broken bookmarks dialog with something much, much nicer. (He also successfully completed a major refactoring of non-bookmarks code as part of his project. Thanks Iulian!) Take a look:

Manage Tons of Tabs

One of our biggest complaints was that it’s hard to manage a large number of tabs. I spent a few hours throwing together the cheapest-possible solution, and the result is actually pretty decent:

Firefox has an equivalent feature, but Chrome does not. Ours is not perfect, since unfortunately the menu is not scrollable, so it still fails if there is a sufficiently-huge number of tabs. (This is actually surprisingly-difficult to fix while keeping the menu a popover, so I’m considering switching it to a traditional non-popover menu as a workaround. Help welcome.) But it works great up until the point where the popover is too big to fit on your monitor.

Note that the New Tab button has been moved to the right side of the header bar when there is only one tab open, so it has less distance to travel to appear in the tab bar when there are multiple open tabs.

Improved Tracking Protection

I modified our adblocker — which has been enabled by default for years — to subscribe to the EasyPrivacy filters provided by EasyList. You can disable it in preferences if you need to, but I haven’t noticed any problems caused by it, so it’s enabled by default, not just in incognito mode. The goal is to compete with Firefox’s Disconnect feature. How well does it work compared to Disconnect? I have no clue! But EasyPrivacy felt like the natural solution, since we already have an adblocker that supports EasyList filters.

Disclaimer: tracking protection on the Web is probably a losing battle, and you absolutely must use the Tor Browser Bundle if you really need anonymity. (And no, configuring Epiphany to use Tor is not clever, it’s very dumb.) But EasyPrivacy will at least make life harder for trackers.

Insecure Password Form Warning

Recently, Firefox and Chrome have started displaying security warnings on webpages that contain password forms but do not use HTTPS. Now, we do too:

I had a hard time selecting the text to use for the warning. I wanted to convey the near-certainty that the insecure communication is being intercepted, but I wound up using the word “cybercriminal” when it’s probably more likely that your password is being gobbled up by various governments. Feel free to suggest changes for 3.26 in the comments.

New Search Engine Manager

Cedric Le Moigne spent a huge amount of time gutting our smart bookmarks code — which allowed adding custom search engines to the address bar dropdown in a convoluted manner that involved creating a bookmark and manually adding %s into its URL — and replacing it with an actual real search engine manager that’s much nicer than trying to add a search engine via bookmarks. Even better, you no longer have to drop down to the command line in order to change the default search engine to something other than DuckDuckGo, Google, or Bing. Yay!

New Icon

Jakub Steiner and Lapo Calamandrei created a great new high-resolution app icon for Epiphany, which makes its debut in 3.24. Take a look.

WebKitGTK+ 2.16

WebKitGTK+ 2.16 improvements are not really an Epiphany 3.24 feature, since users of older versions of Epiphany can and must upgrade to WebKitGTK+ 2.16 as well, but it contains some big improvements that affect Epiphany. (For example, Žan Doberšek landed an important fix for JavaScript garbage collection that has resulted in massive memory reductions in long-running web processes.) But sometimes WebKit improvements are necessary for implementing new Epiphany features. That was true this cycle more than ever. For example:

  • Carlos García added a new ephemeral mode API to WebKitGTK+, and modified Epiphany to use it in order to make incognito mode much more stable and robust, avoiding corner cases where your browsing data could be leaked on disk.
  • Carlos García also added a new website data API to WebKitGTK+, and modified Epiphany to use it in the clear data dialog and cookies dialog. There are no user-visible changes in the cookies dialog, but the clear data dialog now exposes HTTP disk cache, HTML local storage, WebSQL, IndexedDB, and offline web application cache. In particular, local storage and the two databases can be thought of as “supercookies”: methods of storing arbitrary data on your computer for tracking purposes, which persist even when you clear your cookies. Unfortunately it’s still not possible to protect against this tracking, but at least you can view and delete it all now, which is not possible in Chrome or Firefox.
  • Sergio Villar Senin added new API to WebKitGTK+ to improve form detection, and modified Epiphany to use it so that it can now remember passwords on more websites. There’s still room for improvement here, but it’s a big step forward.
  • I added new API to WebKitGTK+ to improve how we handle giving websites permission to display notifications, and hooked it up in Epiphany. This fixes notification requests appearing inappropriately on websites like https://riot.im/app/.

Notice the pattern? When there’s something we need to do in Epiphany that requires changes in WebKit, we make it happen. This is a lot more work, but it’s better for both Epiphany and WebKit in the long run. Read more about WebKitGTK+ 2.16 on Carlos García’s blog.

Future Features

Unfortunately, a couple exciting Epiphany features we were working on did not make the cut for Epiphany 3.24. The first is Firefox Sync support. This was developed by Gabriel Ivașcu during his Google Summer of Code project last year, and it’s working fairly well, but there are still a few problems. First, our current Firefox Sync code is only able to sync bookmarks, but we really want it to sync much more before releasing the feature: history and open tabs at the least. Also, although it uses Mozilla’s sync server (please thank Mozilla for their quite liberal terms of service allowing this!), it’s not actually compatible with Firefox. You can sync your Epiphany bookmarks between different Epiphany browser instances using your Firefox account, which is great, but users will likely be quite confused that they do not sync with their Firefox bookmarks, which are stored separately. Some things, like preferences, will never be possible to sync with Firefox, but we can surely share bookmarks. Gabriel is currently working to address these issues while participating in the Igalia Coding Experience program, and we’re hopeful that sync support will be ready for prime time in Epiphany 3.26.

Also missing is HTTPS Everywhere support. It’s mostly working properly, thanks to lots of hard work from Daniel Brendle (grindhold) who created the libhttpseverywhere library we use, but it breaks a few websites and is not really robust yet, so we need more time to get this properly integrated into Epiphany. The goal is to make sure outdated HTTPS Everywhere rulesets do not break websites by falling back automatically to use of plain, insecure HTTP when a load fails. This will be much less secure than upstream HTTPS Everywhere, but websites that care about security ought to be redirecting users to HTTPS automatically (and also enabling HSTS). Our use of HTTPS Everywhere will just be to gain a quick layer of protection against passive attackers. Otherwise, we would not be able to enable it by default, since the HTTPS Everywhere rulesets are just not reliable enough. Expect HTTPS Everywhere to land for Epiphany 3.26.

Help Out

Are you a computer programmer? Found something less-than-perfect about Epiphany? We’re open for contributions, and would really appreciate it if you would try to fix that bug or add that feature instead of slinking back to using a less-awesome web browser. One frequently-requested feature is support for extensions. This is probably not going to happen anytime soon — we’d like to support WebExtensions, but that would be a huge effort — but if there’s some extension you miss from a sadder browser, ask if we’d allow building it into Epiphany as a regular feature. Replacements for popular extensions like NoScript and Greasemonkey would certainly be welcome.

Not a computer programmer? You can still help by reporting bugs on GNOME Bugzilla. If you have a crash to report, learn how to generate a good-quality stack trace so that we can try to fix it. I’ve credited many programmers for their work on Epiphany 3.24 up above, but programming work only gets us so far if we don’t know about bugs. I want to give a shout-out here to Hussam Al-Tayeb, who regularly built the latest code over the course of the 3.24 development cycle and found lots of problems for us to fix. This release would be much less awesome if not for his testing.

OK, I’m done typing stuff now. Onwards to 3.26!

March 23, 2017

Applying to Outreachy and GSoC for Fedora and GNOME

March 30th is the deadline to apply to the Outreachy program, and April 3rd is the deadline for the GSoC program. Lately, a group of students in Peru have been very interested in applying, since they heard about the programs at FLOSS events such as LinuxPlaya 2017 and HackCamp2016, among other local events.

The organizations participating in these programs publish all the available information; however, some crucial questions are still in the air. That is what made me write this little post about the programs. So far, these are the Outreachy offers for Fedora and GNOME:

Fedora

Fedora is a Linux-based operating system, which offers versions focused on three possible uses: workstation, server, and cloud.

Internship projects:

  • Improve Bodhi, the web-system that publishes updates for Fedora (looking for applicants as of March 17) Required: Python; Optional: JavaScript, HTML, CSS

GNOME

GNOME is an innovative desktop for GNU/Linux that is design-driven and easy to use.

Internship projects:

  • Improve GTK+ with pre-compiled GtkBuilder XML files in resource data (looking for applicants as of March 12) Required: C, ability to quickly pick up GTK+

  • Photos: Make it perfect — every detail matters (looking for applicants as of March 12) Required: C, ability to quickly pick up GLib and GTK+

  • Make mapbox-gl-native usable from GTK+

    Required: C, GTK+; optional: C++, interest in maps

  • Unit Testing Integration for GNOME Builder (looking for applicants as of March 7) Required: GTK+, and experience with either C, Python, or Vala

  • Documentation Cards for GNOME Builder

    Required: GTK+, and experience with either C, Python, or Vala

First of all, it is essential to know what the programs are. Both the GSoC and Outreachy programs ask you to complete a free and open-source software coding project online over a period of 3 months, with a stipend of $5500. You are not required to complete the tasks before you apply or to travel abroad for the internship, and neither is a Google recruiting program. To apply you must fulfill the requirements, plus the following five points that I also consider important:

1.- Be familiar with the project

In the case of GSoC for Fedora, please see the published ideas wiki; for GSoC with GNOME, an ideas wiki with the list of projects is also posted; and finally, there are the ideas for Outreachy, whose projects are happy to receive contributors with very different programming skills.

I think that at least a year of experience as a user and as a developer is important to mention. For example, if you decide to participate in the GNOME Games project, it is important to prove that you have read or interacted with the code they use. That can be done by posting about it on a blog or through your GitHub account. Fixing newcomer bugs related to the GNOME Games application is also an important plus to consider, and the same goes for the bugs for Fedora.

2.- Read the requirements and provide evidence

Before submitting the proposal, it is important to attach a document proving that you are currently enrolled in a university or institute.
Another important requirement is age: you must be 18 or older, and they also consider your eligibility to work in the country where you live. Tax forms will be requested if you are selected.
You can submit more than one proposal individually, but only one will be accepted. You can also apply to both programs, Outreachy and GSoC; again, only one will be accepted.

3.- Think about your strengths and weaknesses

During the application you are asked to provide evidence of any other projects you have participated in. Maybe it is a coincidence in my case, but all the students I have found interested in Linux IT are also leaders in their communities or universities, and have participated in other interesting projects. Documented proof of those activities is also part of the process; in case you do not have videos or posts, ask an authority (e.g. the dean of your faculty) for a letter that vouches for your commitment to society.

4.- Be in contact with your mentor

When you are about to finish the proposal, you are asked to add a calendar of tasks and deadlines. In this case, it is better to set up the schedule of duties with your mentor's approval. It is suggested that you write an email to introduce yourself, with a tentative calendar attached for achieving what is requested in the project's ideas wiki.

Each project has a published list of mentors; the GSoC GNOME wiki shows them in parentheses beside the name of each project, as does the GSoC wiki for Fedora. The Outreachy mentor list for GNOME and the mentor list for Fedora are also public online.

5.- Be responsible and organize your schedule

Be sure that you will accomplish the tasks you have planned in time.

Some students enroll in more than 6 courses at university, which demands more time than a regular course load; others have a part-time job while they are studying. Those factors must be foreseen before applying. Succeeding in the GSoC program while passing your university courses with great grades is an effort that will open many doors locally and overseas, in both academic and professional fields.

  • You can find on the Web examples of previous years' proposals for the Outreachy program, GNOME GSoC, and Fedora. If you have further questions, please review the official website FAQ, and if you think something is missing here, you are more than welcome to comment with additional tips.

Best wishes for students in Peru and around the world! 🙂


Filed under: FEDORA, GNOME Tagged: 2017, apply GSoC, fedora, FLOSS programs, GNOME, Google Summer of Code, GSoC, GSoC Fedora, GSoC GNOME, Julita Inca, Julita Inca Chiroque, Outreachy, Perú

Celebrating Release Day

Last March, the Toronto area GNOME 3.20 release party happened to fall on release day. This release day saw Nancy and me at the hospital for the birth of our second grandson, Gord and Maggie’s second boy. Name suggestions honouring the occasion (GNOME with any manner of capitalization, Portland, or Three Two Four) were politely rejected in favour of the original plan: Finnegan Walter “Finn” Hill. Welcome to Finn and the new GNOME.

GTK hackfest 2017: D-Bus communication with containers

At the GTK hackfest in London (which accidentally became mostly a Flatpak hackfest) I've mainly been looking into how to make D-Bus work better for app container technologies like Flatpak and Snap.

The initial motivating use cases are:

  • Portals: Portal authors need to be able to identify whether the portal is being contacted by an uncontained process (running with the user's full privileges), or whether it is being contacted by a contained process (in a container created by Flatpak or Snap).

  • dconf: Currently, a contained app either has full read/write access to dconf, or no access. It should have read/write access to its own subtree of dconf configuration space, and no access to the rest.

At the moment, Flatpak runs a D-Bus proxy for each app instance that has access to D-Bus, connects to the appropriate bus on the app's behalf, and passes messages through. That proxy is in a container similar to the actual app instance, but not actually the same container; it is trusted to not pass messages through that it shouldn't pass through. The app-identification mechanism works in practice, but is Flatpak-specific, and has a known race condition due to process ID reuse and limitations in the metadata that the Linux kernel maintains for AF_UNIX sockets. In practice the use of X11 rather than Wayland in current systems is a much larger loophole in the container than this race condition, but we want to do better in future.

Meanwhile, Snap does its sandboxing with AppArmor, on kernels where it is enabled both at compile-time (Ubuntu, openSUSE, Debian, Debian derivatives like Tails) and at runtime (Ubuntu, openSUSE and Tails, but not Debian by default). Ubuntu's kernel has extra AppArmor features that haven't yet gone upstream, some of which provide reliable app identification via LSM labels, which dbus-daemon can learn by querying its AF_UNIX socket. However, other kernels like the ones in openSUSE and Debian don't have those. The access-control (AppArmor mediation) is implemented in upstream dbus-daemon, but again doesn't work portably, and is not sufficiently fine-grained or flexible to do some of the things we'll likely want to do, particularly in dconf.

After a lot of discussion with dconf maintainer Allison Lortie and Flatpak maintainer Alexander Larsson, I think I have a plan for fixing this.

This is all subject to change: see fd.o #100344 for the latest ideas.

Identity model

Each user (uid) has some uncontained processes, plus 0 or more containers.

The uncontained processes include dbus-daemon itself, desktop environment components such as gnome-session and gnome-shell, the container managers like Flatpak and Snap, and so on. They have the user's full privileges, and in particular they are allowed to do privileged things on the user's session bus (like running dbus-monitor), and act with the user's full privileges on the system bus. In generic information security jargon, they are the trusted computing base; in AppArmor jargon, they are unconfined.

The containers are Flatpak apps, or Snap apps, or other app-container technologies like Firejail and AppImage (if they adopt this mechanism, which I hope they will), or even a mixture (different app-container technologies can coexist on a single system). They are containers (or container instances) and not "apps", because in principle, you could install com.example.MyApp 1.0, run it, and while it's still running, upgrade to com.example.MyApp 2.0 and run that; you'd have two containers for the same app, perhaps with different permissions.

Each container has a container type, which is a reversed DNS name like org.flatpak or io.snapcraft representing the container technology, and an app identifier, an arbitrary non-empty string whose meaning is defined by the container technology. For Flatpak, that string would be another reversed DNS name like com.example.MyGreatApp; for Snap, as far as I can tell it would look like example-my-great-app.

The container technology can also put arbitrary metadata on the D-Bus representation of a container, again defined and namespaced by the container technology. For instance, Flatpak would use some serialization of the same fields that go in the Flatpak metadata file at the moment.

Finally, the container has an opaque container identifier identifying a particular container instance. For example, launching com.example.MyApp twice (maybe different versions or with different command-line options to flatpak run) might result in two containers with different privileges, so they need to have different container identifiers.

Contained server sockets

App-container managers like Flatpak and Snap would create an AF_UNIX socket inside the container, bind() it to an address that will be made available to the contained processes, and listen(), but not accept() any new connections. Instead, they would fd-pass the new socket to the dbus-daemon by calling a new method, and the dbus-daemon would proceed to accept() connections after the app-container manager has signalled that it has called both bind() and listen(). (See fd.o #100344 for full details.)

Processes inside the container must not be allowed to contact the AF_UNIX socket used by the wider, uncontained system - if they could, the dbus-daemon wouldn't be able to distinguish between them and uncontained processes and we'd be back where we started. Instead, they should have the new socket bind-mounted into their container's XDG_RUNTIME_DIR and connect to that, or have the new socket set as their DBUS_SESSION_BUS_ADDRESS and be prevented from connecting to the uncontained socket in some other way. Those familiar with the kdbus proposals a while ago might recognise this as being quite similar to kdbus' concept of endpoints, and I'm considering reusing that name.
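
To make the flow concrete, here is a rough sketch of the container manager's side of this, using nothing more than plain POSIX socket calls; the way the fd and the container metadata are handed to the dbus-daemon is still being designed (see fd.o #100344), so that part is only indicated by comments.

// Sketch of what a container manager (Flatpak, Snap, ...) would do: create the
// socket, bind() and listen(), but never accept() -- the dbus-daemon does that
// after receiving the fd. The D-Bus method used to hand over the fd is not
// finalized yet (fd.o #100344), so it only appears as a comment below.
#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int create_contained_server_socket(const char *path_inside_runtime_dir) {
  int fd = socket(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0);
  if (fd < 0) { perror("socket"); return -1; }

  struct sockaddr_un addr;
  std::memset(&addr, 0, sizeof addr);
  addr.sun_family = AF_UNIX;
  std::strncpy(addr.sun_path, path_inside_runtime_dir, sizeof addr.sun_path - 1);

  if (bind(fd, reinterpret_cast<struct sockaddr *>(&addr), sizeof addr) < 0 ||
      listen(fd, SOMAXCONN) < 0) {
    perror("bind/listen");
    close(fd);
    return -1;
  }

  // No accept() here. The fd, plus the container's identity and metadata, would
  // now be fd-passed to the dbus-daemon via the proposed new method, and the
  // daemon starts accept()ing once told that bind() and listen() have happened.
  // This socket is what gets bind-mounted into the container's XDG_RUNTIME_DIR.
  return fd;
}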

Along with the socket, the container manager would pass in the container's identity and metadata, and the method would return a unique, opaque identifier for this particular container instance. The basic fields (container technology, technology-specific app ID, container ID) should probably be added to the result of GetConnectionCredentials(), and there should be a new API call to get all of those plus the arbitrary technology-specific metadata.

When a process from a container connects to the contained server socket, every message that it sends should also have the container instance ID in a new header field. This is OK even though dbus-daemon does not (in general) forbid sender-specified future header fields, because any dbus-daemon that supported this new feature would guarantee to set that header field correctly, the existing Flatpak D-Bus proxy already filters out unknown header fields, and adding this header field is only ever a reduction in privilege.

The reasoning for using the sender's container instance ID (as opposed to the sender's unique name) is for services like dconf to be able to treat multiple unique bus names as belonging to the same equivalence class of contained processes: instead of having to look up the container metadata once per unique name, dconf can look it up once per container instance the first time it sees a new identifier in a header field. For the second and subsequent unique names in the container, dconf can know that the container metadata and permissions are identical to the one it already saw.

Access control

In principle, we could have the new identification feature without adding any new access control, by keeping Flatpak's proxies. However, in the short term that would mean we'd be adding new API to set up a socket for a container without any access control, and having to keep the proxies anyway, which doesn't seem great; in the longer term, I think we'd find ourselves adding a second new API to set up a socket for a container with new access control. So we might as well bite the bullet and go for the version with access control immediately.

In principle, we could also avoid the need for new access control by ensuring that each service that will serve contained clients does its own. However, that makes it really hard to send broadcasts and not have them unintentionally leak information to contained clients - we would need to do something more like kdbus' approach to multicast, where services know who has subscribed to their multicast signals, and that is just not how dbus-daemon works at the moment. If we're going to have access control for broadcasts, it might as well also cover unicast.

The plan is that messages from containers to the outside world will be mediated by a new access control mechanism, in parallel with dbus-daemon's current support for firewall-style rules in the XML bus configuration, AppArmor mediation, and SELinux mediation. A message would only be allowed through if the XML configuration, the new container access control mechanism, and the LSM (if any) all agree it should be allowed.

By default, processes in a container can send broadcast signals, and send method calls and unicast signals to other processes in the same container. They can also receive method calls from outside the container (so that interfaces like org.freedesktop.Application can work), and send exactly one reply to each of those method calls. They cannot own bus names, communicate with other containers, or send file descriptors (which reduces the scope for denial of service).

Obviously, that's not going to be enough for a lot of contained apps, so we need a way to add more access. I'm intending this to be purely additive (start by denying everything except what is always allowed, then add new rules), not a mixture of adding and removing access like the current XML policy language.

There are two ways we've identified for rules to be added:

  • The container manager can pass a list of rules into the dbus-daemon at the time it attaches the contained server socket, and they'll be allowed. The obvious example is that an org.freedesktop.Application needs to be allowed to own its own bus name. Flatpak apps' implicit permission to talk to portals, and Flatpak metadata like org.gnome.SessionManager=talk, could also be added this way.

  • System or session services that are specifically designed to be used by untrusted clients, like the version of dconf that Allison is working on, could opt-in to having contained apps allowed to talk to them (effectively making them a generalization of Flatpak portals). The simplest such request, for something like a portal, is "allow connections from any container to contact this service"; but for dconf, we want to go a bit finer-grained, with all containers allowed to contact a single well-known rendezvous object path, and each container allowed to contact an additional object path subtree that is allocated by dconf on-demand for that app.

Initially, many contained apps would work in the first way (and in particular sockets=session-bus would add a rule that allows almost everything), while over time we'll probably want to head towards recommending more use of the second.

Related topics

Access control on the system bus

We talked about the possibility of using a very similar ruleset to control access to the system bus, as an alternative to the XML rules found in /etc/dbus-1/system.d and /usr/share/dbus-1/system.d. We didn't really come to a conclusion here.

Allison had the useful insight that the XML rules are acting like a firewall: they're something that is placed in front of potentially-broken services, and not part of the services themselves (which, as with firewalls like ufw, makes it seem rather odd when the services themselves install rules). D-Bus system services already have total control over what requests they will accept from D-Bus peers, and if they rely on the XML rules to mediate that access, they're essentially rejecting that responsibility and hoping the dbus-daemon will protect them. The D-Bus maintainers would much prefer it if system services took responsibility for their own access control (with or without using polkit), because fundamentally the system service is always going to understand its domain and its intended security model better than the dbus-daemon can.

Analogously, when a network service listens on all addresses and accepts requests from elsewhere on the LAN, we sometimes work around that by protecting it with a firewall, but the optimal resolution is to get that network service fixed to do proper authentication and access control instead.

For system services, we continue to recommend essentially this "firewall" configuration, filling in the ${} variables as appropriate:

<busconfig>
    <policy user="${the daemon uid under which the service runs}">
        <allow own="${the service's bus name}"/>
    </policy>
    <policy context="default">
        <allow send_destination="${the service's bus name}"/>
    </policy>
</busconfig>

We discussed the possibility of moving towards a model where the daemon uid to be allowed is written in the .service file, together with an opt-in to "modern D-Bus access control" that makes the "firewall" unnecessary; after some flag day when all significant system services follow that pattern, dbus-daemon would even have the option of no longer applying the "firewall" (moving to an allow-by-default model) and just refusing to activate system services that have not opted in to being safe to use without it. However, the "firewall" also protects system bus clients, and services like Avahi that are not bus-activatable, against unintended access, which is harder to solve via that approach; so this is going to take more thought.

For system services' clients that follow the "agent" pattern (BlueZ, polkit, NetworkManager, Geoclue), the correct "firewall" configuration is more complicated. At some point I'll try to write up a best-practice for these.

New header fields for the system bus

At the moment, it's harder than it needs to be to provide non-trivial access control on the system bus, because on receiving a method call, a service has to remember what was in the method call, then call GetConnectionCredentials() to find out who sent it, then only process the actual request when it has the information necessary to do access control.
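
For illustration, this is roughly what that round-trip looks like for a GDBus-based system service today. GetConnectionCredentials() on org.freedesktop.DBus is a real bus method; the handler itself is a hypothetical sketch (it would be registered in a GDBusInterfaceVTable), not code from any particular service.

// Illustration of the extra round-trip described above, compiled here as C++
// against the GDBus C API.
#include <gio/gio.h>

static void
on_method_call(GDBusConnection *connection,
               const gchar *sender,          // unique name of the caller, e.g. ":1.42"
               const gchar *object_path,
               const gchar *interface_name,
               const gchar *method_name,
               GVariant *parameters,
               GDBusMethodInvocation *invocation,
               gpointer user_data)
{
  GError *error = nullptr;

  // Extra round-trip to the bus before the actual request can be processed.
  GVariant *reply = g_dbus_connection_call_sync(
      connection,
      "org.freedesktop.DBus", "/org/freedesktop/DBus", "org.freedesktop.DBus",
      "GetConnectionCredentials",
      g_variant_new("(s)", sender),
      G_VARIANT_TYPE("(a{sv})"),
      G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &error);

  if (reply == nullptr) {
    g_dbus_method_invocation_return_gerror(invocation, error);
    g_error_free(error);
    return;
  }

  GVariant *credentials = g_variant_get_child_value(reply, 0);  // the a{sv} dict
  guint32 uid = 0;
  if (g_variant_lookup(credentials, "UnixUserID", "u", &uid)) {
    // ... decide whether this uid (or "LinuxSecurityLabel", "ProcessID", ...) is
    // allowed to make the call that was remembered above, then handle it ...
  }

  g_variant_unref(credentials);
  g_variant_unref(reply);
  g_dbus_method_invocation_return_value(invocation, nullptr);
}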

Allison and I had hoped to resolve this by adding new D-Bus message header fields with the user ID, the LSM label, and other interesting facts for access control. These could be "opt-in" to avoid increasing message sizes for no reason: in particular, it is not typically useful for session services to receive the user ID, because only one user ID is allowed to connect to the session bus anyway.

Unfortunately, the dbus-daemon currently lets unknown fields through without modification. With hindsight this seems an unwise design choice, because header fields are a finite resource (there are 255 possible header fields) and are defined by the D-Bus Specification. The only field that can currently be trusted is the sender's unique name, because the dbus-daemon sets that field, overwriting the value in the original message (if any).

To make it safe to rely on the new fields, we would have to make the dbus-daemon filter out all unknown header fields, and introduce a mechanism for the service to check (during connection to the bus) whether the dbus-daemon is sufficiently new that it does so. If connected to an older dbus-daemon, the service would not be able to rely on the new fields being true, so it would have to ignore the new fields and treat them as unset. The specification is sufficiently vague that making new dbus-daemons filter out unknown header fields is a valid change (it just says that "Header fields with an unknown or unexpected field code must be ignored", without specifying who must ignore them, so having the dbus-daemon delete those fields seems spec-compliant).

This all seemed fine when we discussed it in person; but GDBus already has accessors for arbitrary header fields by numeric ID, and I'm concerned that this might mean it's too easy for a system service to be accidentally insecure: It would be natural (but wrong!) for an implementor to assume that if g_dbus_message_get_header (message, G_DBUS_MESSAGE_HEADER_FIELD_SENDER_UID) returned non-NULL, then that was guaranteed to be the correct, valid sender uid. As a result, fd.o #100317 might have to be abandoned. I think more thought is needed on that one.

Unrelated topics

As happens at any good meeting, we took the opportunity of high-bandwidth discussion to cover many useful things and several useless ones. Other discussions that I got into during the hackfest included, in no particular order:

  • .desktop file categories and how to adapt them for AppStream, perhaps involving using the .desktop vocabulary but relaxing some of the hierarchy restrictions so they behave more like "tags"
  • how to build a recommended/reference "app store" around Flatpak, aiming to host upstream-supported builds of major projects like LibreOffice
  • how Endless do their content-presenting and content-consuming apps in GTK, with a lot of "tile"-based UIs with automatic resizing and reflowing (similar to responsive design), and the applicability of similar widgets to GNOME and upstream GTK
  • whether and how to switch GNOME developer documentation to Hotdoc
  • whether pies, fish and chips or scotch eggs were the most British lunch available from Borough Market
  • the distinction between stout, mild and porter

More notes are available from the GNOME wiki.

Acknowledgements

The GTK hackfest was organised by GNOME and hosted by Red Hat and Endless. My attendance was sponsored by Collabora. Thanks to all the sponsors and organisers, and the developers and organisations who attended.

GNOME ED Update – Week 12

New release!

In case you haven’t seen it yet, there’s a new GNOME release – 3.24! The release is the result of 6 months’ work by the GNOME community.

The new release is a major step forward for us, with new features and improvements, and some exciting developments in how we build applications. You can read more about it in the announcement and release notes.

As always, this release was made possible partially thanks to the Friends of GNOME project. In particular, it helped us provide a Core apps hackfest in Berlin last November, which had a direct impact on this release.

Conferences

GTK+ hackfest

I’ve just come back from the GTK+ hackfest in London – thanks to Red Hat and Endless for sponsoring the venues! It was great to meet a load of people who are involved with GNOME and GTK, and some great discussions were had about Flatpak and the creation of a “FlatHub” – somewhere that people can get all their latest Flatpaks from.

LibrePlanet

As I’m writing this, I’m sitting on a train going to Heathrow, for my flight to LibrePlanet 2017! If you’re going to be there, come and say hi. I’ve also got a load of new stickers that have been produced, so these can brighten up your laptop.

March 22, 2017

Another media codec on the way!

One of the things we are working hard on currently is ensuring you have the codecs you need available in Fedora Workstation. Our main avenue for doing this is looking at the various codecs out there and trying to determine if the intellectual property situation allows us to start shipping all or parts of the technologies involved. This was how we were able to start shipping mp3 playback support for Fedora Workstation 25. Of course, in cases where this is obviously not possible, we have things like the agreement with our friends at Cisco allowing us to offer H264 support using their licensed codec, which is how OpenH264 started being available in Fedora Workstation 24.

As you might imagine clearing a codec for shipping is a slow and labour intensive process with lawyers and engineers spending a lot of time reviewing stuff to figure out what can be shipped when and how. I am hoping to have more announcements like this coming out during the course of the year.

So I am very happy to announce today that we are now working on packaging the codec known as AC3 (also known as A52) for Fedora Workstation 26. The name AC3 might not be very well known to you, but AC3 is part of a set of technologies developed by Dolby and marketed as Dolby Surround. This means that if you have video files with surround sound audio it is most likely something we can playback with an AC3 decoder. AC3/A52 is also used for surround sound TV broadcasts in the US and it is the audio format used by some Sony and Panasonic video cameras.

We will be offering AC3 playback in Fedora Workstation 26 and we are looking into options for offering an encoder. To be clear, there is nothing stopping us from offering an encoder apart from finding an implementation that can be packaged and shipped with Fedora with a reasonable amount of effort. The most well known open source implementation we know about is the one found in ffmpeg/libav, but extracting a single codec to ship from ffmpeg or libav is a lot of work and not something we currently have the resources to do. We found another implementation called aften, but that seems to have been unmaintained for years; still, we will look at it to see if it could be used.
But if you are interested in AC3 encoding support, we would love it if someone started working on a standalone AC3 encoder we could ship, be that by picking up maintainership of Aften, splitting out AC3 encoding from libav or ffmpeg, or writing something new.

If you want to learn more about AC3 the best place to look is probably the Wikipedia page for Dolby Digital or the a52 ATSC audio standard document for more of a technical deep dive.

your business in munich podcast

a few days ago i was invited to be a guest on the your business in munich podcast. it was a rather freewheeling but very enjoyable conversation on topics like my consultancy, interaction with humans & computers, open source and ux. i was especially impressed with the detailed preparation and unerring questions by my host, ryan l. sink.

feel free to listen to our conversation on the your business in munich website or read the transcript below.


Welcome to Your Business In Munich. The podcast where entrepreneurs throughout the Munich area share their stories. And now your host, an experienced coach and entrepreneur from the United States, Ryan L. Sink.

Ryan L. Sink: Welcome to Your Business In Munich. Today I'm here with Daniel Siegel, he is a digital product architect in Munich, Germany, but originally from Merano, Italy, which I have learned is a small town in northern Italy. Could you tell us a little more about where you're from? Give us a picture of what it's like to be in Merano, Italy.

Daniel G. Siegel: Merano's actually a really small city close to the northern border of Italy, to Austria and Switzerland. It's located in between the Alps, so in winter we have a lot of snow, in summer we have a lot of sunshine. It has kind of a Mediterranean feeling to it. We have some palm trees in there as well. It's quite funny, especially maybe in April, May where you can see the palm trees there and you still have the mountain with some snow on top.

Ryan: Palm trees in the Alps.

Daniel: Yeah, probably one of the only places you where you can find that setting.

Ryan: I can't even picture that.

Daniel: It's kind of a melange between Italian, a bit Swiss, Austrian and German stuff. If you go back into the history, you can see these people meeting there. It was kind of always a border region, shall we say.

Ryan: Did you grow up with these other cultures as well or languages?

Daniel: Yeah. You normally grow up speaking Italian and German. German is more like a dialect so it's not the proper high German but you learn that as well in school. You normally learn the three languages German, Italian and English of course. There are a few smaller valleys where you have some additional languages, kind of like mixtures between Italian, Latin and German or something like that.

Ryan: Is this the Romance?

Daniel: Exactly.

Ryan: Okay. Do you speak this as well?

Daniel: No, not at all. I'm from a different valley. [laughs]

Ryan: So it depends on which valley you're from.

Daniel: Exactly.

Ryan: Which valley you're in between.

Daniel: It's actually quite hard to understand people because every valley has a different dialect. Of course if you go up there, you can tell who's from where but if you're an outsider you probably have a hard time to learn each individual dialect.

Ryan: I bet. You can tell by the type of palm trees they have growing. [laughs]

Daniel: No, those are only in Merano. [laughs]

Ryan: Growing up in Merano, how did you first get interested in digital products?

Daniel: Well I think I was always fascinated by the computer. I can't remember exactly but we had computers back home quite early. My father was building up his own business with computers. He did some consulting, bookkeeping with computers back in the 70s, 80s, which were the first real computers you could bring home and work on them. Well, sometimes he brought his computers back home and we could actually play the early games. I remember friends coming over and we were playing a game together in front of the computer, and that kind of thing. I think there's where my fascination comes from.

Ryan: Even as a kid do you think you were already looking at the usability of it and then thinking of these things?

Daniel: I think I always broke stuff. [laughs] That's where I'm coming from because I was kind of fascinated to see what's behind the scene and how is it working. I tried to get a glimpse behind the scenes. Of course, breaking all the stuff when I was young.

Ryan: I'm sure your dad was happy with...

Daniel: Yeah he was happy all the time. [laughs] But I think that lead me to programming at some point. Actually tried to get into deep stuff and into open source as well. I started to create my first own programs and it slowly moved with that, actually.

Ryan: What kinds of programs have you created yourself?

Daniel: Actually a few. I think the first open source program I put out there was a small driver for a modem which was working with USB I think and there was no driver for Linux but actually I had to get on the internet. It was a small thing and we had a really slow connection. Nevertheless I was working on that thing and actually created the first driver for that one.

Ryan: Cool.

Daniel: I put it out there and lots of people were sending me emails and so on. That was actually the first thing I did. Then later on I think a more prominent project was a program for the GNOME Desktop which is one of the most used desktop interfaces for Linux. If you ever heard of Ubuntu for example, they actually used GNOME and a lot of GNOME infrastructure. There I created a small webcam tool similar to PhotoBooth on Mac where you have some filters and you can apply them to your face, and that kind of thing. That was actually a lot of fun.

Ryan: It sounds like it. Obviously there's not a lot of money in open source if you're giving it all away.

Daniel: Actually I was paid by Google at some point for that.

Ryan: Yeah? Great.

Daniel: That was part of the Summer of Code program back in the days. I think I got a whopping $5000 dollars for like three months working in the summer, which of course, being a student, was just like, I've got my whole year settled just by that.

Ryan: Just the opportunity alone, doing it for free or paying them to do it.

Daniel: I also was invited to lots of conferences and so on, it was a win-win experience for me because I learned a lot of stuff, like how to work remotely with people. I learned a lot working with designers. I learned a lot how to create open source programs and how to lead them and how to do marketing for your open source projects as well. I think that was one of my biggest experiences I could actually use when doing actual work later on.

Ryan: When did you start working for yourself?

Daniel: That was actually two years ago. I always did some smaller projects next to what I was doing at that time or point. But for two years I have a small boutique consultancy here in Munich, where I help my clients create effective websites that are able to tell their story perfectly and convert visitors to customers. It's actually a bit more than websites because I always describe myself as bringing sales strategies and sales processes to the digital world. That's also where the term digital product architect comes from because a website is a product as well, as a newsletter, as well as your actual product or service you‘re selling. Especially in these days, you have to bridge the gap between the online-offline world and you have to think about how these things work together and how you can actually reach out to your customers and talk to them and stay in contact and so on.

Ryan: What were you doing before you started doing this with the boutique consultancy?

Daniel: I think my first real job, you can say, ignoring all the stuff I did as a student was creating my own startup back in 2007, which is now the world's biggest platform for young fashion designers.

Ryan: Cool.

Daniel: It's called Not Just A Label and I co-founded it with my brother and two friends of ours back in 2007. I was working there as a CTO and building that from the ground up.

Ryan: What's going on with that platform at the moment?

Daniel: It's still the biggest one of the world. [laughs] I did a small exit in 2012-2013. Since then I'm consulting them, we have semi-regular calls and I help them with some issues they're struggling with and so on.

Ryan: Nice, and still with the website as well?

Daniel: Yeah, sometimes. Of course they need a lot of strategy help as well, especially because I built it from the ground up, I know a lot of things which went wrong which the current people don't have an idea about so I can usually help them out with some things. They actually expanded to L.A. last year.

Ryan: It's still based in Italy?

Daniel: No, actually it was based in London, we founded it in London.

Ryan: In London. Okay.

Daniel: Lots of traveling back and forth at that time. Now they have an office in London and L.A. as well.

Ryan: What made you decide to exit because growing and..?

Daniel: Well, there were a few things in there. I'm really interested in the interaction between humans and computers. That's my main vision and my main goal. The Fashion world was just something I saw and I could help the people, but it was not my main interest. I was kind of bored at some point because we scaled up the platform quite big – we had like, 20,000 designers and a lot more daily and weekly visitors. It was a struggle to scale it up but we had a really well laid out structure and strategy. So basically by doing that, I unemployed myself. [laughs] I didn't have much stuff to do anymore at that point.

Ryan: So, it's your fault. [laughs]

Daniel: Yeah, probably. Then around the same time a friend of mine came up to me and told me – he was working for Accenture at that time – that they were building up a small team in Germany. They already had a team worldwide, called Emerging Technology Innovation, and they wanted to build up the same team in Germany as well. The team focused on bringing emerging technologies to their clients. Basically each member of that team had a few technologies they were playing around with, doing small prototypes, lighthouse projects and so on for the clients. And then we could actually see if something was going to get big or not, and if it would get big, then we would integrate it into Accenture and build it up inside there. There I did a lot of HTML5 stuff, JavaScript stuff, and of course also that interaction between humans and websites, humans and web applications, basically a lot of user experience and so on.

Ryan: So, this job is what brought you to Munich?

Daniel: No, actually I was – the first time I came to Munich was in 2006, for studying. I studied computer science and psychology. You see, the same things are repeating themselves in my life. [laughs] Of course you can't see it at that point, but looking back, you can always see that computers and humans – that these are two topics which keep repeating themselves in my life. And then of course I scaled my studies down while working on my startup, and then I did some exams, and went back to the startup. So it was a hassle back and forth until at some point I was able to finish my studies as well.

Ryan: And with connecting these two worlds, you know the psychology, the human side, and the computer technology side, what do you do to connect these two worlds?

Daniel: I often tell people I do websites, but that's not the entire truth, because of course the end result is most often a website and a let's say marketing funnel, be it a newsletter, email course, or something like that. But in the end, that's not what my clients actually need. My clients need a way to be able to talk to new clients, to new leads, to be in contact with them, and to actually help them to solve their own problems. And that's what I'm enabling them to do. I see my skills as a tool set, and I pick an individual tool to solve exactly that problem. You know, there's a saying that, "You don't need a drilling machine, you need like a picture on the wall." And that's what I actually do with my clients.

I see this trend where people just throw technology at the wall and see what sticks: you know, you need a website, and you need to do SEO, then you need to do Facebook Ads, LinkedIn Ads, and social media ads, and then you need to buy traffic from there, and then you have to be active on Twitter, and then you have to have a newsletter, and then you have to talk, and then you have to have a YouTube channel, and then you have to... and so on, and so on. But by doing everything you're actually doing nothing. They struggle, and then they come to me and tell me, "Uh, it's so hard to find clients online." And I'm always telling them, "Yeah, okay, if you're doing everything, how do you want to reach your clients? How do you want to have a meaningful conversation with a person?" You're not able to.

So, niche down, focus on one or two ways to communicate with your clients, and do it really, really well. People are really grateful for that; if you actually take the time to talk with them, that's a really good thing. I mean, we were talking before about calling people on their birthday, and how surprised some people are just because you show them that you take the time yourself to talk to them. And that's something you can actually integrate into your business, you know. For me, automation for example is a really important thing. I see the website and everything you do online as a digital version of yourself, which works while you are sleeping. So, for example, you're in a meeting with clients, you go back home, go to bed, and the client might still be thinking about the meeting, goes on your website, looks at stuff you do, stuff you did, and then makes a decision on whether to go ahead or not.

In that sense, the website has to really talk to your client as if it were you, right? And that's something which is actually missing. And as for automation, I don't see it as a thing which makes you obsolete, it's more the other way round. It should enable you to scale up more efficiently, but still have a meaningful conversation with each of your contacts. By using the whole range of tools today, for example – let's talk about a newsletter. You can actually get really, really specific about who you are talking to.

For example, if you go to a conference and you come back with, let's say, 10 business cards, just take the time to write some notes on them about what you talked about, what that person was interested in, and so on. Put it in there, and then with that data you can actually craft meaningful conversations. And even if you send it out to all 10 people, just by adding one or two lines like, "Hey, it was great to meet you. I especially liked the discussion about online marketing," or whatever you were talking about – that alone is enough to get some gratefulness out of a person.

Ryan: To show that you were listening there right?

Daniel: Yeah, exactly. And that's something which is missing.

Ryan: Yeah, this human touch, this personal touch on everything you do. Not just the standard, automated emails. I mean, even if they're clever, I've seen good examples. You've probably seen the story of Derek Sivers' first company, with the customer story he told about when they were getting everything ready and sending them their CDs. Even that, it's clever, but it's not personal, right? So you just need these little things that connect you to that other person. Could you tell us about the business card example as well? I thought that was great, what you were saying about personalized – you don't have to share if you don't want to.

Daniel: No, I gladly will, because my old business card had a really cool design on the back side. I had my email address on there, and actually in my email address you can find my website and my Twitter handle. And so I made a graphic layout to actually show which part is which, and so on. It looked really great, and people were telling me, "It looks awesome," and so on. But if I look at results, nobody was following me on Twitter just because of my card. Nobody was visiting my website just because of my card. No one was writing me an email just because of my card. Then again I was thinking, that's a thing I do with my clients, bridging the gap between offline and online, so what could I do to create that interaction from a meeting to maybe talking to them about their problems or something like that.

What I added to my business cards was a custom URL for each business card, and if I give you one, you can visit that URL. I obviously will tell you that I prepared a small gift for you, and you can have a look there. And that's a custom crafted landing page, which actually greets you, and tells you like, "Hey it was nice to meet you", tells you some other stuff, what I'm doing, where you can find my stuff and so on. But it also gives you like a small gift, which is individualized to you, to that person.

Ryan: So do you also have to tell them at the same time, "Hey please wait three days, so that I can get the page ready before you use this URL?" [laughs]

Daniel: Well, you can start with the easiest example. So, for example, just take your URL/hello. You don't have to publicize that page, but just craft it to say, "Hey, it was nice to meet you." And maybe say something like, "Not many people take action, but you are one of the few, so this is why I want to give you something in return." And then announce it while talking to that person. That's already enough. Of course you can get much fancier with that, but just doing that little thing, just that 1% more than other people do, will propel you ahead of, let's say, 100% of other people, because nobody's doing it. If you go to an event, people just throw out their business cards, kind of like playing "I have to throw away 100 business cards today, let's see who's faster" or something like that. [laughs]

Ryan: Really? Yeah, for some people it would almost be the same to just put them all in the trash can on the way out the door that day.

Daniel: I mean of course everybody's busy today, so if you go home, you have like 10 business cards, what should I do with them? Like okay, I'll probably add them to LinkedIn, or XING, or whatever, and that's it. Then two days later you forget about them, and so on. But what if you could have a process starting there. Let's say, we meet at a meetup, and you tell me, "Yeah I have problems with my website. I don't get enough customers." And I tell you, "Look, I wrote an email course for that, or an ebook, or something like that, exactly about your problem. If you want, just give me your business card, I'll add you to it, and I will send it to you automatically." You probably would say "Yes", right?

Ryan: Yeah, sure. [laughs]

Daniel: Exactly. So, it's as easy as that, right? Just give your people something. Something that can help them.

Ryan: Yeah, doing something extra, or even creative. Like I have to say again, I love your contact page. [laughs] I think it's the only contact page on any website ever where I actually laughed out loud while I was reading it, because you have all these great email addresses: one for buying you a beer, one for, I think, donating money, or giving you April Fools' tips, or knowing your shoe size, all these great things. But another question on that: have people actually used these? Besides maybe your close friends screwing around with you, have people actually used these and written you April Fools' joke ideas, or asked for your shoe size?

Daniel: Actually, a few people did.

Ryan: Yeah? [laughs]

Daniel: Yeah, so I think I had a movie request email address at some point. Some people were writing there, beer and coffee is used often, especially by people who – you know those people where you set up a date, and it fails because of some reason and they reconnect a few months later on? Those are the people who will use the beer or coffee email address. [laughs]

Ryan: As a way to say, "Sorry for the first meeting not working out, let me get you a beer."

Daniel: Actually I had a few clients who contacted me with the "Give me money" address. I thought this was really funny. [laughs]

Ryan: That's really great. Yeah, you should hook up your invoicing systems...

Daniel: Oh yeah definitely.

Ryan: Money@... [laughs]

Daniel: Well you see, it's these small things you can add, to make interactions more... everything's so boring, kind of strict in some way, and I enjoy these little things. Just like having an Easter egg somewhere. I don't know if you're familiar with that?

Ryan: Yeah.

Daniel: It's basically something small, I don't know, how would you explain an Easter egg?

Ryan: Yeah, something hidden, you know, like when you're playing a game. It's something that only you could know, or the only way to get it is really either tripping over it, or maybe actually hearing from someone that it's there. Like the invisible blocks from the old Mario game. I remember a friend told me, so I knew where the block was, I jumped at the right place, and there was my Easter egg, sort of.

Daniel: So, actually when I was working on that webcam tool, it was called Cheese, we had several Easter eggs in there, just to make it fun. Actually, it was so funny because we didn't tell anybody, but then someone probably got an email about people who discovered it. And I remember one Easter egg was, if you took a photo, it made a camera sound, like a click, or something like that. And if you pressed a certain combination, it would change that sound to – we actually recorded a few voices making some comments about the people. Like there was somebody laughing at the person, or "Oh, you look so sweet." Stuff like that. People were freaking out over that.

Ryan: And they just happened upon these combinations?

Daniel: Yeah, it was not too hard to figure out, but you actually had to find it. We had several things like that in there. Coming back to your other question about the human side in computers, I think we lost a lot of that, because if you go back to the '60s and '70s, the computer was a really new tool, and people were still discovering what you could actually do with a computer. A lot of people were coming from biology, or literature, or physics, or mathematics and so on. They always thought, "Okay, this is like a machine, and I want to use it to make my life easier." We had a lot of progress in that time.

And in that time there were actually two camps: one was the AI camp and the other was the IA camp. The Artificial Intelligence camp was thinking, "Okay, let's make computers really intelligent, and we don't have to do anything." And the other camp was Intelligence Augmentation, which was coming from the other direction: "No, let's make humans smarter, let's evolve computers into tools humans can use. Make humans smarter, and then when humans are smarter, we can actually make even better tools for us." And so have a continual co-evolution. I actually gave a talk last September in Belgrade about that topic, called The Lost Medium.

Ryan: Nice.

Daniel: Where I was talking about how we lost that thing, and that we actually have to see the computer as a tool to make our lives better. We can see some small examples nowadays, like for example, you're in a foreign city, and you have Google Maps with you. You're safe. You just look at your phone, and it will guide you to where you need to go.

Ryan: As long as you have a data plan.

Daniel: Do you remember when you were going on holidays with your mom and dad, like, I don't know, 20 years ago, and they were fighting over where you have to go, and like, "You have to tell me earlier...," like it was really stressful. And now look, we have a GPS there, and it's like, "Oh okay, we're arriving in 20 minutes," and everything's fine and so on.

Ryan: There's no question, just whose app is maybe more correct than the other one.

Daniel: Yeah, and that's a perfect example of augmenting humans. There are lots of other examples, but we fail in many areas. Like for example, if I want to share an article and add a comment on why it's important for you, because we were talking about that thing, it's really hard to do. I mean, you'd probably write an email, but then it's not really searchable, and you forget about it, and these are not really difficult things. Or the other thing, like before we met I wanted to call you, and my phone didn't know your number. Why not? I mean we had a conversation over email before, and over LinkedIn–

Ryan: So it would make sense that we could–

Daniel: It would make sense that my computer's smart enough to figure out we're connected, and I mean you don't have to know my bank account number, but you could know my, I don't know, street address maybe, and my phone number for example. And then, let's say we have a drink afterwards, and you don't have any money on you, it would be easier to just give you access to my bank account, you just transfer money there, and that's it. You could easily create an app for that, or a tool or something like that, and there were many examples of startups doing exactly that. But I think that's not the entire solution, because we have to think...

We have to go a step back and think about what the actual problem is here. The actual problem is not that I don't have your contact information, it's more about how I can stay in contact with you. That's the main thing. Like for example, if you change your number, email address and so on. And by looking at that you see – you start to notice these small issues all the time. Like for example, a few days ago I wanted to send an email to my brother-in-law, and I wrote the email, and just before sending it, I remembered that he had actually changed companies, so his email address was not the correct one anymore. And I mean it's my brother-in-law, so of course we're connected, but why don't I have his new work email?

Ryan: Yeah, and that could have been your only way to get in touch with him, and then there would have broken an entire–

Daniel: Now I have to actually send him a text asking him, "Hey, what's your email address?" And then he was just like, "Yeah, you can call me as well and we'll talk." It's just a struggle all the time with these small things.

Ryan: Not smooth.

Daniel: Exactly.

Ryan: So, what are you doing at the moment to fix that problem? Are you creating more apps, or tools like that?

Daniel: The thing that we were talking about is the vision behind my business, which is fueling it, and of course, I'm doing websites, web processes, so that's what I help my clients with. On the side, in my spare time I prepare some talks, some prototypes, the Lost Medium is an example of that one. Just to figure out what's going on, like are there different solutions to that? Then I have a more or less monthly series on my blog and my newsletter called summing up where I try to collect puzzle pieces in that area, and try to put them together, and see what's missing there. Because I don't think that there's a single solution to the problem.

I think the main thing is we kind of lost the idea that – let me put it another way: we think the computer is at the top of its game. And if you look back, each year the computer gets better. A phone gets better, it gets faster, it gets bigger or smaller, depending on what you want to do, we have more devices and so on. People have the impression that the computer is state of the art, which of course it's not. And I think it's... It's about sharing the idea that we're not at the end yet.

Ryan: Yeah, maybe just at the beginning actually.

Daniel: Oh definitely at the beginning. I mean the computer is a completely new medium, and if you look at what happened to of course TV and radio, but also the printing press, or cars, they completely changed the whole world. And there is a great saying by Marshall McLuhan, he said, "We shape our tools and thereafter our tools shape us." And if you look at cars, it's the perfect example. You see like we have streets everywhere. A city is made up of streets right, 50% is streets, if not more. And this wasn't the case before.

Ryan: Yeah well, it definitely seems like you're on the right path.

Daniel: I hope so. [laughs]

Ryan: Yeah, I mean I've seen a lot of things recently, for example showing, you know, that the best solution is not a human working alone, or a machine working alone, it's them working together. Especially like in chess, some of these things that they've done, so it seems like in the future hopefully that'll continue to be the best solution. And as you said before, keep making ourselves better, so we can create even better tools, and then we'll see how they shape us though. At the moment you've got the entire world with their heads down, 15°, looking into their phones. That has definitely helped out chiropractors for sure. [laughs]

We'll see what other effects it has. So, for someone interested in contacting you, getting to know more about your business, where should they go? What's the best place to check your business out.

Daniel: Definitely my website, it's dgsiegel.net. We'll probably put it in the show notes.

Ryan: Yeah, it'll definitely be in the show notes.

Daniel: You can find out what I'm doing there. I also got a blog and a newsletter, but for your listeners, I prepared a small cheat sheet, where you can learn how to create the perfect landing page which actually converts visitors to customers, and you can find it at dgsiegel.net/ybim.

Ryan: Thank you.

Daniel: Of course.

Ryan: Yeah, I'll definitely be downloading that myself. [laughs] At least you'll get one from that.

Daniel: There's everything there.

Ryan: Perfect, along with any email address you want.

Daniel: Exactly, choose any. [laughs]

Ryan: Very cool. Well thanks again for taking the time to share your experience, and your business with us.

Daniel: Sure, thank you for your time.

Thank you for listening to Your Business in Munich. For more information, on maximizing your personal and professional potential, go to ryanlsink.com. Have a great rest of your day.

March 21, 2017

Announcing the Shim review process

Shim has been hugely successful, to the point of being used by the majority of significant Linux distributions and many other third party products (even, apparently, Solaris). The aim was to ensure that it would remain possible to install free operating systems on UEFI Secure Boot platforms while still allowing machine owners to replace their bootloaders and kernels, and it's achieved this goal.

However, a legitimate criticism has been that there's very little transparency in Microsoft's signing process. Some people have waited for significant periods of time before receiving a response. A large part of this is simply that demand has been greater than expected, and Microsoft aren't in the best position to review code that they didn't write in the first place.

To that end, we're adopting a new model. A mailing list has been created at shim-review@lists.freedesktop.org, and members of this list will review submissions and provide a recommendation to Microsoft on whether these should be signed or not. The current set of expectations around binaries to be signed is documented here, and the current process here - it is expected that this will evolve slightly as we get used to the process, and we'll provide a more formal set of documentation once things have settled down.

This is a new initiative and one that will probably take a little while to get working smoothly, but we hope it'll make it much easier to get signed releases of Shim out without compromising security in the process.


The time of the year…

Springtime is releasetime!

Monday saw a couple of new releases:

Shotwell

Shotwell 0.26.0 “Aachen” was released. No “grand” new features, more slashing of papercuts and internal reworks. I removed a big chunk of deprecated functions from it, with more to come for 0.28 on our way to GTK+ 4, and laid the groundwork for better integration with desktop online account systems such as UOA and GOA.

GExiv2 also received a bugfix release, its main highlight being proper documentation generation.
 

Rygel

In Rygel, things are quieter. Version 0.34.0 moved some helpful classes for configuration handling to librygel-core, and a couple of bugs were fixed. GSSDP and GUPnP also saw small bugfix releases.

LibreOffice 5.3.1 is out

Last week, LibreOffice released version 5.3.1. This seems to be an incremental release over 5.3 and doesn't seem to change the new user interface in any noticeable way.

This is both good and bad news for me. As you know, I have been experimenting with LibreOffice 5.3 since LibreOffice updated the user interface. Version 5.3 introduced the "MUFFIN" interface. MUFFIN stands for My User Friendly Flexible INterface. Because someone clearly wanted that acronym to spell "MUFFIN." The new interface is still experimental, so you'll need to activate it through Settings→Advanced. When you restart LibreOffice, you can use the View menu to change modes.

So on the one hand, I'm very excited for the new release!

But on the other hand, the timing is not great. Next week would have been better. Clearly, LibreOffice did not have my interests in mind when they made this release.

You see, I teach an online CSCI class about the Usability of Open Source Software. Really, it's just a standard CSCI usability class. The topic is open source software because there are some interesting usability cases there that bear discussion. And it allows students to pick their own favorite open source software project that they use in a real usability test for their final project.

This week, we are doing a usability test "mini-project." This is a "dry run" for the students to do their own usability test for the first time. Each student is doing the test with one participant, but all are using the same program. We're testing the new user interface in LibreOffice 5.3, using the Notebookbar in Contextual Groups mode.

So we did all this work to prep for the usability test "mini-project" using LibreOffice 5.3, only for the project to release version 5.3.1 right before we do the test. So that's great timing, there.

But I kid. And the new version 5.3.1 seems to have the same user interface path in Notebookbar-Contextual Groups. So our test should bear the same results in 5.3 or 5.3.1.

This is an undergraduate class project, and will not generate statistically significant results like a formal usability test in academic research. But the results of our test may be useful, nonetheless. I'll share an overview of our results next week.

GNOME Photos 3.24.0

After exploring new territory with sharing and non-destructive editing over the last two releases, it was time for some introspection. We looked at some of the long-standing problems within our existing feature set and tried to iron out a few of them.

Overview Grids

It was high time that we overhauled our old GtkIconView-based overview grids. Their inability to reflow the thumbnails leads to an ugly vertical gutter of empty space unless the window is just the right size. The other problem was performance. GtkIconView gets extremely slow when the icons are updated, which usually happens when content is detected for the first time and starts getting thumbnailed.

gnome-photos-flowbox-1

Fixing this has been a recurrent theme in Photos since the middle of the previous development cycle. The end goal was to use a GtkFlowBox-based grid, but it involved a lot more work than replacing one user interface component with another.
Too many things relied on the existence of a GtkTreeModel, and had to be ported to our custom GListModel implementation before we could achieve any user-visible improvement. Once all those yaks had been shaved, we finally started working on the widget at the Core Apps Hackfest last year.

Anyway, I am happy that all that effort has come to fruition now.

Thumbnails

Closely related to our overview grids are the thumbnails inside them. Photos has perpetually suffered from GIO’s inability to let an application specifically request a high resolution thumbnail. While that is definitely a fixable problem, the fact that we store our edits non-destructively as serialized GEGL graphs makes it very hard to use the desktop-wide infrastructure for thumbnails. One cannot expect a generic thumbnailer to interpret the edits and apply them to the original image because their representation will vary greatly from one application to another. That led to the other problem where the thumbnails wouldn’t reflect the edited state of an image.

Therefore, starting from version 3.24.0, Photos has its own out-of-process thumbnailer and a separate thumbnail cache. They ensure that the thumbnails are of a suitably high resolution, and the edited state of an image is never ignored.

Exposure and Blacks

Personally, I have been a heavy user of Darktable’s exposure and blacks adjustment tool, and I really missed something like that in GNOME Photos. Ultimately, at this year’s WilberWeek I fixed gegl:exposure to imitate its Darktable counterpart, and exposed it as a tool in Photos. I am happy with the outcome and I have so far enjoyed dogfooding this little addition.


2017-03-21 Tuesday.

  • Up early, noticed the ice-crusher motor relay is the same as the compressor one (and presumably far far less used), de-soldered both relays, swapped them and bingo: a working compressor. Bench testing suggests the duff relay is only good at making nice clicking noises, rather than power switching; good.
  • Poked mail; pushed fixes.

March 20, 2017

2017-03-20 Monday.

  • H. ill, fridge compressor appears to have packed-in; spent a while dis-assembling it variously; annoying. Consultancy call. Lunch, sync. with Andras & Tor, code review.
  • Poked at the Daewoo FRN-U20DA in the evening ; no error codes, nothing doing, compressor not running, but starter seems ok, and continuity good. Brian over to help; jumped the compressor all working well there in the plumbing section - good. Set to work on the electronics; found the RT (Room Temperature) sensor, seems to be fine; hmm. Ordered a new compressor relay: OMIH-SS-112LM.

Buying a Utah teapot

The Utah teapot was one of the early 3D reference objects. It's canonically a Melitta but hasn't been part of their range in a long time, so I'd been watching Ebay in the hope of one turning up. Until last week, when I discovered that a company called Friesland had apparently bought a chunk of Melitta's range some years ago and sell the original teapot[1]. I've just ordered one, and am utterly unreasonably excited about this.

[1] They have them in 0.35, 0.85 and 1.4 litre sizes. I believe (based on the measurements here) that the 1.4 litre one matches the Utah teapot.


WebKitGTK+ 2.16

The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves the memory consumption, adds new API as required by applications, includes new debugging tools, and of course fixes a lot of bugs.

Memory consumption

After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which meant hardware acceleration was always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the biggest improvement in memory consumption, but not the only one. Even in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. In Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

CSS Grid Layout

Yes, the future is here, and now available by default in all WebKitGTK+-based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you can find all the details in Manuel’s blog.

New API

The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

Hardware acceleration policy

Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior, WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, but only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view, like for example the Evolution composer, while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources on the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.
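
For illustration, here is a minimal C sketch of how an application might use the setting; it assumes the WebKitGTK+ 2.16 WebKitSettings API, and the ALWAYS policy is just an example choice for an embedded-style view:

/* Minimal sketch: create a web view that always uses hardware acceleration.
 * Assumes the WebKitGTK+ 2.16 hardware-acceleration-policy setting. */
#include <webkit2/webkit2.h>

static GtkWidget *
create_always_accelerated_view (void)
{
  WebKitSettings *settings = webkit_settings_new ();

  /* Other policies: WEBKIT_HARDWARE_ACCELERATION_POLICY_ON_DEMAND (enable it
   * only when a website needs it) and ..._NEVER (never enable it). */
  webkit_settings_set_hardware_acceleration_policy (settings,
      WEBKIT_HARDWARE_ACCELERATION_POLICY_ALWAYS);

  return webkit_web_view_new_with_settings (settings);
}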

Network proxy settings

Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases WebKitGTK+ 2.16 includes a new UI process API to configure all proxy settings available in the GProxyResolver API.
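
As a rough sketch of what that could look like from an application (the proxy URI and ignore list below are placeholders, not values from the release notes):

/* Sketch: configure a custom proxy on a web context with the new
 * WebKitGTK+ 2.16 proxy API; "proxy.example.com" is just a placeholder. */
#include <webkit2/webkit2.h>

static void
set_custom_proxy (WebKitWebContext *context)
{
  const gchar * const ignore_hosts[] = { "localhost", NULL };
  WebKitNetworkProxySettings *proxy_settings =
      webkit_network_proxy_settings_new ("http://proxy.example.com:8080",
                                         ignore_hosts);

  webkit_web_context_set_network_proxy_settings (context,
      WEBKIT_NETWORK_PROXY_MODE_CUSTOM, proxy_settings);
  webkit_network_proxy_settings_free (proxy_settings);
}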

Private browsing

WebKitGTK+ has always had a WebKitSetting to enable or disable private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode just by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data in it. WebKitGTK+ 2.16 adds a new API that allows creating ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all web views associated with them will be ephemeral automatically.
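
A small sketch of both options, assuming the 2.16 ephemeral API:

/* Sketch: two ways to get a "private browsing" web view with WebKitGTK+ 2.16. */
#include <webkit2/webkit2.h>

static GtkWidget *
create_private_view (void)
{
  /* Option 1: an ephemeral web context; every view created for it is
   * ephemeral and never writes persistent data to disk. */
  WebKitWebContext *context = webkit_web_context_new_ephemeral ();
  GtkWidget *view = webkit_web_view_new_with_context (context);

  /* Option 2: a single ephemeral view on a regular context, using the
   * construct-only "is-ephemeral" property:
   *   g_object_new (WEBKIT_TYPE_WEB_VIEW, "is-ephemeral", TRUE, NULL);
   */

  g_assert (webkit_web_view_is_ephemeral (WEBKIT_WEB_VIEW (view)));
  return view;
}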

Website data

WebKitWebsiteDataManager was added in 2.10 to configure the default paths on which website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.
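
For example, a client could clear cookies and the HTTP disk cache with something along these lines (a sketch, assuming the 2.16 clear/clear_finish additions):

/* Sketch: remove all stored cookies and HTTP disk cache via the expanded
 * WebKitWebsiteDataManager API in WebKitGTK+ 2.16. */
#include <webkit2/webkit2.h>

static void
clear_finished_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
  GError *error = NULL;

  if (!webkit_website_data_manager_clear_finish (WEBKIT_WEBSITE_DATA_MANAGER (source),
                                                 result, &error))
    {
      g_warning ("Clearing website data failed: %s", error->message);
      g_error_free (error);
    }
}

static void
clear_cookies_and_cache (WebKitWebContext *context)
{
  WebKitWebsiteDataManager *manager =
      webkit_web_context_get_website_data_manager (context);

  /* A timespan of 0 means "remove the data regardless of how old it is". */
  webkit_website_data_manager_clear (manager,
      WEBKIT_WEBSITE_DATA_COOKIES | WEBKIT_WEBSITE_DATA_DISK_CACHE,
      0, NULL, clear_finished_cb, NULL);
}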

Dynamically added forms

Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded. In those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.

Custom print settings

The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 a similar API to the GTK+ one has been added to recover that functionality in Evolution.

Notification improvements

Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.
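
A hedged sketch of pre-seeding permissions follows; example.org is a placeholder origin, and it assumes the context copies the origin lists rather than taking ownership:

/* Sketch: tell the web context up front which origins may show notifications,
 * so the user is not asked again (WebKitGTK+ 2.16 API). */
#include <webkit2/webkit2.h>

static void
seed_notification_permissions (WebKitWebContext *context)
{
  GList *allowed = NULL;

  allowed = g_list_prepend (allowed,
      webkit_security_origin_new_for_uri ("https://example.org"));

  webkit_web_context_initialize_notification_permissions (context,
      allowed, NULL /* no explicitly disallowed origins */);

  g_list_free_full (allowed, (GDestroyNotify) webkit_security_origin_unref);
}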

Debugging tools

Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

Memory sampler

This tool monitors the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes will automatically take samples of memory usage every second. For every sample a detailed report of the memory used by the process is generated and written to a file in the temp directory.

$ WEBKIT_SAMPLE_MEMORY=1 MiniBrowser 
Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036

The files contain a list of sample reports like this one:

Timestamp                          1490004807
Total Program Bytes                1960214528
Resident Set Bytes                 84127744
Resident Shared Bytes              68661248
Text Bytes                         4096
Library Bytes                      0
Data + Stack Bytes                 87068672
Dirty Bytes                        0
Fast Malloc In Use                 86466560
Fast Malloc Committed Memory       86466560
JavaScript Heap In Use             0
JavaScript Heap Committed Memory   49152
JavaScript Stack Bytes             2472
JavaScript JIT Bytes               8192
Total Memory In Use                86477224
Total Committed Memory             86526376
System Total Bytes                 16729788416
Available Bytes                    5788946432
Shared Bytes                       1037447168
Buffer Bytes                       844214272
Total Swap Bytes                   1996484608
Available Swap Bytes               1991532544

Resource usage overlay

The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory and JavaScript garbage collector timer information. The overlay can be shown/hidden by pressing Ctrl+Shift+G.

We plan to add more information to the overlay in the future like memory cache status.

Blender Constraints

Last time I wrote about artistic constraints being useful for staying focused and being able to push yourself to the max. In the near future I plan to dive into Emeus, the new constraint-based layout for GTK4. Today I’ll briefly touch on another type of constraint, the Blender object constraint!

So what are they, and how are they useful in the context of a GNOME designer? We make quite a few prototypes, and one of the things that helps decide whether a behavior is clear and comprehensible is motion design, particularly transitions. And while we do not use tools directly linked to our stack, it helps to build simple rigs to lower the manual labor required to make sometimes similar motion designs and to limit the number of mistakes that can be made. Even simple animations usually consist of many keyframes (defined, non-computed states in time). Defining relationships between objects and creating setups, “rigs”, is a way to create a sort of working model of the object we are trying to mock up.

Blender Constraints

Constraints in Blender allow you to define certain behaviors of objects in relation to others. Constraints allow you to limit the movement of an object to specific ranges (a scrollbar not being able to be dragged outside of its gutter), or to convert certain motion of an object into a different transformation of another (a slider adjusting the horizon of an image, i.e. rotating it).

The simplest method of defining a relation is through a hierarchy. An object can become a parent of another, and thus all children will inherit the movements/transforms of the parent. However there are cases — like interactions of a cursor with other objects — where this relationship is only temporary. Again, constraints help here, in particular the copy location constraint. This is because you can define the influence strength of a constraint. Like everything in Blender, this can also be keyframed, so at some point you can follow the cursor and later disengage this tight relationship. By the way, if you ever thought you could manually keyframe two animations so they do not slide, think again.

Inverse transform in Blender

The GIF screencasts have been created using Peek, which is available to download as a flatpak.

Peek, a GIF screencasting app.

Builder 3.24

I’m excited to announce that Builder 3.24 is here and ready for you to play with!

It should look familiar because most of the work this cycle was underneath the hood. I’m pretty happy with all the stabilization efforts from the past couple of weeks. I’d like to give a special thanks to everyone who took the time to file bugs, some of whom also filed patches.

With Outreachy and GSoC starting soon, I’m hoping that this will help reduce any difficulty for newcomers to start contributing to GNOME applications. I expect we’ll continue to polish that experience for our next couple of patch releases.

March 18, 2017

Builder on the Lunduke Hour

In case you missed it, I was on the Lunduke Hour last week talking about Builder. In reality it turned into a discussion about everything from why Gtk 4, efficient text editor design, creating UI designers, Flatpak, security implications of the base OS, and more.

Vala 0.36 Released

This cycle Vala has received a lot of love from its users and maintainers, who have pushed hard to get a lot of bug fixes in place, thanks to a lot of patches attached to bug reports.

The list of new features and bug fixes is in the NEWS file in the repository. Bindings have received a lot of fixes too; check them out and see if you need a workaround.

Many thanks to all contributors and maintainers for making this release a big one.

Highlights

  • Update manual using DocBook from wiki.gnome.org as source [#779090]
  • Add support for array-parameters with rank > 1 in signals [#778632]
  • Use GTask instead of GSimpleAsyncResult with GLib 2.36/2.44 target [#763345]
  • Deny access to protected constructors [#760031]
  • Support [DBus (signature = …)] for properties [#744595]
  • Add [CCode (“finish_instance = …”)] attribute [#710103]
  • Support [HasEmitter] for vala sources [#681356]
  • Add support for the \v escape character [#664689]
  • Add explicit copy method for arrays [#650663]
  • Allow underscores in type parameter names [#644938]
  • Support [FormatArg] attribute for parameters
  • Ignore the --thread command line option and drop gthread-2.0 references
  • Check inferred generic-types of MemberAccess [#775466]
  • Check generic-types count of DelegateType [#772204]
  • Fix type checking when using generics in combination with subtype [#615830]
  • Fix type parameter check for overriding generic methods
  • Use g_signal_emit where possible [#641828]
  • Only emit notify of properties if value actually changed [#631267] [#779955]
  • Mark chained relational expressions as stable [#677022]
  • Perform more thorough compatibility check of inherited properties [#779038]
  • Handle nullable ValueTypes in signals delegates properly [#758816]

Will miss GUADEC 2017

Registration is now open for GUADEC 2017! This year, the GNOME Users And Developers European Conference (GUADEC) will be hosted in beautiful Manchester, UK between 28th July and 2nd August.

Unfortunately, I can't make it.

I work in local government, and just like last year, GUADEC falls during our budget time at the county. Our county budget is on a biennium. That means during an "on" year, we make our budget proposals for the next two years. In the "off" year, we share a budget status.

I missed GUADEC last year because I was giving a budget status in our "off" year. And guess what? This year, department budget presentations again happen during GUADEC.

During GUADEC, I'll be making our budget proposal for IT. This is our one opportunity to share with the Board our budget priorities for the next two years, and to defend any budget adjustment. I can't miss this meeting.

Defence against the Dark Arts involves controlling your hardware

In light of the Vault 7 documents leak (and the rise to power of Lord Voldemort this year), it might make sense to rethink just how paranoid we need to be.  Jarrod Carmichael puts it quite vividly:

I find the general surprise… surprising. After all, this is in line with what Snowden told us years ago, which was already in line with what many computer geeks thought deep down inside for years prior. In the good words of monsieur Crête circa 2013, the CIA (and to an extent the NSA, FBI, etc.) is a spy agency. They are spies. Spying is what they’re supposed to do! 😁

Well, if these agencies are really on to you, you’re already in quite a bit of trouble to begin with. Good luck escaping them, other than living in an embassy or airport for the next decade or so. But that doesn’t mean the repercussions of their technological recklessness—effectively poisoning the whole world’s security well—are not something you should ward against.

It’s not enough to just run FLOSS apps. When you don’t control the underlying OS and hardware, you are inherently compromised. It’s like driving over a minefield with a consumer-grade Hummer while dodging rockets (at least use a hovercraft or something!) and thinking “Well, I’m not driving a Ford Pinto!” (but see this post where Todd Weaver explains the implications much more eloquently—and seriously—than I do).

Considering the political context we now find ourselves in, pushing for privacy and software freedom has never been more relevant, as Karen Sandler pointed out at the end of the year. This is why I’m excited that Purism’s work on coreboot is coming to fruition and that it will be neutralizing the Intel Management Engine on its laptops, because this is finally providing an option for security-concerned people other than running exotic or technologically obsolete hardware.

March 17, 2017

Recipes 1.0

Recipes 1.0 is here, in time for GNOME 3.24 next week. You can get it here:

https://download.gnome.org/sources/gnome-recipes/1.0/

A flatpak is available here:

https://matthiasclasen.github.io/recipes-releases/gnome-recipes.flatpakref

and can be installed with

flatpak install https://matthiasclasen.github.io/recipes-releases/gnome-recipes.flatpakref

Thanks to everybody who helped us to reach this point by contributing recipes, sending patches, translations or bug reports!

Documentation

Recipes looks pretty good in GNOME Software already, but one thing is missing: No documentation costs us a perfect rating. Thankfully, Paul Cutler has shown up and started to fill this gap, so we can get the last icon turned blue with the next release.

Since one of the goals of Recipes is to be an exemplary Flatpak app, I took this opportunity to investigate how we can handle documentation for sandboxed applications.

One option is to just put all the docs on the web and launch a web browser, but that feels a bit like cheating. Another option is to export all the documentation files from the sandbox and launch the host help browser on it. But this would require us to recursively export an entire directory full of possibly malicious content – so far, we’ve been careful to only export individual, known files like the desktop file or the app icon.

Therefore, we decided that we should instead ship a help browser in the GNOME runtime and launch it inside the sandbox. This turns out to work reasonably well, and will be used by more GNOME apps in the near future.

Interns

Apart from this ongoing work on documentation, a number of bug fixes and small improvements have found their way into the 1.0 release. For example, you can now find recipes by searching for the chef by name. And we ask for confirmation if you are about to close the window with unsaved changes.

Some of these changes were contributed by prospective Outreachy interns.

Roadmap

I have mentioned it before, you can find some information about our future plans for recipes here:

https://wiki.gnome.org/Apps/Recipes/Development

Your help is more than welcome!

GUADEC 2017 on the cheap

I’ve just booked flight and hotel for GUADEC 2017, which will be held in Manchester. André suggested that I should decide this time. We’ll be staying in a wheelchair-accessible room (the room is slightly bigger :P) with Easyhotel. It’s 184 GBP for 5 nights and NOT close to the venue (but not bad via public transport). Easyhotel works like a budget airline. You’ll have to pay more for WiFi, cleaning, breakfast, a remote, etc. I ignored all of these essential things, which means André has to do without them as well. The paid WiFi might even be iffy, so I’d rather use my mobile data; plus, as of mid-June that shouldn’t cost anything extra thanks to new EU regulations. Before GUADEC I might switch to another mobile phone company to get 4-5GB/month for 18 EUR/month. André will probably want to work remotely. Let’s see closer to the date what’s a good solution (share my data?).

Flight wise for me Easyjet is cheapest (70 EUR) and it’s the fastest method. Funny to combine Easyjet with Easyhotel. I usually use a combination of Google flights and Skyscanner to see the cheapest options. However, rome2rio works as well. The latter will also check alternative methods to get to Manchester, e.g. via Liverpool and so on. For Skyscanner, somehow the Dutch version often gives me cheaper options than Skyscanner in other languages. Google flights usually is much more expensive. I only use Google flights to determine the cheapest days, then switch to Skyscanner to get the lowest price.

 

A simple house-moving tip: use tape to mark empty cupboards

When you've emptied a cupboard, put masking tape across it, ideally in a colour that's highly visible. This way you immediately know which ones are finished and which ones still need attention. You won't keep opening the cupboards a million times to check, and after the move it takes merely seconds to undo.

March 16, 2017

Is static linking the solution to all of our problems?

Almost all programming languages designed in the last couple of years have a strong emphasis on static linking. Their approach to dependencies is to have them all in source which is compiled for each project separately. This provides many benefits, such as binaries that can be deployed everywhere and not needing to have or maintain a stable ABI in the language. Since everything is always recompiled and linked from scratch (apart from the standard library), ABI is not an issue.

The proponents of static linking often claim that shared libraries are unnecessary. Recompiling is fast and disks are big, thus it makes more sense to link statically than define and maintain ABI for shared libraries, which is a whole lot of ungrateful and hard work.

To see if this is the case, let's do an approximation experiment.

Enter the Dost!

Let's assume a new programming language called Dost. This language is special in that it provides code that is just as performant as the equivalent C code and takes the same amount of space (which is no small feat). It has every functionality anyone would ever need, does not require a garbage collector, and its syntax is loved by all. The only thing it does not do is dynamic linking. Let us further imagine that, by magic, all open source projects in the world get rewritten in Dost overnight. How would this affect a typical Linux distro?

Take for example the executables in /usr/bin. They are all implemented in Dost, and thus are linked statically. They are probably a bit larger than their original C versions which were linked dynamically. But by how much? How would we find out?

Science to the rescue

Getting a rough ballpark estimate is simple. Running ldd /usr/bin/executable gives a list of all libraries the given executable links against. If it were linked statically, the executable would have a duplicate copy of all these libraries. Said another way, each executable grows by the size of its dependencies. Then it is a matter of writing a script that goes through all the executables, looks up their dependencies, removes language standard libraries (libc, libstdc++, a few others) and adds up how much extra space these duplicated libraries would take.

The script to do this can be downloaded from this Github repo. Feel free to run it on your own machines to verify the results.
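
For illustration only, here is a rough C sketch of the same idea for a single executable. It is not the author's script (that one lives in the linked repo), just an approximation that shells out to ldd and sums up the sizes of the dependency files:

/* Sketch: estimate how much a single executable would grow if it carried
 * static copies of its ldd dependencies, skipping the language runtime. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static int
is_runtime_lib (const char *line)
{
  return strstr (line, "libc.so") || strstr (line, "libstdc++") ||
         strstr (line, "libm.so") || strstr (line, "ld-linux") != NULL;
}

long
duplicated_bytes (const char *executable)
{
  char cmd[512], line[1024];
  long total = 0;
  FILE *p;

  snprintf (cmd, sizeof cmd, "ldd %s", executable);
  if (!(p = popen (cmd, "r")))
    return 0;

  while (fgets (line, sizeof line, p))
    {
      /* Lines look like "libfoo.so.1 => /usr/lib/libfoo.so.1 (0x...)". */
      char *path = strstr (line, "=> /");
      char *end;
      struct stat st;

      if (!path || is_runtime_lib (line))
        continue;
      path += 3;                       /* skip "=> ", keep the path   */
      if ((end = strchr (path, ' ')))  /* cut the trailing " (0x...)" */
        *end = '\0';
      if (stat (path, &st) == 0)
        total += st.st_size;           /* the executable grows by this much */
    }
  pclose (p);
  return total;
}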

Measurement results

Running that script on a Raspberry Pi with Raspbian used for running an IRC client and random compile tests says that statically linked binaries would take an extra 4 gigabytes of space.

Yes, really.

Four gigabytes is more space than many people have on their Raspi SD card. Wasting all that on duplicates of the exact same data does not seem like the best use of those bits. The original shared libraries take only about 5% of this; static linking expands them 20-fold. Running the measurement script on a VirtualBox Ubuntu install says that on that machine the duplicates would take over 10 gigabytes. You can fit an entire Ubuntu install in that space. Twice. Even if this were not an issue for disk space, it would be catastrophic for instruction caches.

A counterargument people often make is that static linking is more efficient than dynamic linking because the linker can throw away those parts of dependencies that are not used. If we assume that the linker did this perfectly, executables would need to use only 5% of the code in their dependencies for static linking to take less space than dynamic linking. This seems unlikely to be the case in practice.

In conclusion

Static linking is great for many use cases. These include embedded software, firmwares and end user applications. If your use case is running a single application in a container or VM, static linking is a great solution that simplifies deployment and increases performance.

On the other hand claiming that a systems programming language that does not provide a stable ABI and shared libraries can be used to build the entire userland of a Linux distribution is delusional. 

Bosch Connected Experience: Eclipse Hono and MsgFlo

I’ve been attending the Bosch Connected Experience IoT hackathon this week at Station Berlin. Bosch brought a lot of different devices to the event, all connected to send telemetry to Eclipse Hono. To make them more discoverable, and enable rapid prototyping I decided to expose them all to Flowhub via the MsgFlo distributed FBP runtime.

The result is msgflo-hono, a tool that discovers devices from the Hono backend and exposes them as foreign participants in a MsgFlo network.

BCX Open Hack

This means that when you connect Flowhub to your MsgFlo coordinator, all connected devices appear there, with a port for each sensor they expose. And since this is MsgFlo, you can easily pipe their telemetry data to any Node.js, Python, Rust, or other program.

Hackathon project

Since this is a hackathon, there is a competition for projects made at the event. To make the Hono-to-MsgFlo connectivity and Flowhub's visual programming capabilities more demoable, I ended up hacking together a quick example project — a Bosch XDK controlled air theremin.

This comes in three parts. First of all, we have the XDK exposed as a MsgFlo participant, and connected to a NoFlo graph running on Node.js

Hono telemetry on MsgFlo

The NoFlo graph starts a web server and forwards the telemetry data to a WebSocket client.

NoFlo websocket server

Then we have a forked version of Vilson’s webaudio theremin that uses the telemetry received via WebSockets to make sound.

NoFlo air theremin

The whole setup seems to work pretty well. The XDK is connected to WiFi here, transmits its telemetry to a Hono instance running on AWS. This data gets forwarded to the MsgFlo MQTT network, and from there via WebSocket to a browser. And all of these steps can be debugged and experimented with in a visual way.

Relevant links:

Update: we won the Open Hack Challenge award for technical brilliance with this project.

BCX entrance

March 15, 2017

guile 2.2 omg!!!

Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

high fives all around

As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from working with them in the future.

Of course the ideas for the contification and closure optimization passes are indebted to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

a digression on the nature of seeking and knowledge

I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

getting the goods

It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

(If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)

Karton – running Linux programs on macOS, a different Linux distro, or a different architecture

At work I use Linux, but my personal laptop is a Mac (due to my previous job developing for macOS).

A few months ago, I decided I wanted to be able to do some work from home without carrying my work laptop home every day.
I considered using a VM, but I don’t like the experience of mixing two operating systems. On Mac I want to use the native key bindings and applications, not a confusing mix of Linux and Mac UI applications.

In the end, I wrote Karton, a program which, using Docker, manages semi-persistent containers with easy-to-use automatic folder sharing and lots of small details that make the experience smooth. You shouldn’t notice you are using command-line programs from a different OS.

Karton logo

After defining which distro and packages you need (this is called an “image”), you can just execute Linux programs by prefixing them with karton run IMAGE-NAME LINUX-COMMAND. For example:

$ uname -a # Running on macOS.
Darwin my-hostname 16.4.0 Darwin Kernel Version 16.4.0 [...]

$ # Run the compiler in the Ubuntu image we use for work
$ # (which we called "ubuntu-work"):
$ karton run ubuntu-work gcc -o test_linux test.c

$ # Verify that the program is actually a Linux one.
$ # The files are shared and available both on your
$ # system and in the image:
$ file test_linux
test_linux: ELF 64-bit LSB executable, x86-64, [...]

Karton runs on Linux as well, so you can do development targeting a different distro or a different architecture (for instance ARMv7 while using an x86_64 computer).

For more examples, installation instructions, etc. see the Karton website.

March 14, 2017

Approaching 3.24

So, we have just entered code freeze approaching the GNOME 3.24 release, which is scheduled for next week.
In Maps, I just released the final beta (3.23.92) tarball yesterday, but since I made a git mistake (shit happens), I had to push an additional tag, so if you want to check out the release using the tag (and not using a tarball), go for “v3.23.92-real”.
Right after the release I got some reports of Maps not working when running sandboxed from a Flatpak package. It seems that the GOA (GNOME Online Accounts) service is not reachable via D-Bus from inside the sandbox, and while this is something that should be fixed (for applications that need this privilege when running sandboxed), we still shouldn't crash. Actually, I think this might also affect people wanting to run Maps in a more minimalistic non-GNOME environment. So I have proposed a patch that should fix this issue, and hopefully we can get time to review it and get a freeze exception for this as a last-minute fix.

Another thing I should point out, after reading the Phoronix article that followed my last blog post about landing the transit routing feature (I was perhaps a bit unclear on this), is that using the debug environment variable to specify the URL of an OpenTripPlanner server requires that you have an instance to point it to, such as running your own server or using a third-party server. We will not have the infrastructure in place for a GNOME-hosted server in time for the 3.24 release.

Looking ahead towards 3.26, I have some ideas both as proof-of-concept implementation and some thoughts, but more on that later!

New GtkTester project

In my recent private development work, I need to create GTK+ widget libraries and test them before using them in applications.

There are plenty of efforts to provide automated GUI testing; this is another one that works for my case, and I would like to share it. It is written in Vala and is a GTK+ library with just one top-level window: you can attach the widget you want to test, add test cases, check status, and finish by calling asserts. Feel free to ask for anything you need or file issues, in order to improve it.

Sorry if the name is too GTK-ish; someone may want to change it to avoid suggesting any “official backing from GNOME”, which is not the case.

I hope to improve this library, adding more documentation in order to help others use it, if they find it useful.

Enjoy it at GitHub.

How I became a GNOME contributor

Recently, I was asked by my fellow GNOME friends to write about how I went from nothing to being a GNOME contributor. The intention is to motivate people to engage. I don’t think my story is that exciting, but, well, why not? If someone gets motivated and starts contributing, goal achieved. But beware: there ain’t any TL;DR here. It’s just a long story.

So, where should I start?

From Childhood to Early Contributions

My relationship with computers began at an early age, since I became computer-literate thanks to my parents, using an old computer we got from a relative. I remember searching for “how to create exe files” around 2004, and that search yielded a Yahoo! Answers question with the answer “you need to learn a programming language. Start with C”.

And so I did.

My first app was a command line calculator. Moving to Linux was a natural step. And that introduced me to GNOME, and Gtk+. By that time, Gtk+ was still 2.6. I did some small apps for personal use, like an app to track new habits. Unfortunately, those apps are long gone.

And that’s how it all started.

Translations…

Fast forward some years and, one day, while using a GNOME app (can’t remember which one!), I saw an untranslated string! I researched a bit and found out that I could contribute translations through the Brazilian Portuguese translation team. This was 2012: GNOME 3 had just been released, everybody was still very angry, and things were very confusing.

My first upstream contributions date back to the end of 2012, fixing and adding some missing translations.

At that time, there was one project that drew my attention: a new, unreleased app called GNOME Calendar. It was under heavy development in a branch called ui-review, no packages were available… and yet, I simply loved it. The developer, Erick Perez, is highly skilled. This app was a hidden gem.

Unfortunately, development of Calendar stalled and I spent more than 1 year checking the logs almost every day, waiting for changes.

They never came.

… and GNOME Calendar

“This project was too good to die”, I thought. “I have to do something.” First, I tried to finish the design specs of Calendar. You can still find them on my DeviantArt page (click here to check it out).

Then I ran Gedit, opened some files and started hacking Calendar. My very first work was on the Week view prototype we had in there. The video of this new feature is still available:

Notice that the second article of this blog talks about that 🙂

Erick saw that, sent me an email, I attached some patches to Bugzilla, and this is how I officially started contributing to GNOME. That same year, GNOME Calendar had its first release ever – and that makes up the third article of this blog.

GNOME Foundation and Google Summer of Code

After increasing my contribution rate at GNOME Calendar, Erick suggested that I apply for a Foundation membership. In January 2015, I applied and was accepted (yay!!). That same year, I decided to apply for a Google Summer of Code project at Nautilus. The mentor was Carlos Soriano.

The GSoC project was very successful, and not only did Nautilus receive the improvements, but GTK+ itself did too. This is when the “Other Locations” view was introduced, and a big code refactor landed (and allowed Nautilus to improve much more!)

The results of that Summer of Code project

Fun fact: while organizing the tasks of the GSoC, I couldn’t find any personal task manager for GNOME that satisfied me. I took one week and created GNOME To Do to organize my tasks.

With that done, the only step left was presenting that work at GUADEC.

GUADEC and Endless

The presentation at GUADEC went well enough, and meeting the people behind GNOME was one of the most remarkable experiences of my life. At that GUADEC, I met Cosimo, who works at Endless. I remember him downloading the new GTK+ version and saying “look at this cute ‘Other Locations’ button!” (!!!) 🙂

A few weeks after that, I was contacted by Endless, where I am up to this day. Here at Endless, I’m lucky enough to be able to work upstream almost by default. I’m also happy to be able to make GNOME (through Endless OS) reach more people around the world.

What’s next?

New Control Center, more features in Calendar and To Do, and some other surprises I’ll write about in the future 🙂

March 13, 2017

Dependencies and unity builds with Meson

Prebuilt dependencies provided by the distro are awesome. You just install them and start working on your own project. Unfortunately there are many cases where distro packages are either not available or too old. This is especially common when compiling on non-Linux platforms such as Windows but happens on Linux as well when using jhbuild, Flatpak, Snappy or one of the many other aggregators. Dependencies obtained via Meson subprojects also fall into this category.

Unity builds

There is a surprisingly simple way of compiling projects faster: unity builds. The basic principle is that if your target has files foo.cpp, bar.cpp and baz.cpp, you don't compile them. Instead you generate a file called target-unity.cpp, whose contents are this:

#include<foo.cpp>
#include<bar.cpp>
#include<baz.cpp>

Then you compile just this file. This makes the compilation go faster. A lot faster. As an example, converting Qt Creator to compile as a unity build made it compile 90% faster. Counterintuitively, it is even faster to compile a unity file with one core than to use four cores to build the files individually. If this is the first time you have encountered unity builds, this probably feels like a sham, something that just can't be possible. Not only is it possible, but unity builds are used in production in many places, especially in game development. As a different kind of example, SQLite ships as a single "amalgamation file", which is the same thing as a unity build. Unity builds also act as a sort of poor man's link-time optimization, which works even on compilers that do not support LTO natively.
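As a rough illustration of where the speedup and the LTO-like effect come from (a hypothetical sketch with made-up file names, not an example from the original post): every shared header is parsed only once for the whole unity file, and the compiler can see function bodies that would otherwise live in other translation units.

// bar.cpp -- one translation unit
int bonus(int x) { return x + 1; }

// foo.cpp -- another translation unit; normally it only sees a declaration
int bonus(int x);                              // usually pulled in from a header
int use_bonus(int x) { return bonus(x) * 2; }

// target-unity.cpp -- both sources in a single translation unit: shared
// headers are parsed once, and the body of bonus() is visible while
// compiling use_bonus(), so the compiler can inline it without LTO.
#include<bar.cpp>
#include<foo.cpp>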

Unity builds have their own limitations and problems. Some of them are discussed at the end of this article.

Dependencies

Meson has had unity build support available for a long time. The unfortunate downside is that incremental builds take a lot more time. This makes the edit-compile-debug cycle slower which is annoying. There are now two merge requests outstanding (number one, number two) that aim to make the user experience better.

With these changes you can tell Meson to unity build only subprojects, not the main project. This means that all your deps build really fast but the master project is built incrementally. In most use cases subprojects are read-only; only the master project is edited. For most people, dependencies are slower to build than the projects using them, so this gives a nice productivity boost. The other merge request enables finer-grained control by allowing the user to override unityness for each target separately.

Please try out the branches and write your feedback and comments to the merge requests.

Problems of unity builds

Not all projects can be built as unity builds out of the box. The most common problem is having static variables and functions with the same name in different source files. In a unity build these will clash and not compile. There are other problems of similar nature, especially for projects that do crazy things with the preprocessor. These problems are fixable but take some effort. Most open source projects probably won't compile as unity builds out of the box.
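A minimal sketch of such a clash (hypothetical file names): each file compiles on its own because static gives the names internal linkage, but once both are pulled into one unity file the definitions collide.

// log_a.cpp -- fine on its own
static int counter = 0;
static void log_event() { ++counter; }

// log_b.cpp -- also fine on its own; 'static' keeps these names file-local
static int counter = 100;
static void log_event() { --counter; }

// target-unity.cpp -- both now live in the same translation unit:
// error: redefinition of 'counter'
// error: redefinition of 'void log_event()'
#include<log_a.cpp>
#include<log_b.cpp>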

Any project that wants to provide for a unity build must, in practice, have gating CI that compiles the source as a unity build. Otherwise it will break every now and then.

Post-Linux Playa at PUCP

Yesterday we had a post-event #LinuxPlaya session at PUCP, with morning and afternoon slots. The idea was to guide students with the potential to apply to GSoC for GNOME and Fedora, as well as Outreachy for GNOME.

Thanks to the support of Damian and Fabian (former GNOME GSoC students), who guided students from different universities in Lima, Peru. We reviewed the GSoC ideas for gnome-todo, games, and calendar, as well as the IRC channels, mailing lists, and ways to reach the mentors.

This session promoted the work of one woman developer, Geny Leon, and of two candidates for the GNOME documentation team: Lizeth Lucero and Solanch Ccasa 🙂

Thanks to Genghis Rios, who let us use the PUCP facilities for these sessions:

I must highlight the work of Damian Nohales from GNOME Argentina, who helped me so much during these sessions, and in particular helped the student Ariano Cordero. We visited his home in order to help him file the patch for bug 624963.

Public thanks to this incredible man! Thanks Damian Nohales for this! ❤



March 10, 2017

Nextcloud & Linux Desktop

I’ve used different services for my personal agenda, and I always valued it when they integrated well into my Fedora Workstation. Some did it well, some at least provided a desktop app, some only had a web client. That’s fine for many people, but not for me. Call me old-school, but I still prefer desktop applications, especially those that look and behave natively.

Last summer, I decided to install Nextcloud on my VPS. Originally I was planning to replace Dropbox with it, but then I found out I could actually use it for many other things, for all my personal agenda. Shortly after that I realized that I’d found what I was always looking for in terms of integration into my desktop. Nextcloud apps use standard protocols and formats and integrate very well with the desktop apps I use.

nextcloud

Nextcloud/ownCloud is supported by GNOME Online Accounts, so I log in to my server and automagically get this:

Files – my Nextcloud appears in Nautilus as a remote disk. I like that it doesn’t work like the official desktop client of Nextcloud or Dropbox and doesn’t sync files to the local drive. If you work with small files and documents remotely, you can hardly notice lags and they don’t consume space on your hard drive. If I want to work with large files (e.g. video) or offline, I just download them.

Documents – documents that are stored on your Nextcloud server appear among documents in GNOME Documents. The app makes an abstraction layer over different file sources and the user can work with documents no matter where they come from. A nice thing, but I’m a bit conservative in this and prefer working with files and Nautilus.

Contacts – the Nextcloud app for contacts uses CardDAV, so after a login in GOA your contact list appears in all applications that are using the evolution-data-server backend. In my case it’s Evolution and GNOME Contacts. Evolution is still my daily driver at work while I use the specialized apps at home.

Calendars – the calendar app for Nextcloud uses CalDAV, so after a login in GOA you get the same automagic like with contacts, your calendars appear in all apps that are using evolution-data-server. Again in my case it’s Evolution and GNOME Calendar.

Tasks – CalDAV is also used for tasks in Nextcloud, so if you enable calendars in GOA, your task lists will also appear in Evolution or GNOME Todo.

GNOME To Do

Notes – the same applies to notes, you will also be able to automagically access them in Evolution or GNOME Bijiben.

News – the only thing I had to set up separately is a news reader. I use FeedReader which (among other services) supports Nextcloud/ownCloud, too. So I could replace Feedly with it and get a native client as a bonus.

FeedReader

What’s really great is that except for the RSS reader everything is set up with one login. I’m done with Feedly, Evernote, Wunderlist and all those services that each require another login and generally have poor desktop integration. Now I can use Nextcloud, have all my data under control and get great and super-easy-to-setup integration into my desktop.

I can imagine even more areas where Nextcloud can improve my desktop experience. For instance, it’d be great if my desktop user settings could be synced via Nextcloud or I could back them up there and then restore them on my new machine. Or it’d be great if the desktop keyring could work with Passman and sync your passwords.

By the way, integration with my Android phone is equally important to me, and Nextcloud doesn’t fail me there either, although setting it up was not as easy as on my Fedora Workstation. I needed to install the CalDAV-Sync and CardDAV-Sync apps (DAVdroid, which is officially recommended by Nextcloud, never worked for me: a while back it didn’t want to sync my contact list at all; now it does, but it doesn’t import photos). Then my contacts and calendars were synced to the default apps. For tasks I use OpenTasks, for RSS ownCloud/Nextcloud Reader, and for notes MyOwnNotes. To access files, Nextcloud provides its own app.

And if I’m not around my PC or phone, I can always access all the services via the web interface which is pretty nice, too. So all in all I’ve been really satisfied with Nextcloud and am really happy how dynamically it’s developing.


So, I got a Dell

Long. Overdue. Upgrade.

I bought a Dell XPS13 as my new portable workstation for Linux and GNOME. This is the model 9360 that is currently available as a Developer Edition with Ubuntu 16.04 LTS (project Sputnik, for those who follow). It satisfies everything I was looking for in a laptop: lightweight, a small 13" screen, 16 GB of RAM (at least), a Core i7 CPU (this is a Kaby Lake), and it must run Linux well.

But I didn't buy the Developer Edition. Price-wise, the Developer Edition is CAD$150 less than the Windows version in the "business" line and CAD$50 less than the Windows version in the "home" line (which only has that version). Exact same hardware. Last week it was on sale, CAD$500 off the regular price, so it didn't matter: I got one. I had delayed getting one for so long that this was the opportunity for a bargain. I double-checked, and unlike the previous Skylake-based model, which didn't have the same wifi card, this one really is the same hardware.

I got a surprise doorbell ring from the delivery person (the website didn't tell me it was en route).

Unboxing

The black box

It came in a box containing a cardboard wrap and a nice black box. The cardboard wrap holds the power brick and the AC cord. I'm genuinely surprised by the power adapter: it is small, smaller than what I'm used to. It is just odd that it doesn't come in the same box as the laptop (not a separate shipping box, just boxes in a box, shipped to you at once). The nice black box bears only the Dell logo and contains the laptop and two small booklets. One is for the warranty, the other is a getting-started guide. Interestingly, the guide mentions Ubuntu as well, which leads me to think it is the same for both preloads. This doesn't really matter in the end, but it shows the level of refinement of a high-end laptop which, until the last Apple refresh, was still more expensive than the Apple equivalent.

New laptop...

Fiddling with the BIOS

It took me more time to download the Fedora live ISO and flash it to a USB stick than the actual setup of the machine took, minus some fiddling. Once I had booted, Fedora 25 was installed in 20 minutes. I did wipe the internal SSD, by the way.

Once I figured out that F2 was the key to press at boot to get into the BIOS, I set it to boot off the USB drive. I also had to configure the disk controller as AHCI instead of RAID, as without that the Fedora installer didn't find the internal storage. Note that this might be more troublesome if you want to dual boot, but I didn't. Also, I don't know what the default setup is in the Developer Edition.

Dell XPS13 BIOS configuration

Nonetheless, the Fedora installer worked well with mostly default settings. HiDPI is handled, which means that I finally have a laptop with a "Retina Display".

Hands on

The laptop is small, with a HiDPI screen and a decent trackpad that works out of the box with two-finger scrolling. It has an aluminium shell with a non-glowing logo in the middle, a black inside with the keyboard, and a glass touch screen. The backlit keyboard has a row of function keys at the top, like it should be, a row that doubles as "media" buttons with the Fn key, or actually without it. Much like on a MacBook. The only difference I will have to get used to is that the Control key is actually in the corner. Like it used to be... (not sure whether that's the case on all Dells, but I remember hating not having Control in the corner, then getting used to it as if there were no option, and that was more than a decade ago). That will take some finger re-education, that's for sure. The whole laptop is a good size, and it is very compact. Light but solid.

As for connectivity, there is an SD card reader, two USB 3.0 ports (Type-A), and a USB 3.1 Type-C port that carries HDMI and Thunderbolt. HDMI looks like it needs a CAD$30 adapter, but it seems to be standard, so that might not be a huge problem.

Conclusion

I am happy with it. GNOME is beautiful in HiDPI, and it is all smooth.

March 09, 2017

Outreachy (GNOME) – Final

This is my last blog post of Outreachy. During this period, I finished the Chinese translation of GNOME 3.22 and completed most entries for GNOME 3.24. Because new entries keep appearing, I talked with my mentor Tong and decided to finish 3.24 after the string freeze and before the release date. I also improved the guidelines of the Chinese team, updating them on the basis of the latest English version and referencing material from the Free Software Localization Guide for Chinese (China).

Looking ahead, I will complete GNOME 3.24 with the other translators before March 22nd, and I have some other ideas for the Chinese team guidelines that I will try to implement. Besides that, I want to contribute to GNOME beyond l10n, and I would love to become a module maintainer in this community, which will take me deeper into GNOME. I am also trying to spread the word about Outreachy to others, especially Chinese girls.

To the applicants of round 14, or anyone who wants to apply: DON’T BE SHY. Show your abilities to your mentor as much as possible, and don’t be confined to the test she or he gave you. And if you are selected, one important thing is getting to know more people from your organization and the other interns; it will help you become fully integrated into the FOSS community.

Above all, this has been a wonderful experience. Outreachy gave me a great opportunity to learn and contribute to FOSS; thanks to all the people who helped me during this internship.


This work by Mandy Wang is licensed under a Creative Commons Attribution-ShareAlike 4.0 International


Grilo, Travis CI and Containers

Good news! We are finally using containers in Travis CI for Grilo! This is something I had been trying to do for a while, but we have achieved it now. I must say that a post Bassi wrote was the trigger for getting into this, so all my kudos to him!

In this post I’ll explain the history behind using Travis CI for Grilo continuous integration.

The origin

It all started one day when, exploring how GitHub integrates with other services, I discovered Travis CI. As you may know, Travis is a continuous integration service that checks every commit of a project on GitHub, and for each one it starts a testing process. Roughly, it starts a “virtual machine”1 running Ubuntu2, clones the repository at the commit under test, and runs a set of commands defined in the .travis.yml file located in the same project’s GitHub repository. Besides the steps to execute the tests, that file contains instructions for how to build the project, as well as which dependencies are required.

Note that before Travis, instead of a continuous integration system, Grilo had a ‘discontinuous’ one: we ran the checks manually, from time to time. So a commit could introduce a bug, and we wouldn’t realize it until the next check, which could happen much later. Thus, when I found Travis, I thought it would be a good idea to use it.

Setting up .travis.yml for Grilo was quite easy: in the before_install section we just use apt-get to install all the requirements: libglib2.0-dev, libxml2-dev, and so on. Then, in the script section, we run autogen.sh and make. If nothing fails, we consider the test successful. We do not run any specific tests because we don’t have any in Grilo.

For the plugins, the same steps: install dependencies, configure and build the plugins. In this case, we also run make check, so tests are always run. Again, if nothing fails Travis gives us a green light; otherwise, a red one. The status is shown on the main web page. Also, if the tests fail, an email is sent to the commit author.

Now, this has a small problem when testing the plugins: they require Grilo, and we were relying on the package provided by Ubuntu (it is listed in the dependencies). But what happens if the current commit uses a feature that was added in Grilo upstream but not released yet? One option could be cloning Grilo core, building and installing it before the plugins, and then compiling the plugins against that version. This means that for each commit to the plugins we would need to build two projects, adding a lot of complexity to the Travis file. So we decided to go with a different approach: just create a Grilo package with the required unreleased Grilo core version (only for testing) and put it in a PPA. Then we can add that PPA in our .travis.yml file and use that version instead.

A similar problem happens with Grilo itself: sometimes we require a specific version of a package that is not available in the Ubuntu version used by Travis (Ubuntu 12.04). So we need to backport it from a more recent Ubuntu version, and add it in the same PPA.

Summing up, our .travis.yml files just add the PPA, install the required dependencies, and build and test the code. You can take a look at the core and plugins files.

Travis and the Peter Pan syndrome

Time passed: we added more features and new plugins, fixed problems, added new requirements and bumped required versions… but Travis kept using Ubuntu 12.04. My first thought was “OK, maybe Travis wants to rely only on LTS releases”. So we would need to wait until the next LTS was released, and meanwhile backport everything we needed. Needless to say, doing this became more and more complicated as time passed. Sometimes backporting a single dependency requires backporting a lot of other dependencies, which can end up being a bloody nightmare. “Only for a while, until the new LTS is released”, I repeated to myself.

And good news! Ubuntu 14.04, the new LTS, was released. But you know what? Travis was not updated, and still used the old LTS! What the hell!

Moreover, two years after that release, Ubuntu 16.04 LTS also came out, and Travis still used 12.04!

At that point, backporting was so complex that I basically gave up, and continuous integration was essentially broken.

Travis and the containers

And we stayed in this broken state until I read that Travis was adding support for containers. “This is what we need.” But the truth is that even though I knew it would fix all the problems, I wasn’t very sure how to use the new feature. I tried several approaches, but I wasn’t happy with any of them.

Until Emmanuele Bassi published a post about using Meson in Epoxy. That post included an explanation of using Docker containers in Travis, which cleared up all the doubts I had and allowed me to finally move to containers. So again, thank you, Emmanuele!

What’s the idea? First, we created a Docker image that has all the requirements to build Grilo and the plugins preinstalled. We tagged this image as base.

When Travis is going to test Grilo, we instruct it to build a new container, based on base, that builds and installs Grilo. If everything goes fine, then our continuous integration is successful and Travis gives a green light; otherwise it gives a red light. Exactly as in the old approach.

But we don’t stop there. If everything goes fine, we push the new container to the Docker registry, tagging it as core. Why? Because this is the image we will use for building the plugins.

In the case of the plugins we do exactly the same as for the core, but this time, instead of relying on the base image, we rely on the core one. This way, we always use an up-to-date version of Grilo, so we don’t need to package it when introducing new features. Only if either Grilo or the plugins require a new dependency do we need to build a new base image and push it. That’s all.

Also, as a plus, instead of discarding the container that contains the plugins, we push it to Docker tagged as latest. So anyone can just pull it with Docker to get a container for running and testing Grilo and all the plugins.

If you are interested, you can take a look at the core and plugins files to see how it looks.

Oh! Last but not least: this also lets us test building with both Autotools and Meson, which are both supported in Grilo. Which is really awesome.

Summing up, moving to containers provides a lot of flexibility and makes things quite a bit easier.

Please leave any comments or questions on either Facebook or Google+.

  1. Call it a virtual machine, a container, whatever. In this context it doesn’t matter.

  2. Ubuntu 12.04 LTS, to be exact.

GNOME Photos Flatpaks

I joined the recent buzz around Flatpak manifests in GNOME, and gave the GNOME Photos builds some routine maintenance. The stable build has been updated to the latest 3.22.x point releases; and the nightly, which I had broken, is again tracking Git master.

To install the stable build:

$ flatpak remote-add --from gnome https://sdk.gnome.org/gnome.flatpakrepo
$ flatpak remote-add --from gnome-apps https://sdk.gnome.org/gnome-apps.flatpakrepo
$ flatpak install gnome-apps org.gnome.Photos

To install the nightly:

$ flatpak remote-add --from gnome-nightly https://sdk.gnome.org/gnome-nightly.flatpakrepo
$ flatpak remote-add --from gnome-apps-nightly https://sdk.gnome.org/gnome-apps-nightly.flatpakrepo
$ flatpak install gnome-apps-nightly org.gnome.Photos

They can be run directly from gnome-shell. However, if you have installed both stable and nightly builds, then you can specifically select one by:

$ flatpak run --branch=stable org.gnome.Photos
$ flatpak run --branch=master org.gnome.Photos

