October 21, 2018

Vala state: October 2018

Maintainability

I think maintainability could be improved by adding contributors’ commits to the history, not just the ones coming from the current maintainer. Actually, there are quite a lot of commits from authors outside the current ones that are missing from the history. Hopefully, with GNOME’s new GitLab instance, the history will reflect the real situation.

Behind the scenes, Vala has to improve its code base to adapt to new requirements, like developing a decent Vala Language Server and having more IDEs support Vala. At least for me, even gedit is productive enough to write software in Vala, thanks to the language itself: writing a class or an interface, and implementing interfaces, is ten times faster in Vala than in C.

Vala has received a lot of improvements in recent development cycles, like a new POSIX profile, ABI stability, C warning fixes and many others, to be covered in a separate article.

Look at Vala’s repository history and you will see more “feature” commits than “bindings” ones, contrary to the situation reported by Emmanuel. It would be a good idea to produce a graph of this, but the recent improvements speak for themselves: the situation has improved over the last release cycles.

Let’s look at the repository’s chart. It reports 2000 commits in the last 3 months, an average of 1.1 per day, from 101 contributors, as of October 19, 2018. I’m at 10 commits over the last year, so I’m far from being a core contributor, but I pushed for ABI stability to become a reality. My main contribution is communicating Vala’s advances and status.

Vala as a language

Vala has its roots as a GObject-oriented language. Nowadays it works for plain C too, because it can produce C code that does not depend on GLib/GObject.

Programming a C code base in an object-oriented style is easy in Vala, whether the C output uses GObject classes or not. You can produce compact classes, as described in Vala’s manual:

Compact classes, so called because they use less memory per instance, are the least featured of all class types. They are not registered with the GType system and do not support reference counting, virtual methods, or private fields. They do support unmanaged properties. Such classes are very fast to instantiate but not massively useful except when dealing with existing libraries. They are declared using the Compact attribute on the class.

In C, a compact class is a simple struct. Its fields can have accessors, so your users just call a get/set method to access them, and additional code can run when they do. The following code:

[Compact]
class Caller {
  public string _name;
  public string name {
    get {
      return _name;
    }
    set {
      _name = value;
    }
  }
  public void call (string number) {}
}

This is translated to:

#include <glib.h>
#include <glib-object.h>
#include <stdlib.h>
#include <string.h>

typedef struct _Caller Caller;
#define _g_free0(var) (var = (g_free (var), NULL))

struct _Caller {
	gchar* _name;
};



void caller_free (Caller * self);
static void caller_instance_init (Caller * self);
void caller_call (Caller* self,
                  const gchar* number);
Caller* caller_new (void);
const gchar* caller_get_name (Caller* self);
void caller_set_name (Caller* self,
                      const gchar* value);


void
caller_call (Caller* self,
             const gchar* number)
{
	g_return_if_fail (self != NULL);
	g_return_if_fail (number != NULL);
}


Caller*
caller_new (void)
{
	Caller* self;
	self = g_slice_new0 (Caller);
	caller_instance_init (self);
	return self;
}


const gchar*
caller_get_name (Caller* self)
{
	const gchar* result;
	const gchar* _tmp0_;
	g_return_val_if_fail (self != NULL, NULL);
	_tmp0_ = self->_name;
	result = _tmp0_;
	return result;
}


void
caller_set_name (Caller* self,
                 const gchar* value)
{
	gchar* _tmp0_;
	g_return_if_fail (self != NULL);
	_tmp0_ = g_strdup (value);
	_g_free0 (self->_name);
	self->_name = _tmp0_;
}


static void
caller_instance_init (Caller * self)
{
}


void
caller_free (Caller * self)
{
	_g_free0 (self->_name);
	g_slice_free (Caller, self);
}

Can you spot some improvements in the C code generation?

As you can see, Vala generates the code to create your struct, to access your fields while running custom code, and to initialize and free your struct. Your C users get the clean and easy-to-use API for your struct that they expect.

If you also use your compact class from Vala, instances will be created and freed correctly and automatically.
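To illustrate (this usage snippet is mine, not from the post), creating and using the compact class from Vala looks like this, and the compiler inserts the call to caller_free () for you:

void main () {
  var caller = new Caller ();    // calls caller_new ()
  caller.name = "Alice";         // calls caller_set_name (), running your setter code
  print ("%s\n", caller.name);   // calls caller_get_name ()
}   // caller goes out of scope here, so valac emits caller_free ()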

So,

Less code, more features.

Vala pushes complex projects forward

GObject/GInterface is a powerful tool, but it is really hard to use in C when you have to write and implement a complex hierarchy of classes and interfaces. W3C DOM4 is a clear example: sure, it is possible to implement it in C, but it is a hundred times easier to implement and maintain in Vala.

GXml and GSVG are examples of how complex a hierarchy can be, and of how quickly one can be implemented.

GXml’s GomElement hierarchy shows just how many interfaces it has to implement to be a DOM4 Element.

Have you ever tried to write a lot of small, feature-specific interfaces in C and implement all of them in a single GObject class?

The W3C SVG 1.1 specification has a lot of interfaces to write and implement in order to have a fully compliant object class.

Implementing GSVG in Vala made SVG 1.1 possible in a short time; but it doesn’t stop there, because maintaining this complex hierarchy, like turning a final class into a derivable one, or moving methods from a final class to a parent class along with their interface implementations, is a hundred times easier to do in Vala than in plain C/GObject.
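To make this concrete, here is a minimal sketch of my own (the interfaces are illustrative, not GSVG’s actual ones) showing one class implementing several interfaces; valac generates all the GType registration and interface vtable wiring you would otherwise write by hand in C:

public interface Named : GLib.Object {
  public abstract string name { get; set; }
}

public interface Callable : GLib.Object {
  public abstract void call (string number);
}

// One class implements both interfaces in a few lines
public class Contact : GLib.Object, Named, Callable {
  public string name { get; set; }
  public void call (string number) {
    print ("Calling %s at %s\n", name, number);
  }
}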

Vala makes complex interface hierarchy implementation easy.

While the GSVG implementation is not complete yet, Vala is the right tool for this kind of project.

Vala libraries are Introspectable and usable in other languages

Yes, C GObject classes are introspectable, but a lot of annotations are required to produce the GIR/typelib files usable from Python or JavaScript.

Vala code produces introspection annotations as you write. Producing the GIR XML format is just a matter of using the --gir switch. Errors and warnings in Vala are focused on keeping your classes consistently introspectable; for example, you can’t return null from a method whose return type is declared non-null.
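As a minimal sketch (the library name and class below are illustrative), the command in the comment is roughly what it takes to get a GIR file out of a Vala library:

// Build a shared library and its GIR with something like:
//   valac --library foo --gir Foo-1.0.gir -X -shared -X -fPIC -o libfoo.so foo.vala
namespace Foo {
  public class Greeter : GLib.Object {
    // The non-null return type ends up as a non-nullable annotation
    // in the generated GIR; returning null here would not compile.
    public string greet (string name) {
      return "Hello, %s".printf (name);
    }
  }
}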

October 19, 2018

Vala Scripting?

Vala Language Server

I’m working on a library called GNOME Vala Language Server (GVls), as a proof of concept for a server providing autocompletion, syntax highlighting and that kind of stuff, but I found something interesting by accident.

I’ve added an interface called Client (that may not be its final name), which lets you locate a symbol in an already parsed file, along with some goodies from other interfaces and implementations that I’ll talk about in another article.

The Accident

While trying to create an in-line parser that detects whether the current word is an object or some other kind of symbol, so that in the near future it can propose a list of properties and methods or a list of a method’s parameters, I found by accident that Client can now parse in-line Vala code: you push text to the Client object, and a Server (the object that parses and manages the symbol collection) parses it and simply adds new symbols to its collection, without any effort and without parse errors for incomplete code!

So now, if you push the following text:

class Push {\n

the Server finds a Push symbol representing a new Vala class. Inserting a method:

public void callme () {

You can find in the Server a new symbol for the Push.callme method. No errors at all, even though the code is not complete yet.
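In code, the idea looks something like this hypothetical sketch; the class and method names are mine for illustration and are not GVls’ actual API:

void main () {
  // Hypothetical API names, for illustration only
  var server = new GVls.Server ();
  var client = new GVls.Client (server);

  client.push ("class Push {\n");             // incomplete code, no error
  client.push ("public void callme () {\n");  // still incomplete, still no error

  // The Server's symbol collection now holds Push and Push.callme
  var sym = server.find_symbol ("Push.callme");
  print ("%s\n", sym.name);
}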

Vala Scripting?

Well, not for now. But with this work the doors are opening. That includes Genie, because its syntax is more suitable for scripting than Vala’s. Anyway, let’s leave that to time, and to some help from others, of course.

GNOME Foundation Hackfest 2018

This week, the GNOME Foundation Board of Directors met at the Collabora office in Cambridge, UK, for the second annual Foundation Hackfest. We were also joined by the Executive Director, Neil McGovern, and Director of Operations, Rosanna Yuen. This event was started by last year’s board and is a great opportunity for the newly-elected board to set out goals for the coming year and get some uninterrupted hacking done on policies, documents, etc. While it’s fresh in our mind, we wanted to tell you about some of the things we have been working on this week and what the community can hope to see in the coming months.

Wednesday: Goals

On Wednesday we set out to define the overall goals of the Foundation, so we could focus our activities for the coming years, ensuring that we were working on the right priorities. Neil helped to facilitate the discussion using the Charting Impact process. With that input, we went back to the purpose of the Foundation and mapped that to ten and five year goals, making sure that our current strategies and activities would be consistent with reaching those end points. This is turning out to be a very detailed and time-consuming process. We have made a great start, and hope to have something we can share for comments and input soon. The high level 10-year goals we identified boiled down to:

  • Sustainable project and foundation
  • Wider awareness and mindshare – being a thought leader
  • Increased user base

As we looked at the charter and bylaws, we identified a long-standing issue which we need to solve — there is currently no formal process to cover the “scope” of the Foundation in terms of which software we support with our resources. There is the release team, but that is only a subset of the software we support. We have some examples such as GIMP which “have always been here”, but at present there is no clear process to apply or be included in the Foundation. We need a clear list of projects that use resources such as CI, or have the right to use the GNOME trademark for the project. We have a couple of similar proposals from Allan Day and Carlos Soriano for how we could define and approve projects, and we are planning to work with them over the next couple of weeks to make one proposal for the board to review.

Thursday: Budget forecast

We started the second day with a review of the proposed forecast from Neil and Rosanna, because the Foundation’s financial year starts in October. We have policies in place to allow staff and committees to spend money against their budget without further approval being needed, which means that with no approved budget, it’s very hard for the Foundation to spend any money. The proposed budget was based off the previous year’s actual figures, with changes to reflect the increased staff headcount, increased spend on CI, increased staff travel costs, etc, and ensure after the year’s spending, we follow the reserves policy to keep enough cash to pay the foundation staff for a further year. We’re planning to go back and adjust a few things (internships, marketing, travel, etc) to make sure that we have the right resources for the goals we identified.

We had some “hacking time” in smaller groups to re-visit and clarify various policies, such as the conference and hackfest proposal/approval process, travel sponsorship process and look at ways to support internationalization (particularly to indigenous languages).

Friday: Foundation Planning

The Board started Friday with a board-only (no staff) meeting to make sure we were aligned on the goals that we were setting for the Executive Director during the coming year, informed by the Foundation goals we worked on earlier in the week. To avoid the “seven bosses” problem, there is one board member (myself) responsible for managing the ED’s priorities and performance. It’s important that I take advantage of the opportunity of the face to face meeting to check in with the Board about their feedback for the ED and things I should work together with Neil on over the coming months.

We also discussed a related topic, which is the length of the term that directors serve on the Foundation Board. With 7 staff members, the Foundation needs consistent goals and management from one year to the next, and the time demands on board members should be reduced from previous periods where the Foundation hasn’t had an Executive Director. We want to make sure that our “ten year goals” don’t change every year and undermine the strategies that we put in place and spend the Foundation resources on. We’re planning to change the Board election process so that each director has a two year term, so half of the board will be re-elected each year. This also prevents the situation where the majority of the Board is changed at the same election, losing continuity and institutional knowledge, and taking months for people to get back up to speed.

We finished the day with a formal board meeting to approve the budget, and with more hack time on various policies (and this blog!). Thanks to Collabora for use of their office space, food, and snacks – and thanks to my fellow Board members and the Foundation’s wonderful and growing staff team.

How to cite bibliography ISO/IEC standards

For my final postgraduate work I’m collecting a bibliography, and as the main work is around ISO/IEC documents I investigated how to make a correct bibliography entry for these, which I realized is not very well known, as you can check in this question on TeX.StackExchange.com.

I finally chose a style, which I show here as an example:

  • BibTeX:
    @techreport{iso_central_secretary_systems_2016,
      address = {Geneva, CH},
      type = {Standard},
      title = {Systems and software engineering -- {Lifecycle} profiles for {Very} {Small} {Entities} ({VSEs}) -- {Part} 1: {Overview}},
      shorttitle = {{ISO}/{IEC} {TR} 29110-1:2016},
      url = {https://www.iso.org/standard/62711.html},
      language = {en},
      number = {ISO/IEC TR 29110-1:2016},
      institution = {International Organization for Standardization},
      author = {{ISO Central Secretary}},
      year = {2016}
    }
    
  • RIS:
      TY  - RPRT
      TI  - Systems and software engineering -- Lifecycle profiles for Very Small Entities (VSEs) -- Part 1: Overview
      AU  - ISO Central Secretary
      T2  - ISO/IEC 29110
      CY  - Geneva, CH
      PY  - 2016
      LA  - en
      M3  - Standard
      PB  - International Organization for Standardization
      SN  - ISO/IEC TR 29110-1:2016
      ST  - ISO/IEC TR 29110-1:2016
      UR  - https://www.iso.org/standard/62711.html
      ER  - 
    

I’ve been using this style extensively on a development website, http://29110.olea.org/. You can compare the details with the official info.

Both entries have been generated using Zotero.

October 18, 2018

The GNOME Infrastructure is moving to Openshift

During GUADEC 2018 we announced one of the core plans for this and the coming year: moving as many GNOME web applications as possible to the GNOME Openshift instance we architected, deployed and configured back in July. Moving to Openshift will allow us to:

  1. Save resources, as we’re deprecating and decommissioning VMs that only run a single service
  2. Allow app maintainers to use the most recent release of Python, Ruby, or their preferred framework or programming language, without being tied to the release RHEL ships with
  3. Add an additional layer of security: containers
  4. Allow app owners to modify and publish content without requiring external help
  5. Increase app redundancy, scalability and availability
  6. Integrate directly with any VCS that ships webhook support, as we can trigger the Openshift-provided endpoint whenever a commit occurs to generate a new build / deployment

Architecture

The cluster consists of 3 master nodes (controllers, API, etcd), 4 compute nodes and 2 infrastructure nodes (internal Docker registry, cluster console, haproxy-based routers, SSL edge termination). For persistent storage we’re currently making good use of the Red Hat Gluster Storage (RHGS) product, which Red Hat is kindly sponsoring together with the Openshift subscriptions. For any app that might require a database, we have an external (i.e. not managed as part of Openshift), fully redundant, synchronous, multi-master MariaDB cluster based on Galera (2 data nodes, 1 arbiter).

The release we’re currently running is the recently released 3.11, which comes with the so-called “Cluster Console”, a web UI that allows you to manage a wide set of the underlying objects that were previously only available via the oc CLI client, and with a set of monitoring and metrics tools (Prometheus, Grafana) that can be accessed as part of the Cluster Console (Grafana dashboards showing how the cluster is behaving) or externally via their own route.

SSL Termination

SSL termination currently happens on the edge routers via a wildcard certificate for the gnome.org and guadec.org zones. The process of renewing these certificates is automated via Puppet, as we’re using Let’s Encrypt behind the scenes (domain verification for the wildcard certs happens at the DNS level; we built specific hooks to make that happen via the getssl tool). The backend connections follow two different paths:

  1. edge termination with no re-encryption in case of pods containing static files (no logins, no personal information ever entered by users): in this case the traffic is encrypted between the client and the edge routers, plain text between the routers and the pods (as they’re running on the same local broadcast domain)
  2. re-encrypt for any service that requires authentication or personal information to be entered for authorization: in this case the traffic is encrypted from end to end

App migrations

App migrations have already started: we’ve successfully migrated and deprecated a set of GUADEC-related web applications, specifically:

  1. $year.guadec.org, where $year spans from 2013 to 2019
  2. wordpress.guadec.org has been deprecated

We’re currently working on migrating the GNOME Paste website, making sure we also replace the current unmaintained software with a supported one. Next on the list will be the WordPress-based websites such as www.gnome.org and blogs.gnome.org (WordPress Network). I’d like to thank the GNOME Websites Team and specifically Tom Tryfonidis for taking the time to migrate existing assets to the new platform as part of the GNOME websites refresh program.

Initial thoughts on MongoDB's new Server Side Public License

MongoDB just announced that they were relicensing under their new Server Side Public License. This is basically the Affero GPL except with section 13 largely replaced with new text, as follows:

If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License. Making the functionality of the Program or modified version available to third parties as a service includes, without limitation, enabling third parties to interact with the functionality of the Program or modified version remotely through a computer network, offering a service the value of which entirely or primarily derives from the value of the Program or modified version, or offering a service that accomplishes for users the primary purpose of the Software or modified version.

“Service Source Code” means the Corresponding Source for the Program or the modified version, and the Corresponding Source for all programs that you use to make the Program or modified version available as a service, including, without limitation, management software, user interfaces, application program interfaces, automation software, monitoring software, backup software, storage software and hosting software, all such that a user could run an instance of the service using the Service Source Code you make available.


MongoDB admit that this license is not currently open source in the sense of being approved by the Open Source Initiative, but say: “We believe that the SSPL meets the standards for an open source license and are working to have it approved by the OSI.”

At the broadest level, AGPL requires you to distribute the source code to the AGPLed work[1] while the SSPL requires you to distribute the source code to everything involved in providing the service. Having a license place requirements around things that aren't derived works of the covered code is unusual but not entirely unheard of - the GPL requires you to provide build scripts even if they're not strictly derived works, and you could probably make an argument that the anti-Tivoisation provisions of GPL3 fall into this category.

A stranger point is that you're required to provide all of this under the terms of the SSPL. If you have any code in your stack that can't be released under those terms then it's literally impossible for you to comply with this license. I'm not a lawyer, so I'll leave it up to them to figure out whether this means you're now only allowed to deploy MongoDB on BSD because the license would require you to relicense Linux away from the GPL. This feels sloppy rather than deliberate, but if it is deliberate then it's a massively greater reach than any existing copyleft license.

You can definitely make arguments that this is just a maximalist copyleft license, the AGPL taken to extreme, and therefore it fits the open source criteria. But there's a point where something is so far from the previously accepted scenarios that it's actually something different, and should be examined as a new category rather than already approved categories. I suspect that this license has been written to conform to a strict reading of the Open Source Definition, and that any attempt by OSI to declare it as not being open source will receive pushback. But definitions don't exist to be weaponised against the communities that they seek to protect, and a license that has overly onerous terms should be rejected even if that means changing the definition.

In general I am strongly in favour of licenses ensuring that users have the freedom to take advantage of modifications that people have made to free software, and I'm a fan of the AGPL. But my initial feeling is that this license is a deliberate attempt to make it practically impossible to take advantage of the freedoms that the license nominally grants, and this impression is strengthened by it being something that's been announced with immediate effect rather than something that's been developed with community input. I think there's a bunch of worthwhile discussion to have about whether the AGPL is strong and clear enough to achieve its goals, but I don't think that this SSPL is the answer to that - and I lean towards thinking that it's not a good faith attempt to produce a usable open source license.

(It should go without saying that this is my personal opinion as a member of the free software community, and not that of my employer)

[1] There's some complexities around GPL3 code that's incorporated into the AGPLed work, but if it's not part of the AGPLed work then it's not covered


October 17, 2018

Toward Community-Oriented, Public & Transparent Copyleft Policy Planning

[ A similar version was crossposted on Conservancy's blog. ]

More than 15 years ago, Free, Libre, and Open Source Software (FLOSS) community activists successfully argued that licensing proliferation was a serious threat to the viability of FLOSS. We convinced companies to end the era of “vanity” licenses. Different charities — from the Open Source Initiative (OSI) to the Free Software Foundation (FSF) to the Apache Software Foundation — all agreed we were better off with fewer FLOSS licenses. We de-facto instituted what my colleague Richard Fontana once called the “Rule of Three” — assuring that any potential FLOSS license should be met with suspicion unless (a) the OSI declares that it meets their Open Source Definition, (b) the FSF declares that it meets their Free Software Definition, and (c) the Debian Project declares that it meets their Debian Free Software Guidelines. The work for those organizations quelled license proliferation from radioactive threat to safe background noise. Everyone thought the problem was solved. Pointless license drafting had become a rare practice, and updated versions of established licenses were handled with public engagement and close discussion with the OSI and other license evaluation experts.

Sadly, the age of license proliferation has returned. It's harder to stop this time, because this isn't merely about corporate vanity licenses. Companies now have complex FLOSS policy agendas, and those agendas are not to guarantee software freedom for all. While it is annoying that our community must again confront an old threat, we are fortunate the problem is not hidden: companies proposing their own licenses are now straightforward about their new FLOSS licenses' purposes: to maximize profits.

Open-in-name-only licenses are now common, but seem like FLOSS licenses only to the most casual of readers. We've succeeded in convincing everyone to “check the OSI license list before you buy”. We can therefore easily dismiss licenses like the Commons Clause merely by stating they are non-free/non-open-source and urging the community to avoid them. But, the next stage of tactics has begun, and they are harder to combat. What happens when for-profit companies promulgate their own hyper-aggressive (quasi-)copyleft licenses that seek to pursue the key policy goal of “selling proprietary licenses” over “defending software freedom”? We're about to find out, because, yesterday, MongoDB declared themselves the arbiter of what “strong copyleft” means.

Understanding MongoDB's Business Model

To understand the policy threat inherent in MongoDB's so-called “Server Side Public License, Version 1”, one must first understand the fundamental business model for MongoDB and companies like them. These companies use copyleft for profit-making rather than freedom-protecting. First, they require full control (either via ©AA or CLA) of all copyrights in the work, and second, they offer two independent lines of licensing. Publicly, they provide the software under the strongest copyleft license available. Privately, the same (or secretly improved) versions of the software are available under fully proprietary terms. In theory, this could be merely selling exceptions: a benign manner of funding more Free Software code — giving the proprietary option only to those who request it. In practice — in all examples that have been even mildly successful (such as MongoDB and MySQL) — this mechanism serves as a warped proprietary licensing shake-down: “Gee, it looks like you're violating the copyleft license. That's a shame. I guess you just need to abandon the copyleft version and buy a proprietary license from us to get yourself out of this jam, since we don't plan to reinstate any lost rights and permissions under the copyleft license.” In other words, this structure grants exclusive and dictatorial power to a for-profit company as the arbiter of copyleft compliance. Indeed, we have never seen any of these companies follow or endorse the Principles of Community-Oriented GPL Enforcement. While it has made me unpopular with some, I still make no apologies that I have since 2004 consistently criticized this “proprietary relicensing” business model as “nefarious”, once I started hearing regular reports that MySQL AB (now Oracle) asserts GPL violations against compliant uses merely to scare users into becoming “customers”. Other companies, including MongoDB, have since emulated this activity.

Why Seek Even Stronger Copyleft?

The GNU Affero General Public License (AGPL) has done a wonderful job defending the software freedom of community-developed projects like Mastodon and Mediagoblin. So, we should answer with skepticism a solitary for-profit company coming forward to claim that “Affero GPL has not resulted in sufficient legal incentives for some of the largest users of infrastructure software … to participate in the community. Many open source developers are struggling with a similar reality”. If the last sentence were on Wikipedia, I'd edit it to add a Citation Needed tag, as I know of no multi-copyright-held or charity-based AGPL'd project that has “struggled with this reality”. In fact, it's only a “reality” for those that engage in proprietary relicensing. Eliot Horowitz, co-founder of MongoDB and promulgator of their new license, neglects to mention that.

The most glaring problem with this license, which Horowitz admits in his OSI license-review list post, is that there was no community drafting process. Instead, a for-profit company, whose primary goal is to use copyleft as a weapon against the software-sharing community for the purpose of converting that “community” into paying customers, published this license as a fait accompli without prior public discussion of the license text.

If this action were an isolated incident by one company, ignoring it is surely the best response. Indeed, I urged everyone to simply ignore the Commons Clause. Now, we see a repackaging of the Commons Clause into a copyleft-like box (with reuse of Commons Clause's text such as “whose value derives, entirely or substantially, from the functionality of the Software”). Since both licenses were drafted in secret, we cannot know if the reuse of text was simply because the same lawyer was employed to write both, or if MongoDB has joined a broader and more significant industry-wide strategy to replace existing FLOSS licensing with alternatives that favor businesses over individuals.

The Community Creation Process Matters

Admittedly, the history of copyleft has been one of slowly evolving community-orientation. GPLv1 and GPLv2 were drafted in private, too, by Richard Stallman and FSF's (then) law firm lawyer, Jerry Cohen. However, from the start, the license steward was not Stallman himself, nor the law firm, but the FSF, a 501(c)(3) charity dedicated to serve the public good. As such, the FSF made substantial efforts in the GPLv3 process to reorient the drafting of copyleft licenses as a public policy and legislative process. Like all legislative processes, GPLv3 was not ideal — and I was even personally miffed to be relegated to the oft-ignored “GPLv3 Discussion Committee D” — but the GPLv3 process was undoubtedly a step forward in FLOSS community license drafting. Mozilla Corporation made efforts for community collaboration in redrafting the MPL, and specifically included the OSI and the FSF (arbiters of the Open Source Definition and Free Software Definition (respectively)) in MPL's drafting deliberations. The modern acceptable standard is a leap rather than a step forward: a fully public, transparent drafting process with a fully public draft repository, as the copyleft-next project has done. I think we should now meet with utmost suspicion any license that does not use copyleft-next's approach of “running licensing drafting as a Free Software project”.

I was admittedly skeptical of that approach at first. What I have seen in the six years since Richard Fontana started copyleft-next is that, simply put, the key people who are impacted most fundamentally by a software license are most likely to be aware of, and engage in, a process if it is fully public, community-oriented, and uses community tools, like Git.

Like legislation, the policies outlined in copyleft licenses impact the general public, so the general public should be welcomed to the drafting. At Conservancy, we don't draft our own licenses0, so our contracts with software developers and agreements with member projects state that the licenses be both “OSI-approved Open Source” and “FSF-approved GPL-compatible Free Software”. However, you can imagine that Conservancy has a serious vested interest in what licenses are ultimately approved by the OSI and the FSF. Indeed, with so much money flowing to software developers bound by those licenses, our very charitable mission could be at stake if OSI and the FSF began approving proprietary licenses as Open, Free, and/or GPL-compatible. I want to therefore see license stewards work, as Mozilla did, to make the vetting process easier, not harder, for these organizations.

A community drafting process allows everyone to vet the license text early and often, to investigate the community and industry impact of the license, and to probe the license drafter's intent through the acceptance and rejection of proposed modified text (ideally through a DVCS). With for-profit actors seeking to gain policy control of fundamental questions such as “what is strong copyleft?”, we must demand full drafting transparency and frank public discourse.

The Challenge Licensing Arbiters Face

OSI, FSF, and Debian have a huge challenge before them. Historically, the FSF was the only organization who sought to push the boundary of strong copyleft. (Full disclosure: I created the Affero clause while working for the FSF in 2002, inspired by Henry Poole's useful and timely demands for a true network services copyleft.) Yet, the Affero clause was itself controversial. Many complained that it changed the fundamental rules of copyleft. While “triggered only on distribution, not modification” was a fundamental rule of the regular GPL, we as a community — over time and much public debate — decided the Affero clause is a legitimate copyleft, and AGPL was declared Open Source by OSI and DFSG-free by Debian.

That debate was obviously framed by the FSF. The FSF, due to public pressure, compromised by leaving the AGPL as an indefinite fork of the GPL (i.e., the FSF did not include the Affero clause in the plain GPL). While I personally lobbied (from GPLv3 Discussion Committee D and elsewhere) for the merger of AGPL and GPL during the GPLv3 drafting process, I respect the decision of the FSF, which was informed not by my one voice, but the voices of the entire community.

Furthermore, the FSF is a charity, chartered to serve the public good and the advancement of software freedom for users and developers. MongoDB is a for-profit company, chartered to serve the wallets of its owners. While MongoDB (like any other company) should be welcomed on equal footing to individuals, charities, and trade-associations to the debate about the future of copyleft, we should not accept their active framing of that debate. Submitting this license to OSI for approval without any public community discussion, and without any discussion whatsoever with the key charities in the community, is unacceptable. The OSI should now adopt a new requirement for license approval — namely, that licenses without a community-oriented drafting process should be rejected for the meta-reason of “non-transparent drafting”, regardless of their actual text. This will have the added benefit of forcing future license drafters to come to OSI, on their public mailing lists, before the license is finalized. That will save OSI the painstaking work of walking back bad license drafts, which has in recent years consumed much expert time by OSI's volunteers.

Welcoming All To Public Discussion

Earlier this year, Conservancy announced plans to host and organize the first annual CopyleftConf. Conservancy decided to do this because Conservancy seeks to create a truly neutral, open, friendly, and welcoming forum for discussion about the past and future of copyleft as a strategy for defending software freedom. We had no idea when Karen and I first mentioned the possibility of running CopyleftConf (during the Organizers' Panel at the end of the Legal and Policy DevRoom at FOSDEM 2018 in February 2018) that multiple companies would come forward and seek to control the microphone on the future of copyleft. Now that MongoDB has done so, I'm very glad that the conference is already organized and on the calendar before they did so.

Despite my criticisms of MongoDB, I welcome Eliot Horowitz, Heather Meeker (the law firm lawyer who drafted MongoDB's new license and the Commons Clause), or anyone else who was involved in the creation of MongoDB's new license to submit a talk. Conservancy will be announcing soon the independent group of copyleft experts (and critics!) who will make up the Program Committee and will independently evaluate the submissions. Even if a talk is rejected, I welcome rejected proposers to attend and speak about their views in the hallway track and the breakout sessions.

One of the most important principles in copyleft policy that our community has learned is that commercial, non-commercial, and individual actors should have equal footing with regard to rights assured by the copyleft licenses themselves. There is no debate about that; we all agree that copyleft codebases become meeting places for hobbyists, companies, charities, and trade associations to work together toward common goals and in harmony and software freedom. With this blog post, I call on everyone to continue on the long road to applying that same principle to the meta-level of how these licenses are drafted and how they are enforced. While we have done some work recently on the latter, not enough has been done on the former. MongoDB's actions today give us an opportunity to begin that work anew.


0 While Conservancy does not draft any main FLOSS license texts, Conservancy does help with the drafting of additional permissions upon the request of our member projects. Note that additional permissions (sometimes called license exceptions) grant permission to engage in activities that the main license would otherwise prohibit. As such, by default, additional permissions can only make a copyleft license weaker, never stronger.

October 16, 2018

2018-10-16 Tuesday

  • Mail chew; administration & form filling, built ESC stats.
  • The increasing volume of sextortion SPAM I get is quite extraordinary and rather liberating given its comic lack of any basis. Interesting to see the large increase in purchase-order, payment related scams too - thank goodness our finance department are savvy (and run Linux Desktops).
  • Joined the LOT Network today; was extraordinarily impressed and pleased to see Microsoft's unambiguously good move of joining OIN - was wisely preceded by joining LOT - to avoid concerns about indirect trolling. Already had a great deal of respect for Microsoft's engineers (having worked with some in the deepish past), now it seems they have commercial people to match. If you read the MS Legal VP's quote I linked - it is interesting that he traces their open-ness back to open-sourcing ASP.Net in 2008. Miguel's Mono project combined with his positive and constructive engagement with many good people inside Microsoft seems to have yielded much sweet fruit in the end. Which is of course not to say that the bad-cops did not play a role too, but - anyhow; most pleased at the positive outcome for all.

October 15, 2018

Making a first contribution in Outreachy usability testing

If you want to join us in GNOME usability testing as part of the upcoming cycle in Outreachy, you'll need to make a first contribution as part of your application process. Every project in Outreachy asks for a first contribution; this is a requirement in Outreachy.

Don't make too big of a deal about your first contribution in usability testing. We don't expect interns to know much about usability testing as they enter the internship. Throughout the internship, you'll learn about usability testing. So for this first contribution, we set a low bar.

GNOME is a desktop system, so the GNOME programs are part of the desktop system. If you have installed a Linux distribution with the GNOME desktop, you can pick one or two GNOME programs that come installed as part of GNOME, and do a usability test on them. (If you haven't installed a Linux distribution, you can easily do so inside a PC emulator or virtual machine such as VirtualBox and install a desktop-focused Linux distribution like Fedora Workstation.)

Some GNOME programs you might pick from include:

  • gedit (GNOME text editor)
  • GNOME Web
  • GNOME Files
  • GNOME Software
  • GNOME Notes
  • Evince (GNOME PDF viewer)
  • GNOME Calendar
  • GNOME Photos

Or if you have a favorite GNOME program, you can do a usability test on that too.

A short usability test of 5 scenario tasks (I suggest 3 tasks for one program, and 2 tasks for another program) and 5 testers is a good first contribution. (Five testers may sound like a lot, but you can do your usability test with friends and family, which is completely valid.)

Here are some resources showing some usability test results with the above programs:


You can search for "scenario tasks" on the blog to help you write scenario tasks. Some of the blog articles there actually list some scenario tasks you can use, such as Ciarrai's first contribution from 2016 (skip ahead to Appendix/Scenarios for the scenario tasks Ciarrai used in their usability test contribution; you can just copy from there and that's fine). If you need help, let me know.

When you've done your usability test contribution, email me and we can arrange a conversation or discuss over email.

Sample Scenario Tasks

Here are six scenario tasks for gedit and four scenario tasks for GNOME Files. Feel free to pick and choose the tasks that you want to use in your usability test first contribution.

Note that the first scenario task for GNOME Files requires creating a file before the test. Don't forget to do that.

gedit (GNOME text editor)

1. You need to type up a quick note for yourself, briefly describing an event that you want to remember later. You start the Gedit text editor (this has been done for you).

Please type the following short paragraphs into the text editor:
Note for myself:

Jenny and I decided to throw a surprise party for Jeff, for Jeff's birthday. We'll get together at 5:30 on Friday after work, at Jeff's place. Jenny arranged for Jeff's roommate to keep him away until 7:00.

We need to get the decorations up and music ready to go by 6:30, when people will start to arrive. We asked everyone to be there no later than 6:45.
Save this note as party reminder.txt in the Documents folder.

2. After typing the note, you realize that you got a few details wrong. Please make these edits:

  • In the first paragraph, change Friday to Thursday.
  • Change 5:30 to 5:00.
  • Move the entire sentence Jenny arranged for Jeff's roommate to keep him away until 7:00. to the end of the second paragraph, after no later than 6:45.

When you are done, please save the file. You can use the same filename.

3. Actually, Jeff prefers to go by Geoff, and Jenny prefers Ginny. Please replace every occurrence of Jeff with Geoff, and all instances of Jenny with Ginny.

When you are done, please save the file. You can use the same filename.

4. You'd like to make a copy of the note, using a different name that you can find more easily later. Please save a copy of this note as Geoff surprise party.txt in the Documents folder.

For the purposes of this exercise, you do not need to delete the original file.

5. You decide the text in the editor is difficult to read, and you would prefer to use a different style of text. Please change the text to DejaVu Sans Mono, 12 point.

6. You decide the black-on-white text is too bright for your eyes, and you would prefer to use different colors. Please change the colors to the Oblivion color scheme.

GNOME Files

1. Yesterday, you re-organized your files and you don’t remember where you saved the copy of one of the articles you were working on. Please search for a file named The Hobbit.

2. Files and folders are usually displayed as icons, but you can display them in other ways too. Change how the file manager displays files and folders, to show them as a list.

3. You don’t have your glasses with you, so you aren’t able to read the names of the files and folders very well. Please make the text bigger, so it is easier to read.

4. Please search for a folder or a file that you recently worked on, maybe this will help you find the lost article.

2018-10-15 Monday

  • Interview; mail chew, sync with Kendy & Eloy. Lunch with J.
  • Tried to restore the missing Mattermost to IRC bridge that just got shut down; horrors. The lack of good nick / tab-autocompletion - plus the insistence on requiring '@' before completing (which ambiguates itself via other unusual fields) makes it deeply cumbersome to use for conversation. Surely it is possible to learn to communicate effectively in a much more constrained fashion - say with a lump of coal in your mouth; but why ? I wonder if RocketChat copes with typing the 1st char of a nick, and tab to dis-ambiguate it.

Flatpaks, sandboxes and security

Last week the Flatpak community woke to the “news” that we are making the world a less secure place and we need to rethink what we’re doing. Personally, I’m not sure this is a fair assessment of the situation. The “tl;dr” summary is: Flatpak confers many benefits besides the sandboxing, and even looking just at the sandboxing, improving app security is a huge problem space and so is a work in progress across multiple upstream projects. Much of what has been achieved so far already delivers incremental improvements in security, and we’re making solid progress on the wider app distribution and portability problem space.

Sandboxing, like security in general, isn’t a binary thing – you can’t just say because you have a sandbox, you have 100% security. Like having two locks on your front door, two front doors, or locks on your windows too, sensible security is about defense in depth. Each barrier that you implement precludes some invalid or possibly malicious behaviour. You hope that in total, all of these barriers would prevent anything bad, but you can never really guarantee this – it’s about multiplying together probabilities to get a smaller number. A computer which is switched off, in a locked faraday cage, with no connectivity, is perfectly secure – but it’s also perfectly useless because you cannot actually use it. Sandboxing is very much the same – whilst you could easily take systemd-nspawn, Docker or any other container technology of choice and 100% lock down a desktop app, you wouldn’t be able to interact with it at all.

Network services have incubated and driven most of the container usage on Linux up until now but they are fundamentally different to desktop applications. For services you can write a simple list of permissions like, “listen on this network port” and “save files over here” whereas desktop applications have a much larger number of touchpoints to the outside world which the user expects and requires for normal functionality. Just thinking off the top of my head you need to consider access to the filesystem, display server, input devices, notifications, IPC, accessibility, fonts, themes, configuration, audio playback and capture, video playback, screen sharing, GPU hardware, printing, app launching, removable media, and joysticks. Without making holes in the sandbox to allow access to these in to your app, it either wouldn’t work at all, or it wouldn’t work in the way that people have come to expect.

What Flatpak brings to this is understanding of the specific desktop app problem space – most of what I listed above is to a greater or lesser extent understood by Flatpak, or support is planned. The Flatpak sandbox is very configurable, allowing the application author to specify which of these resources they need access to. The Flatpak CLI asks the user about these during installation, and we provide the flatpak override command to allow the user to add or remove these sandbox escapes. Flatpak has introduced portals into the Linux desktop ecosystem, which we’re really pleased to be sharing with snap since earlier this year, to provide runtime access to resources outside the sandbox based on policy and user consent. For instance, document access, app launching, input methods and recursive sandboxing (“sandbox me harder”) have portals.

The starting security position on the desktop was quite terrible – anything in your session had basically complete access to everything belonging to your user, and many places to hide.

  • Access to the X socket allows arbitrary input and output to any other app on your desktop, but without it, no app on an X desktop would work. Wayland fixes this, so Flatpak has a fallback setting to allow Wayland to be used if present, and the X socket to be shared if not.
  • Unrestricted access to the PulseAudio socket allows you to reconfigure audio routing, capture microphone input, etc. To ensure user consent we need a portal to control this, where by default you can play audio back but device access needs consent and work is under way to create this portal.
  • Access to the webcam device node means an app can capture video whenever it wants – solving this required a whole new project.
  • Sandboxing access to configuration in dconf is a priority for the project right now, after the 1.0 release.

Even with these caveats, Flatpak brings a bunch of default sandboxing – IPC filtering, a new filesystem, process and UID namespace, seccomp filtering, an immutable /usr and /app – and each of these is already a barrier to certain attacks.

Looking at the specific concerns raised:

  • Hopefully from the above it’s clear that sandboxing desktop apps isn’t just a switch we can flick overnight, but what we already have is far better than having nothing at all. It’s not the intention of Flatpak to somehow mislead people that sandboxed means somehow impervious to all known security issues and can access nothing whatsoever, but we do want to encourage the use of the new technology so that we can work together on driving adoption and making improvements together. The idea is that over time, as the portals are filled out to cover the majority of the interfaces described, and supported in the major widget sets / frameworks, the criteria for earning a nice “sandboxed” badge or submitting your app to Flathub will become stricter. Many of the apps that access --filesystem=home do so because they use old widget sets like GTK+ 2 and frameworks like Electron that don’t support portals (yet!). Contributions to improve portal integration into other frameworks and desktops are very welcome and as mentioned above will also improve integration and security in other systems that use portals, such as snap.
  • As Alex has already blogged, the freedesktop.org 1.6 runtime was something we threw together because we needed something distro agnostic to actually be able to bootstrap the entire concept of Flatpak and runtimes. A confusing mishmash of Yocto with flatpak-builder, it’s thankfully nearing some form of retirement after a recent round of security fixes. The replacement freedesktop-sdk project has just released its first stable 18.08 release, and rather than “one or two people in their spare time because something like this needs to exist”, is backed by a team from Codethink and with support from the Flatpak, GNOME and KDE communities.
  • I’m not sure how fixing and disclosing a security problem in a relatively immature pre-1.0 program (in June 2017, Flathub had less than 50 apps) is considered an ongoing problem from a security perspective. The wording in the release notes?

Zooming out a little bit, I think it’s worth also highlighting some of the other reasons why Flatpak exists at all – these are far bigger problems with the Linux desktop ecosystem than app security alone, and Flatpak brings a huge array of benefits to the table:

  • Allowing apps to become agnostic of their underlying distribution. The reason that runtimes exist at all is so that apps can specify the ABI and dependencies that they need, and you can run it on whatever distro you want. Flatpak has had this from day one, and it’s been hugely reliable because the sandboxed /usr means the app can rely on getting whatever they need. This is the foundation on which everything else is built.
  • Separating the release/update cadence of distributions from the apps. The flip side of this, which I think is huge for more conservative platforms like Debian or enterprise distributions which don’t want to break their ABIs, hardware support or other guarantees, is that you can still get new apps into users hands. Wider than this, I think it allows us huge new freedoms to move in a direction of reinventing the distro – once you start to pull the gnarly complexity of apps and their dependencies into sandboxes, your constraints are hugely reduced and you can slim down or radically rethink the host system underneath. At Endless OS, Flatpak literally changed the structure of our engineering team, and for the first time allowed us to develop and deliver our OS, SDK and apps in independent teams each with their own cadence.
  • Disintermediating app developers from their users. Flathub now offers over 400 apps, and (at a rough count by Nick Richards over the summer) over half of them are directly maintained by or maintained in conjunction with the upstream developers. This is fantastic – we get the releases when they come out, the developers can choose the dependencies and configuration they need – and they get to deliver this same experience to everyone.
  • Decentralised. Anyone can set up a Flatpak repo! We started our own at Flathub because there needs to be a center of gravity and a complete story to build out a user and developer base, but the idea is that anyone can use the same tools that we do, and publish whatever/wherever they want. GNOME uses GitLab CI to publish nightly Flatpak builds, KDE is setting up the same in their infrastructure, and Fedora is working on completely different infrastructure to build and deliver their packaged applications as Flatpaks.
  • Easy to build. I’ve worked on Debian packages, RPMs, Yocto, etc and I can honestly say that flatpak-builder has done a very good job of making it really easy to put your app manifest together. Because the builds are sandboxed and each runtime brings with it a consistent SDK environment, they are very reliably reproducible. It’s worth just calling this out because when you’re trying to attract developers to your platform or contributors to your app, hurdles like complex or fragile tools and build processes to learn and debug all add resistance and drag, and discourage contributions. GNOME Builder can take any flatpak’d app and build it for you automatically, ready to hack within minutes.
  • Different ways to distribute apps. Using OSTree under the hood, Flatpak supports single-file app .bundles, pulling from OSTree repos and OCI registries, and at Endless we’ve been working on peer-to-peer distribution like USB sticks and LAN sharing.

Nobody is trying to claim that Flatpak solves all of the problems at once, or that what we have is anywhere near perfect or completely secure, but I think what we have is pretty damn cool (I just wish we’d had it 10 years ago!). Even just in the security space, the overall effort we need is huge, but this is a journey that we are happy to be embarking together with the whole Linux desktop community. Thanks for reading, trying it out, and lending us a hand.

Restyling apps at scale

tl;dr: If you want to change how an app looks, you need a designer in the loop.

Over the past few months we’ve had a lively debate about “theming” in GNOME, and how it affects our ecosystem. In this discussion I’ve found that there is a divide between people who design and/or develop apps, and people who don’t. I have yet to see an app developer who thinks the current approach to “theming” can work, while many people who aren’t app developers are arguing that it can.

After a few long discussions I started to realize that part of the reason why there’s so little agreement and so much drama around this issue is that we don’t agree what the problem is. Those who don’t work on apps often can’t see the issues with theming and think we want to remove things for no reason, while those who do are very frustrated that the other side doesn’t want to acknowledge how broken everything is.

The basic issue we’re arguing about is whether it’s possible to restyle applications automatically, at scale, without breaking them. In this post I’ll try to explain why I think that it isn’t possible, and why trying to do it is hurting our ecosystem.

There are no themes, just CSS

A fundamental misconception a lot of people have is that GTK 3 supports theming. This is not true, as there is no clearly defined theming API. There are CSS stylesheets, but they were only ever meant to be used by the platform and app developers. The platform stylesheet is called Adwaita (“the only one” in Sanskrit) for a reason.

However, some people (including major distributions) have been using custom stylesheets as a hacky approximation of “themes” for so long now that nobody even remembers that they are not a real theming API.

An older version of GRadio with Ubuntu’s Ambiance theme

Since CSS is a huge API surface, which is used by both app developers and “theme” developers, it’s very easy for the two to conflict. This leads to apps looking broken unless you manually do QA for every app/theme combination.

“You’re exaggerating, it’s not that bad.”

One of the most frustrating things about the current situation is that to users, it looks like it almost works. For the most part, third-party themes look and work okayish, there are just a few small bugs here and there. A button with too little contrast, an underline clashing with a border, a really large loading spinner. Not that big a deal, you’d think.

However, this view of the situation misses a few really important realities:

  1. App developers are doing a lot of bug fixing to account for “theming”, because people complain to them when their app is broken on certain distros. The current situation essentially forces developers to fix bugs for setups they never intended to support in the first place. They’re not happy about it, but they’re doing it because they don’t want their users to have broken apps.
  2. App developers are trying hard not to do anything innovative or visually interesting in their apps, because they know it will break with other stylesheets.
  3. “Theme” developers are fixing a lot of bugs for edge cases in individual apps in their stylesheets. Of course, this is a never-ending task because as soon as a new version of an app is released, something will very likely be broken again.

All of this only kinda sorta works because we have very few apps. If we ever hope to grow our ecosystem past a few dozen apps things have to change, because maintaining a “theme” gets less sustainable with every new app.

There is also the question of what your definition of “broken” is. Some people think that things like these are acceptable:

Date picker in Calendar on Pop OS: There are double arrows left of the month and year labels.
“New Folder” dialog in Nautilus with Ambiance: There’s an invisible “Create” button in the top right.
Gedit on Pop OS: There’s a brown rectangle sticking out of the window at the top, and the widgets at the top of the sidebar look awkward and don’t make sense in this configuration.

I believe this is nowhere near ok. App developers put a lot of effort into making sure their apps look and feel right, fixing bugs, and doing QA. Having distributors change their apps (often in ways that break things) with no QA before users get to experience them is developer-hostile, and would be unacceptable in any other context.

Can you imagine Samsung restyling every third-party app on their phones, without testing them, and, when Instagram complains that the text on their buttons is illegible, just shrugging and saying “Sorry, but branding is very important to us. Why don’t you change your app?”.

“But it’s fine as long as you follow best practices!”

This is a common sentiment among some people. “If only app developers followed best practices, used CSS variables, and derived their colors from theme colors, everything would be fine”, the argument goes. While it’s true that these things are important and make apps more flexible in terms of appearance, this doesn’t come close to solving the entire problem.
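
To illustrate what those best practices look like in code, here is a minimal GTK 3 sketch (the .my-custom-widget class is a made-up example) of in-app CSS that derives its colors from the named colors the stylesheet provides, rather than hardcoding them:

#include <gtk/gtk.h>

static void
apply_app_css (void)
{
  GtkCssProvider *provider = gtk_css_provider_new ();
  /* @theme_selected_bg_color is a named color defined by the active
   * stylesheet, so this tint follows whatever stylesheet is in use
   * instead of hardcoding an Adwaita-only value. */
  const gchar *css =
    ".my-custom-widget {"
    "  background-color: alpha(@theme_selected_bg_color, 0.15);"
    "  border-radius: 4px;"
    "}";
  gtk_css_provider_load_from_data (provider, css, -1, NULL);
  gtk_style_context_add_provider_for_screen (gdk_screen_get_default (),
                                             GTK_STYLE_PROVIDER (provider),
                                             GTK_STYLE_PROVIDER_PRIORITY_APPLICATION);
  g_object_unref (provider);
}

Even so, this only softens the problem: as argued below, a stylesheet can change in ways that no named color can anticipate.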

If every app only used default GTK controls, in very simple layouts, with no custom widgets or in-app CSS, then best practices would perhaps be enough to prevent apps from breaking. But the reality is that a) this is not the case, and b) we really don’t want it to be the case.

Human Interface Guidelines are just that: guidelines. They have to be implemented manually, they change and evolve over time, and the best apps on any platform push their boundaries in some places and experiment with new patterns (which then sometimes make their way back into the HIG).

This kind of experimentation means that it’s impossible to “theme” apps automatically. Visual design and interaction design are very closely linked, so if you want to change the style, you need a designer to actually think about what a widget should look like.

For example: How should the Pop theme know what a “flat” variant of the new Nautilus path bar should look like? It’s impossible to do this automatically, because it’s a new pattern.

The new path bar in Nautilus
The new path bar in Nautilus with the Pop OS theme (completely broken)

This is not a rare situation, even among core GNOME apps. Many of the apps I’ve personally worked on have their own equivalent of the Nautilus path bar. And that’s the way it should be. Apps have different needs that sometimes require custom solutions, and this experimentation is good for the ecosystem. However, it also means that automatic restyling is impossible.

“But what about Adwaita Dark and High Contrast?”

The point that the dark and high contrast variants of Adwaita can be seen as GTK 3 kinda sorta supporting themes is technically accurate, but misguided. Adwaita variants may technically just be different stylesheets, but this is just an implementation detail. There are very clear differences between them and third-party “themes”.

  1. They are very very close to Adwaita code-wise, and therefore much less likely to break.
  2. There’s a finite number of them, and they are part of the platform. This makes them a tangible target for developers to test for and support, which is completely different from third-party “themes” which just apply random CSS to every app without any QA.
  3. They are part of the same design language, so they require very little extra work to adapt to from a design point of view. If you follow the best practices mentioned above, often no work at all.
GNOME Builder with Adwaita Dark

“But what about consistency? I want all my apps to look the same!”

“Consistency” is a word you hear a lot in debates about theming, but it seems to mean different things to different people.

To some, consistency effectively means “everything should use the same colors”. This is relatively easy to achieve for some apps, even across toolkits, if you simply provide a “theme” for each toolkit.
However, this “consistency” is very shallow. Just because a Qt app uses the Adwaita colors doesn’t mean it automatically fits in with the GNOME platform. UI patterns are much more important, and they can’t be changed with theming. There is no magic that can redesign menu bar apps with header bars.

No matter how hard you theme these, they will never look native on GNOME

Real consistency can only happen by design, and requires the app developer to build it into the app at every step. If you want all your apps to look the same, only use apps built for your platform’s HIG. “Fixing” apps after the fact with theming isn’t really making your system more consistent, but it’s hurting app developers a great deal.

Of course, even the very shallow “everything uses the same colors” consistency is impossible to enforce across all apps and toolkits. Apps like Blender, Telegram, or Steam don’t respond to system theming at all, and even Firefox and Chromium only do so in a very limited fashion.

“But users *want* themes!”

“Users” want a lot of things, but wanting something impossible doesn’t make it possible. In this case, it’s important to be aware of the costs of giving complete visual freedom to “themes”, both in individual app developer effort and in chilling effects on the ecosystem. If given a choice between customization and more, better apps, I’m confident the majority of people would prefer the latter.

Would it be nice if there were a way to restyle every app to make them look like Material Design, or macOS, or Windows 95, and have them all look as if they were built for that style? Absolutely! I would love that! However, as I’ve tried to explain in this blog post, this is simply not realistic.

So, what can we do?

As the recent discussions have shown, talking about solutions before we agree what the problem is isn’t very productive, so for now I’m mostly interested in making sure we’re all aware of the problem and its various facets. There are several different stakeholders with different perspectives on this issue, and making progress will mean making some hard choices. At this year’s GUADEC we had a Theming & Ecosystem BoF where we talked about a number of potential directions, and I hope we can move forward on that path.

No matter what we come up with though, I think it’s crucial that we start taking the needs of app developers seriously. Developers are the lifeblood of any platform, and we’ve been treating them very poorly. If we want to grow our ecosystem and actually compete with other major platforms, we need to fix that.

Note: The examples in this post have been chosen because the themes in question ship with major distributions, so app developers tend to get complaints about them. It’s not my intention to single them out, this problem applies to all third-party themes equally.

Geoclue 2.5 & repeating call for help

Just wanted to announce release 2.5.0 of Geoclue that includes the changes I mentioned in my last blog post.

Also, while I'm at it, I wanted to highlight the "call for help" at the end of that post by repeating it here. I apologize for the repetition to those who have already read it, but a friend pointed out that it's likely to be missed by many folks otherwise:

The future of Mozilla Location Service

When Mozilla announced their location service in late 2013, Geoclue became one of its first users, as it was our only hope for a reliable WiFi-geolocation source. We couldn't use Google's service, as their terms of service don't allow it to be used in an open source project (I recall some clause that it can only be used with Google Maps and not any other map software). Mozilla Location Service (MLS) was a huge success in terms of people contributing WiFi data to it. I've been to quite a few places around Europe and North America in the last few years, and I have yet to find a location that is not already covered by MLS.

Mozilla's own interest in this service was tied to their Firefox OS project. Unfortunately, the Firefox OS project was abandoned two years ago, and Mozilla lost its interest in MLS as a result. Mozilla folks are the good guys, so they have kept the service running and users can still contribute data, but it's no longer developed or maintained.

Since this is a very important service for all users of geoclue, I feel very uneasy about this uncertain future of MLS. So consider this a call for help. If your company relies on MLS (directly or through Geoclue) and you want to secure the future of Open Source geolocation, please do get in touch and we can discuss how we could possibly achieve that.

October 14, 2018

Google Cloud Print in Ubuntu

There is an interesting hidden feature available in Ubuntu 18.04 LTS and newer. To enable this feature, first install cpdb-backend-gcp.

sudo apt install cpdb-backend-gcp

Make sure you are signed in to Google with GNOME Online Accounts. Open the Settings app to the Online Accounts page. If your Google account is near the top above the Add an account section, then you’re all set.

Currently, only LibreOffice is supported. Hopefully, for 19.04, other GTK+ apps will be able to use the feature.

This feature was developed by Nilanjana Lodh and Abhijeet Dubey when they were Google Summer of Code 2017 participants. Their mentors were Till Kamppeter, Aveek Basu, and Felipe Borges.

Till has been trying to get this feature installed by default in Ubuntu since 18.04 LTS, but it looks like it won’t make it in until 19.04.

I haven’t seen this feature packaged in any other Linux distros yet. That might be because people don’t know about this feature, so that’s why I’m posting about it today! If you are a distro packager, the 3 packages you need are cpdb-libs, cpdb-backend-gcp, and cpdb-backend-cups. The final package enables easy printing to any IPP printer. (I didn’t mention it earlier because I believe Ubuntu 18.04 LTS already supports that feature through a different package.)

Save to Google Drive

In my original blog post, I confused the cpdb feature with a feature that already exists in GTK3 built with GNOME Online Accounts support. This should already work on most distros.

When you print a document, there will be an extra Save to Google Drive option. Saving to Google Drive saves a PDF of your document to your Google Drive account.

This post was edited on October 16 to mention that cpdb only supports LibreOffice now and that Save to Google Drive is a GTK3 feature instead.

October 17: Please see Felipe’s comments. It turns out that even Google Cloud Print works fine in distros with recent GTK3. The point of the cpdb feature is to make this work in apps that don’t use GTK3. So I guess the big benefit now is that you can use Google Cloud Print or Save to Google Drive from LibreOffice.

Things Microsoft could do to make life of developers easier

A few weeks ago I was at CppCon. One of the presentations was about new stuff in the Visual Studio compiler. The presentation had this slide fairly early on.

Presentation screenshot saying the mission of the C++ team is to make the lives of all C++ developers on the planet better.


If that is truly their goal, then here are some things they could do. (Some not specifically about C++ but still related.)

Proper RPATH support

If you have a project that uses shared libraries and you want to run it directly from the build directory, then you really need rpath or something similar to it. A simple way of explaining it is that you add a piece of text inside an executable saying "when running me, search for foo.dll in directory ../baz/lib".

Since this is not natively supported, people need to resort to awful hacks to make it work:
  • adjusting the PATH envvar to contain the dirs where the dlls are (because PATH is used to look up dlls)
  • copy all files to the same directory before running
  • creating a manifest file defining an internal bundle, creating a subdirectory and copying all dependency dlls there
  • static linking everything
  • mandating a project layout where everything is in one subdirectory
All of these are nasty hacks. It should be possible to run programs straight from the build dir without needing to copy anything or change envvars.

During CppCon I was told that with the very latest Windows 10 it should be possible to do this "somehow" but googling for this has not uncovered any instructions.

Spawning a process using an array

The way to spawn processes in Windows is the CreateProcess function. Note that it takes a command string, not an array. The implementation will then parse the string into a command array and run it. The documentation page does not document how the parsing is done, but presumably it is the same as what cmd.exe does.

What this means is that it is impossible to spawn a process on Windows without jumping through massive quoting hoops. For example, suppose you want to write a Ninja file to call a specific command. Because of this problem Ninja does not support arrays natively but instead requires every user to write a single command string, which leads to double quoting: first you need to quote the array to be a Windows process spawning command, and then you need to quote that according to Ninja's quoting rules.

And then it gets terrible.

The command line length limitation on Windows is ridiculously short. Even fairly simple link commands are too long. Thus you need to write the actual command to a response file, which the command then reads and parses on its own. Since every program writes their own parsing and splitting code, you may find that you need to quote things differently depending on whether you are using the command line or a response file. You get one guess whether some (but not all) programs coming from Unix parse their response files according to Unix shell rules, even on Windows.

Now there might be a few people out there who just got outraged, because msvcrt does in fact have functions to spawn processes with arrays. They are a complete lie. Here is a rough pseudocode representation of how they are implemented:

def spawn_process(command_array):
    # join the array into a single string, losing any quoting structure,
    # then let CreateProcess (and the spawned program) re-parse it
    command_line = ' '.join(command_array)
    return CreateProcess(command_line)

Support GCC's destructor extension in plain C

RAII is awesome. It is, in fact, so awesome that GCC ships an extension to use it with plain C. It is used by many plain C projects such as GLib and systemd. I have spoken to many C developers and they really love that feature and they absolutely hate that they can't use it in code that has to support MSVC.
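
For readers who haven't seen it, here is a minimal sketch of the extension in action; GLib's g_autofree and g_autoptr are convenience macros built on this same mechanism:

/* Compile with GCC or Clang; the registered cleanup function runs
 * automatically when the variable goes out of scope, RAII-style. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void
free_string (char **ptr)
{
    free (*ptr);
}

int
main (void)
{
    __attribute__((cleanup (free_string))) char *s = strdup ("hello");
    printf ("%s\n", s);
    return 0;  /* free_string (&s) is invoked here, no explicit free needed */
}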

Adding this support would be great and make the world a better place in several ways including:
  • you can use libraries that use this feature as dependencies when building with MSVC
  • multiplatform projects can start using destructors freely
  • all the millions of lines of C code that exist in the world (and which will not be rewritten any time soon) can be made iteratively safer and more reliable
Eventually it would be nice to get this feature in the C standard, but that is unlikely to happen any time soon.

Performance optimize MSBuild

Running the test suite of Meson with the Visual Studio compiler takes roughly 6-7 minutes when using the Ninja backend and 14-18 minutes when using the MSBuild backend. Granted, this is a worst case scenario of running many small independent builds in a row, but it is still frustratingly slow. The same can be seen when using the Visual Studio IDE: after typing ctrl-shift-b there is usually a noticeable lag until any compilation actually starts.

Kill the need for vcvarsall.bat and provide parallel installable compilers

Visual Studio compilers are not in PATH by default. You have to either start a special shell or run a magic bat file from a magic directory that sets up the environment so that the compilers work. If you go looking in the installation directory there are many different directories, all of which contain an executable cl.exe. Which one you run depends on PATH settings, thus you can only run one compiler at a time. This makes it really difficult to, for example, run multiple different VS versions (15, 17, native, cross etc.) from a single script.

This same problem was solved on the Unix side ages ago. The trick is to provide many executables with different names, for example cl15-x86.exe, cl17-arm.exe and cl17-x64.exe. Each of these executables would set up the equivalent of vcvarsall.bat for its own process and then forward the actual compilation to the compiler, wherever it may be hidden in the file system hierarchy. These binaries could then be put in a single PATH location and could be used from any command prompt, even in parallel. This is particularly useful for cross compilation projects where you need to build a code generator with the native compiler and then use it to generate source code for the cross compiler.

Have you reported these as bugs upstream?

No. Nothing on this blog post is new, these are all issues that have been known for 20+ years and most likely have been reported to Microsoft dozens, if not hundreds of times. The fact that these things have not been fixed is a question of corporate priorities. As a random-non-windows-using-dude-on-internet I don't really have any influence on those.

October 13, 2018

Shutter removed from Debian & Ubuntu

This week, the popular screenshot app Shutter was removed from Debian Unstable & Ubuntu 18.10. (It had already been removed from Debian “Buster” 6 months ago and some of its “optional” dependencies had already been removed from Ubuntu 18.04 LTS).

Shutter will need to be ported to gtk3 before it can return to Debian. (Ideally, it would support Wayland desktops too but that’s not a blocker for inclusion in Debian.)

See the Debian bug for more discussion.

I am told that flameshot is a nice well-maintained screenshot app.

I believe Snap or Flatpak are great ways to make apps that use obsolete libraries available on modern distros that can no longer keep those libraries around. There isn’t a Snap or Flatpak version of Shutter yet, so hopefully someone interested in that will help create one.

Redesign of the invite dialog in Fractal (part 1)

Hello everyone!

This month, I’ve had some time to work on the redesign of the invite dialog in Fractal. This dialog is used for inviting users to a room you are in, or for inviting a user to start a direct chat with them. In this dialog, you can search for users by username. The result of this search is shown in a list below the search entry, and you can click on the GtkListBox‘s rows to select users (in the case of direct chat invitations, the latest selected user will be the only one selected); you can then click on the “Invite” button to send invitations to all selected users.

Previously, the selected users were listed in another GtkListBox placed above the search entry. So it looked like this:

image

I’ve reimplemented the way selected users are shown. Now, the selected users are represented inline as part of the search input: when you select a user, a “pill” (a small widget grouping the avatar and the name of the user) is added to the entry and when you delete this pill, the user is removed from the list of the invitations to send. It looks like this now:

Capture du 2018-10-12 17-53-36

I’m going to describe the steps I took in order to get this result.

First of all, I knew that if I wanted to insert pills in the search entry, I’d have to use a GtkTextView instead of a GtkEntry, so that I could create a GtkChildAnchor at a given position in the text (a GtkTextIter) with the method gtk_text_buffer_create_child_anchor().

Next, I wanted to be able to remove users from the invite list when deleting their associated pill from the GtkTextBuffer. So each time a “delete-range” signal was emitted by the buffer, Fractal would iterate through the invite list (which I modified to associate an invited member with their child anchor) and check whether each child anchor had been removed, with the method gtk_text_child_anchor_get_deleted().
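
Fractal itself is written in Rust, but in C terms the two steps described above look roughly like this:

#include <gtk/gtk.h>

/* Insert a pill widget into the text view at the cursor position and
 * return the anchor so it can be tracked in the invite list. */
static GtkTextChildAnchor *
insert_pill (GtkTextView *view, GtkWidget *pill)
{
  GtkTextBuffer *buffer = gtk_text_view_get_buffer (view);
  GtkTextIter iter;
  GtkTextChildAnchor *anchor;

  gtk_text_buffer_get_iter_at_mark (buffer, &iter,
                                    gtk_text_buffer_get_insert (buffer));
  anchor = gtk_text_buffer_create_child_anchor (buffer, &iter);
  gtk_text_view_add_child_at_anchor (view, pill, anchor);
  gtk_widget_show_all (pill);
  return anchor;
}

/* From a "delete-range" handler: an invitee whose anchor is gone
 * should be dropped from the list of invitations to send. */
static gboolean
pill_was_removed (GtkTextChildAnchor *anchor)
{
  return gtk_text_child_anchor_get_deleted (anchor);
}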

Then I had the possibility to remove the GtkListBox containing the invite list. I’ve removed it from the UI file (see it here) and cleaned up the code using it (here).

I modified the new text view so that it launches a user search when typing into it. It was easy to implement.

One of the final steps was to make the text view look like a GtkEntry (here and here); I did it the same way as I did for the message input. And finally, I removed the old entry in this commit.

There is a little problem with the new entry: the text that is typed into the text view is lower than the position of the pill (the child anchor), so they are not aligned correctly, and this makes the entry resize, which is not great. It looks like this:

Capture du 2018-10-13 10-06-17.png

I think that a solution could be to set the vertical position of the child anchor so that it is lower, but I didn’t find out how to do that. So if you have an idea for that or another way to fix this, I’d be pleased to hear about it 🙂

October 11, 2018

The future of AlternateTab, and why you need not worry

Any time someone publishes a “The top n GNOME Shell extensions” article, there’s a fair chance that it will include the AlternateTab extension.

That is a bit sad to be honest. Not because it would be wrong for users to prefer a more traditional switcher, mind you, but because the actual functionality has been built-in for years — all the extension does is intercept one keyboard shortcut and pretend that it was a different keyboard shortcut.

So without further ado, this is how to set up the window switcher without using the extension:
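
Open Settings → Keyboard and bind the shortcut of your choice (for example Super+Tab) to “Switch windows”; you may want to unbind “Switch applications” first so the two don’t conflict. If you prefer the command line, the binding lives in the standard GNOME desktop schemas and can be set with gsettings:

gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Super>Tab']"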

But wait, if the extension doesn’t do anything really, why does it even exist?

Well, until very recently, the extension was still used in GNOME Classic, so that the same <Super>Tab shortcut would bring up the default application switcher in the regular GNOME session and the window switcher in the Classic session.

In GNOME 3.30 this now works without the extension, which brings us to the extension’s future mentioned in the title. After the above explanation it shouldn’t come as a surprise that there is none — the extension will be retired for GNOME 3.32. If you are currently using it, please set up the built-in “Switch Windows” shortcut as outlined above, and enjoy having one less extension to worry about on updates…

Librem 5 ❤️ GNOME 3.32

I am glad to announce that the tooling I have been working on since the beginning of the year is ready to be used!

Thanks to new features introduced into libhandy 0.0.3 and 0.0.4 and thanks to a few fixes to Adwaita in GTK+ 3.24.1, you can make GTK+ 3 apps adaptive to work both on the desktop and on the upcoming GNOME-based Librem 5 phone.
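
As a taste of what this looks like for an app developer, here is a minimal sketch using libhandy's HdyLeaflet; make_sidebar() and make_content() are hypothetical placeholders for real application widgets:

#include <gtk/gtk.h>
#include <handy.h>

/* Hypothetical helpers standing in for real application widgets. */
extern GtkWidget *make_sidebar (void);
extern GtkWidget *make_content (void);

static GtkWidget *
build_adaptive_content (void)
{
  /* The leaflet shows its children side by side when the window is
   * wide, and "folds" to show a single child on narrow, phone-sized
   * windows -- the same code serves both form factors. */
  GtkWidget *leaflet = hdy_leaflet_new ();
  gtk_container_add (GTK_CONTAINER (leaflet), make_sidebar ());
  gtk_container_add (GTK_CONTAINER (leaflet), make_content ());
  return leaflet;
}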

We are early in the GNOME 3.32 release schedule and the Librem 5 will be released a bit after it, so if you want your apps to work on the Librem 5, now is the best time: use libhandy 0.0.4 and up, use GTK+ 3.24.1 and up and target GNOME 3.32! A few apps like Fractal, Podcasts, Calls and Chatty are already using libhandy's adaptive capabilities, and other apps are working on their adaptive transition like Contacts, Games, Geary and Settings (all are works in progress). libhandy is available in Debian Unstable and Arch's AUR repository, and I wish it would be in Fedora already to let GNOME Settings' CI pass.

For the moment, libhandy is a GTK+ 3 widget library; we will ultimately port it to GTK+ 4, and relevant widgets will be submitted to GTK+ 4. Theme creators: for adaptive apps to work well with your theme, please apply changes similar to the ones I applied to Adwaita (look for changes by Adrien Plazas between 3.24.0 and 3.24.1).


heap object representation in spidermonkey

I was having a look through SpiderMonkey's source code today and found something interesting about how it represents heap objects and wanted to share.

I was first looking to see how to implement arbitrary-length integers ("bigints") by storing the digits inline in the allocated object. (I'll use the term "object" here, but from JS's perspective, bigints are rather values; they don't have identity. But I digress.) So you have a header indicating how many words it takes to store the digits, and the digits follow. This is how JavaScriptCore and V8 implementations of bigints work.
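
In C, that layout might look like the following sketch (the field names are mine, not taken from any engine):

#include <stdint.h>

typedef struct {
    uint32_t  n_digits;  /* header: how many digit words follow */
    uint32_t  flags;     /* e.g. a sign bit */
    uintptr_t digits[];  /* flexible array member: digits stored inline,
                            least-significant word first */
} BigInt;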

Incidentally, JSC's implementation was taken from V8. V8's was taken from Dart. Dart's was taken from Go. We might take SpiderMonkey's from Scheme48. Good times, right??

When seeing if SpiderMonkey could use this same strategy, I couldn't find how to make a variable-sized GC-managed allocation. It turns out that in SpiderMonkey you can't do that! SM's memory management system wants to work in terms of fixed-sized "cells". Even for objects that store properties inline in named slots, that's implemented in terms of standard cell sizes. So if an object has 6 slots, it might be implemented as an instance of a cell that holds 8 slots.

Truly variable-sized allocations seem to be managed off-heap, via malloc or other allocators. I am not quite sure how this works for GC-traced allocations like arrays, but let's assume that somehow it does.

Anyway, the point of this blog post. I was looking to see which part of SpiderMonkey reserves space for type information. For example, almost all objects in V8 start with a "map" word. This is the object's "hidden class". To know what kind of object you've got, you look at the map word. That word points to information corresponding to a class of objects; it's not available to store information that might vary between objects of that same class.

Interestingly, SpiderMonkey doesn't have a map word! Or at least, it doesn't have them on all allocations. Concretely, BigInt values don't need to reserve space for a map word. I can start storing data right from the beginning of the object.

But how can this work, you ask? How does the engine know what the type of some arbitrary object is?

The answer has a few interesting wrinkles. Firstly I should say that for objects that need hidden classes -- e.g. generic JavaScript objects -- there is indeed a map word. SpiderMonkey calls it a "Shape" instead of a "map" or a "hidden class" or a "structure" (as in JSC), but it's there, for that subset of objects.

But not all heap objects need to have these words. Strings, for example, are values rather than objects, and in SpiderMonkey they just have a small type code rather than a map word. But you know it's a string rather than something else in two ways: one, for "newborn" objects (those in the nursery), the GC reserves a bit to indicate whether the object is a string or not. (Really: it's specific to strings.)

For objects promoted out to the heap ("tenured" objects), objects of similar kinds are allocated in the same memory region (in kind-specific "arenas"). There are about a dozen trace kinds, corresponding to arena kinds. To get the kind of object, you find its arena by rounding the object's address down to the arena size, then look at the arena to see what kind of objects it has.
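
In other words, something like this sketch (the arena size is a made-up power of two):

#include <stdint.h>

#define ARENA_SIZE 4096  /* assumed: power-of-two size and alignment */

typedef struct {
    int trace_kind;  /* what kind of objects this arena holds */
    /* ... allocation bookkeeping ... */
} ArenaHeader;

/* Mask off the low bits of the object's address to find the arena
 * header, then read the kind off it. */
static inline int
trace_kind_of (const void *obj)
{
    uintptr_t arena_addr = (uintptr_t) obj & ~((uintptr_t) ARENA_SIZE - 1);
    return ((const ArenaHeader *) arena_addr)->trace_kind;
}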

There's another cell bit reserved to indicate that an object has been moved, and that the rest of the bits have been overwritten with a forwarding pointer. These two reserved bits mostly don't conflict with any use a derived class might want to make from the first word of an object; if the derived class uses the first word for integer data, it's easy to just reserve the bits. If the first word is a pointer, then it's probably always aligned to a 4- or 8-byte boundary, so the low bits are zero anyway.

The upshot is that while we won't be able to allocate digits inline to BigInt objects in SpiderMonkey in the general case, we won't have a per-object map word overhead; and we can optimize the common case of digits requiring only a word or two of storage to have the digit pointer point to inline storage. GC is about compromise, and it seems this can be a good one.

Well, that's all I wanted to say. Looking forward to getting BigInt turned on upstream in Firefox!

Moving away from the 1.6 freedesktop runtime

A flatpak runtime contains the basic dependencies that an application needs. It is shared by applications so that application authors don’t have to bother with complicated low-level dependencies, but also so that these dependencies can be shared and get shared updates.

Most flatpaks these days use the freedesktop runtime or one of its derivatives (like the GNOME and KDE runtimes). Historically, these have been using the 1.6 version of the freedesktop runtime, which is based on Yocto.

The 1.6 runtime served its purpose in kickstarting flatpak and flathub, but it is getting quite long in the tooth. We still fix security issues in it now and then, but it is not seeing a lot of maintenance. Additionally, not a lot of people know enough Yocto to work on it, so we were never able to build a larger community around it.

However, earlier this summer a complete reimplementation, version 18.08, was announced, and starting with version 3.30 the GNOME runtime is now based on it as well, with a KDE version in the works. This runtime is based on BuildStream, making it much easier to work with, which has resulted in a much larger team working on it. Partly this is due to the awesome fact that Codethink has several people paid to work on this, but there is also lots of community support.

The result is a better supported, easier to maintain runtime with more modern content. What we need to do now is to phase out the old runtime and start using the new one in apps.

So, this is a call to action!

Anyone who maintains a flatpak application, especially on flathub, please try to move to a runtime based on 18.08. And if you have any problems, please report them to the upstream freedesktop-sdk project.

October 10, 2018

Removing my favorite feature

Years ago, when starting the Builder project, I added a real-time CPU graph so that it was easy to detect slow-downs quickly while hacking on the product. If I ever saw a main-loop stall, it would be immediately obvious.

Over the years I added more advanced features, like a Sysprof-based profiler, which duplicated the feature while requiring significantly less overhead. It also has the added benefit of zooming and panning.

So in a decision that was long overdue, I’m removing the real-time graph from Builder 3.32. I never did a great job of porting that code to optimal Wayland use anyway. It was really designed with Xrender/Xshm in mind where XCopyArea() was cheap and done on the GPU.

There are better solutions now anyway.

Thoughts on Microsoft Joining OIN's Patent Non-Aggression Pact

[ A similar version was crossposted on Conservancy's blog. ]

Folks lauded today that Microsoft has joined the Open Invention Network (OIN)'s limited patent non-aggression pact, suggesting that perhaps it will bring peace in our time regarding Microsoft's historical patent aggression. While today's announcement is a step forward, we call on Microsoft to make this just the beginning of their efforts to stop their patent aggression efforts against the software freedom community.

The OIN patent non-aggression pact is governed by something called the Linux System Definition. This is the most important component of the OIN non-aggression pact, because it's often surprising what is not included in that Definition especially when compared with Microsoft's patent aggression activities. Most importantly, the non-aggression pact only applies to the upstream versions of software, including Linux itself.

We know that Microsoft has done patent troll shakedowns in the past on Linux products related to the exfat filesystem. While we at Conservancy were successful in getting the code that implements exfat for Linux released under GPL (by Samsung), that code has not been upstreamed into Linux. So, Microsoft has not included any patents they might hold on exfat into the patent non-aggression pact.

We now ask Microsoft, as a sign of good faith and to confirm its intention to end all patent aggression against Linux and its users, to now submit to upstream the exfat code themselves under GPLv2-or-later. This would provide two important protections to Linux users regarding exfat: (a) it would include any patents that read on exfat as part of OIN's non-aggression pact while Microsoft participates in OIN, and (b) it would provide the various benefits that GPLv2-or-later provides regarding patents, including an implied patent license and those protections provided by GPLv2§7 (and possibly other GPL protections and assurances as well).

Wandering in the symlink forest forever

Last week, Philip Withnall told me that Meson has built-in support for generating code coverage reports: just configure with -Db_coverage=true, run your tests with ninja test, then run ninja coverage-{text,html,xml} to generate the report in the format of your choice. The XML format is compatible with Cobertura’s output, which is convenient since Endless’s Jenkins is already configured to consume Cobertura XML generated by Autotools projects using our EOS_COVERAGE_REPORT macro. So it was a simple matter of adding gcovr to the build environment, running ninja coverage-xml after the tests, and moving the report to the right place for Jenkins to find it. It worked well on the projects I tested, so I decided to enable it for all Meson projects built in our CI. Sure, I thought, it’s not so useful for our forks of GNOME and third-party projects, but it’s harmless and saves adding per-project config, right?

Fast-forward to yesterday, when someone noticed that a systemd build had been stuck on the ninja coverage-xml step for 16 hours. Uh oh.

It turns out that gcovr follows symlinks when scanning for coverage files, but does not check for cycles. systemd’s test suite generates a fake sysfs tree, with many circular references via symlinks. For example, there are 64 self-referential ttyX trees:

$ ls -l build/test/sys/devices/virtual/tty/tty1
total 12
-rw-r--r-- 1 wjt wjt    4 Oct  9 12:16 dev
drwxr-xr-x 2 wjt wjt 4096 Oct  9 12:16 power
lrwxrwxrwx 1 wjt wjt   21 Oct  9 12:16 subsystem -> ../../../../class/tty
-rw-r--r-- 1 wjt wjt   16 Oct  9 12:16 uevent
$ ls -l build/test/sys/devices/virtual/tty/tty1/subsystem/tty1
lrwxrwxrwx 1 wjt wjt 30 Oct  9 12:16 build/test/sys/devices/virtual/tty/tty1/subsystem/tty1 -> ../../devices/virtual/tty/tty1
$ readlink -f build/test/sys/devices/virtual/tty/tty1/subsystem/tty1
/home/wjt/src/endlessm/systemd/build/test/sys/devices/virtual/tty/tty1

And, worse, all other ttyY trees are accessible via the symlinks from each ttyX tree. The kernel caps the number of symlinks per path to 40 before lookups fail with ELOOP, but that’s still 64^40 paths to resolve, just for the fake ttys. Quite a big number!

The fix is straightforward: maintain a set of visited (st_dev, st_ino) pairs while walking the tree, and prune subtrees we’ve already visited. I tried adding a similar highly self-referential symlink graph to the gcovr test suite, so that it would run in reasonable time if the fix works and essentially never terminate if it does not. Unfortunately, pytest has exactly the same bug: while searching for tests to run, it gets lost wandering in the symlink forest forever.
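
gcovr and pytest are Python projects, but the idea is language-independent; here is a sketch of the fix in C, pruning any directory whose (st_dev, st_ino) pair has been seen before:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

typedef struct { dev_t dev; ino_t ino; } Visited;
static Visited *seen = NULL;
static size_t n_seen = 0;

/* Linear scan is fine for a sketch; a real implementation would use a
 * hash set. Returns 1 if the pair was already recorded. */
static int
check_and_mark (dev_t dev, ino_t ino)
{
    for (size_t i = 0; i < n_seen; i++)
        if (seen[i].dev == dev && seen[i].ino == ino)
            return 1;
    seen = realloc (seen, (n_seen + 1) * sizeof (Visited));
    seen[n_seen].dev = dev;
    seen[n_seen].ino = ino;
    n_seen++;
    return 0;
}

static void
walk (const char *path)
{
    struct stat st;
    /* stat() follows symlinks, so a symlinked directory resolves to the
     * same (dev, ino) as its target and the cycle is detected. */
    if (stat (path, &st) != 0 || !S_ISDIR (st.st_mode))
        return;
    if (check_and_mark (st.st_dev, st.st_ino))
        return;  /* already visited: prune this subtree */

    DIR *dir = opendir (path);
    if (dir == NULL)
        return;
    struct dirent *entry;
    while ((entry = readdir (dir)) != NULL) {
        if (strcmp (entry->d_name, ".") == 0 ||
            strcmp (entry->d_name, "..") == 0)
            continue;
        char child[4096];
        snprintf (child, sizeof child, "%s/%s", path, entry->d_name);
        walk (child);
    }
    closedir (dir);
}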

This bug is a good metaphor for my habit of starting supposedly-quick side-projects.

October 09, 2018

Usability testing with Outreachy

I've volunteered with Allan and Jakub to mentor more GNOME usability testing in the next cycle of Outreachy, from December 4, 2018 to March 4, 2019. Outreachy expressly invites applicants from around the world who are women (both cis and trans), trans men, and genderqueer people.

Interns will work with the GNOME team and mentor(s) to do usability testing on GNOME. The goal is to perform several cycles of usability testing on prototypes of new designs, and provide usability testing results and feedback to the GNOME team so a new iterative design can be updated based on those results. We would like to use a "test what you've got" approach where we set up a testing schedule, and the intern tests whatever prototype or model is ready at that time. So if "test day" is Thursday, we could nail down what to test by Monday, and have the intern post results on Friday or the weekend.

What you need to know

You don't need to know usability testing to intern with us! We'll start the internship with a few weeks of "usability 101" with the mentor to learn about usability testing and usability processes, including how to lead a usability test. After the first few weeks, interns should expect to learn as they go, or "learn by doing."

This "learn by doing" approach will help you gain practical experience about usability testing. By the end of the internship, you will understand usability testing, and why it is important in open source software (this applies to any open source software project).

As an intern, you can have a direct impact on the GNOME desktop! Your results will help us improve the next versions of GNOME, and make GNOME easier for everyone to use.

I'll also note that in previous cycles of Outreachy, many of the interns have done such great work that I co-authored articles with them. If your tests are successful and generate useful results, we may co-author an article with you, and submit it to a technology/open source magazine or news website, or (non-academic) journal. That's a big bonus to your resume!

How to start

If you're interested in joining us for GNOME usability testing, you first need to apply for an internship at Outreachy. You'll also need to make a first contribution.

You can do your first contribution by performing a usability test of your own, using GNOME, since that's the topic of the usability testing internship. You can pick one or a few programs from GNOME, and do a total of ten scenario tasks.

When you've finished your usability test, generate a heat map of your results, then write a brief summary about the experience. This is not a formal paper, it's something you would write on a blog or send in an email. Please use an informal voice here; imagine you are explaining your test to a friend. Email your summary to me (jhall@gnome.org) in ODT or PDF format, and I'll review the results with you. Or if you prefer, you can publish your results on your blog and email the URL to me.

Fedora at LinuxDays 2018 in Prague

LinuxDays, the biggest Linux event in the Czech Republic, took place at the Faculty of Information Technology of the Czech Technical University in Prague. The number of registered attendees was a bit lower this year, perhaps because of the municipal and senate elections happening on Friday and Saturday, but it still got almost to the 1300 mark.

Besides a busy schedule of talks and workshops, the conference also has a pretty large booth area, and as every year, I organized the Fedora booth. I drove to Prague with Carlos Soriano and Felipe Borges from the Red Hat desktop team on Saturday morning, and we were joined by František Zatloukal (Fedora QA) at the booth.

František and me at the booth.

Our focus for this year was Silverblue and modularity. I prepared one laptop with an external monitor to showcase Silverblue, the atomic version of Fedora Workstation. I must say that people’s interest in Silverblue surprised me. There were even some coming back the next day saying: “It sounded so cool yesterday that I couldn’t resist installing it when I got home and playing with it in the night…” With Silverblue comes distribution of applications in Flatpak, and there was a lot of user interest in this direction as well.

Reasons to use Fedora.

I was hoping for more interest in modularity, but people don’t seem to be very aware of it. It doesn’t have the same reach outside the Fedora Project as Flatpak does, and it’s not as easy to explain its benefits and use cases. We as a project have to do a better job selling it.

The highlight of Saturday was when one of the sysadmins at the National Library of Technology, which is on the same campus, took us to the library to show us the public computers where they run Fedora Workstation. It’s 120 computers with over 1000 users (in the last 90 days). Those computers serve a very diverse group of users, from elderly people to computer science students, and the library has received very few complaints since they switched from Windows to Fedora. They’ve also hit almost no problems as sysadmins; they only mentioned one corner-case bug in GDM, which we promised to look into.

Carlos and Felipe checking out Fedora in the library.

It was also interesting to see the setup. They authenticate users against AD using the SSSD client and mount /home from a remote server using NFS. They enable several GNOME Shell extensions by default: AlternateTab (because of Windows users), Places (to show the Places menu next to Activities)… They also created one custom extension that replaces the “Power Off” button with a “Log Out” button in the user menu, because users are not supposed to power the computers off. They also generate very useful stats of application usage, based on the “recently-used” XML files that GNOME creates to build the menu of frequently used applications. All computers are administered using Ansible scripts.

Default wallpaper with instructions.

The only talk I attended on Saturday was “Why and How I Switched to Flatpak for App Distribution and Development in Sandbox” by Jiří Janoušek, who develops Nuvola apps. It was an interesting talk, and due to his experience developing and distributing apps on Linux, Jiří was able to name and describe all the problems with app distribution on Linux and explain why Flatpak helps here.

On Sunday, we organized a workshop to teach people how to build flatpaks. It was the only disappointment of the weekend: only 3 people showed up, and none of them really needed to learn how to build flatpaks. We’ll have the same workshop at OpenAlt in Brno, and if the attendance is also low, we’ll know that a workshop aimed primarily at app developers is not a good fit for such conferences.
But it was not a complete waste of time: we discussed some questions around Flatpak and worked on flatpaking applications. The result is GNOME Recorder, already available on Flathub, and Datovka, which is in the review process.

The event is also a great opportunity to talk to many people from the Czech community and other global FLOSS projects. SUSE traditionally has a lot of people there, and there were also people from Xfce, FFmpeg, FreeBSD, Vim, LibreOffice…


Farewell, application menus!

Application menus – or app menus, as they are often called – are the menu that you see in the GNOME 3 top bar, with the name and icon for the current app. These menus have been with us since the beginning of the GNOME 3.0 series, but we’re planning on retiring them for the next GNOME release (version 3.32). This post is intended to provide some background on this change, as well as information on how the transition will happen.

The development version of Web, whose app menu has been removed

Background

When app menus were first introduced, they were intended to play two main roles. First, they were supposed to contain application-level menu items, like Preferences, About and Quit. Secondly, they were supposed to indicate which app was focused.

Unfortunately, we’ve seen app menus not performing well over the years, despite efforts to improve them. People don’t always engage with them. Often, they haven’t realised that the menus are interactive, or they haven’t remembered that they’re there.

My feeling is that this hasn’t been helped by the fact that we’ve had a split between app menus and the menus in application windows. With two different locations for menu items, it becomes easy to look in the wrong place, particularly when one menu is more frequently visited than the other.

One of the other issues we’ve had with application menus is that adoption by third-party applications has been limited. This has meant that they’re often empty, other than the default quit item, and people have learned to ignore them.

As a result of these issues, there’s a consensus that they should be removed.

The plan

Software, which has moved its app menu into the window
Software, which has also removed its app menu

We are planning on removing application menus from GNOME in time for the next release, version 3.32. The application menu will no longer be shown in the top bar (neither the menu or the application name and icon will be shown). Each GNOME application will move the items from its app menu to a menu inside the application window (more detailed design guidelines are on the GNOME Gitlab instance).
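
The window-menu side of the migration is small. Here is a sketch (the action names are examples) of exposing the old app-menu items from a menu button in the header bar instead:

#include <gtk/gtk.h>

static void
add_primary_menu (GtkHeaderBar *header)
{
  GMenu *menu = g_menu_new ();
  g_menu_append (menu, "Preferences", "app.preferences");
  g_menu_append (menu, "About", "app.about");
  /* Quit items are generally dropped entirely during the migration:
   * closing the window(s) is enough. */

  GtkWidget *button = gtk_menu_button_new ();
  gtk_button_set_image (GTK_BUTTON (button),
                        gtk_image_new_from_icon_name ("open-menu-symbolic",
                                                      GTK_ICON_SIZE_BUTTON));
  gtk_menu_button_set_menu_model (GTK_MENU_BUTTON (button),
                                  G_MENU_MODEL (menu));
  gtk_header_bar_pack_end (header, button);
  gtk_widget_show_all (button);
  g_object_unref (menu);
}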

If an application fails to remove its app menu by 3.32, it will be shown in the app’s header bar, using the fallback UI that is already provided by GTK. This means that there’s no danger of menu items not being accessible, if an app fails to migrate in time.

We are aiming to complete the entire migration away from app menus in time for GNOME 3.32, and to avoid being in an awkward in-between state for the next release. The new menu arrangement should feel natural to existing GNOME users, and they hopefully shouldn’t experience any difficulties.

The technical changes involved in removing app menus are quite simple, but there are a lot of apps to convert (so far we’ve fixed 11 out of 63!). Therefore, help with this initiative would be most welcome, and it’s a great opportunity for new contributors to get involved.

App menus, it was nice knowing you…

October 08, 2018

Banksy Shredder




P.S.: After some reports about male nudity, this post has been edited to remove the portrait of my back. If you have reservations about male nudity, PLEASE DON'T FOLLOW THE LINK.

P.P.S.: If you don't have a problem with male nudity, for your convenience here you'll find the Wikimedia Commons category «Nude men» of pictures.

GNOME ED Update – September

We’ve now moved my reporting to the board to a monthly basis, so this blog should get updated monthly too! So here’s what I’ve been up to in September.

Recruitment

Recruitment continues for our four positions that we announced earlier this year, but I’m pleased to say we’re in the final stages for these. For those interested, the process went a little bit like this:

  • Applicants sent in a CV and cover letter
  • If they were suitable for the position on a quick read of the CV and letter, they got a short questionnaire asking for more details, such as “What do you know about the GNOME Foundation?”
  • Those with interesting answers got sent to a first interview, which is mostly technical
  • Then, those who are still in the process are invited to a second interview, which is competency-based
  • At the end of all this, we hope to make an offer to the best candidate!

End of year

For those who don’t know, the Foundation’s financial year runs from the start of October to the end of September. This means we have quite a bit of work to do to:

  1. Finalise the accounts for last year and submit our tax returns
  2. Make a new budget for the forthcoming year

Work has already begun on this, and I hope to finalise the new budget with the board at the Foundation Hackfest being held next week.

Libre Application Summit

LAS was held in Denver, Colorado, and I attended. There were 20 talks and three BoF sessions held, as well as a number of social events. From looking around, there were probably around 60-70 people, including representatives from KDE and Elementary. It was particularly pleasing to see a number of students from the local university attend and present a lightning talk.

I also had meetings with System76 and Private Internet Access, as well as a couple of local companies.

Speaking of System76, we also had a nice tour of their new factory. I knew they were taking manufacturing in-house, but I didn’t realise the extent of this process. It’s not just assembly: they take raw sheet metal, bend it into the right shape and paint it!

My meetings with PIA were also interesting – I got to see the new VPN client that they have, which I’m assured will be free software when released. There were a couple of issues I could see about how to integrate it with GNOME, and we had a good session running through these.

Other conferences coming up

In October, I’m hoping to attend Sustain Summit 2018, in London, followed by Freenode.Live in Bristol, UK. I’ll be speaking at the latter, which is in November. Then, after a couple of days at home, GNOME is going to SeaGL! Meet me and Rosanna in Seattle at the GNOME booth!

Friends of GNOME

Another thing that happened was fixing the Friends of GNOME signup page. For some reason unknown to us, when you submitted the form to PayPal, it redirected to the home page rather than the payment page. This didn’t happen if you selected “EUR” as the currency, or if you selected “EUR” and then “USD” before submitting. After lots of head scratching (an analysis of the POST data showed that it was /identical/ in each case), I changed the POST to a GET, and it suddenly started working. Confusion all around, but it should now be working again.

Flatpak, after 1.0

Flatpak 1.0 happened a while ago, and we now have a stable base. We’re up to the 1.0.3 bug-fix release at this point, and hope to get 1.0.x adopted in all major distros before too long.

Does that mean that Flatpak is done? Far from it! We have just created a stable branch and started landing some queued-up feature work on the master branch. This includes things like:

  • Better life-cycle control with ps and kill commands
  • Logging and history
  • File copy-paste and DND
  • A better testsuite, including coverage reports

Beyond these, we have a laundry list of things we want to work on in the near future, including

  • Using host GL drivers (possibly with libcapsule)
  • Application renaming and end-of-life migration
  • A portal for dconf/gsettings (a stopgap measure until we have D-Bus container support)
  • A portal for webcam access
  • More tests!

We are also looking at improving the scalability of the flathub infrastructure. The repository has grown to more than 400 apps, and buildbot is not really meant to be used the way we use it.

What about releases?

We have not set a strict schedule, but the consensus seems to be that we are aiming for roughly quarterly releases, with more frequent devel snapshots as needed. Looking at the calendar, that would mean we should expect a stable 1.2 release around the end of the year.

Open for contribution

One of the easiest ways to help Flatpak is to get your favorite applications on flathub, either by packaging them yourself, or by convincing the upstream to do it.

If you feel like contributing to Flatpak itself, please do! Flatpak is still a young project, and there are plenty of small to medium-size features that can be added. The tests are also a nice place to stick your toe in and see if you can improve the coverage a bit and maybe find a bug or two.

Or, if that is more your thing, we have a nice design for improving the flatpak commandline user experience that is waiting to be implemented.

October 07, 2018

scikit-survival 0.6.0 released

Today, I released scikit-survival 0.6.0. This release is long overdue and adds support for NumPy 1.14 and pandas up to 0.23. In addition, the new class sksurv.util.Surv makes it easier to construct a structured array from NumPy arrays, lists, or a pandas data frame. The examples below showcase how to create a structured array for the dependent variable.

First, we can construct a structured array from a list of boolean event indicators and a list of integers for the observed time:

import numpy
import pandas
from sksurv.util import Surv
 
y = Surv.from_arrays([True, False, False, True, True], [1, 19, 11, 6, 9])

which equals

y = numpy.array([( True,  1.), (False, 19.), (False, 11.), ( True,  6.),
                 ( True,  9.)], dtype=[('event', '?'), ('time', '<f8')])

Alternatively, we can use a 0/1 valued list for the event indicator:

y = Surv.from_arrays([1, 0, 0, 1, 1], [1, 19, 11, 6, 9])

Finally, if event indicator and observed time are stored in a pandas data frame, we can just use Surv.from_dataframe and tell it what columns to use:

data = pandas.DataFrame({"some_event": [True, False, False, True, True],
                         "time_of_event": [1, 19, 11, 6, 9]})
y = Surv.from_dataframe("some_event", "time_of_event", data)

The some_event column can also be 0/1 valued, of course.

Download

You can install the latest version via Anaconda (Linux, OSX and Windows):

conda install -c sebp scikit-survival

or via pip:

pip install -U scikit-survival

October 05, 2018

GStreamer Conference 2018

This year, for the 9th time, there will be a GStreamer Conference. It will be in Edinburgh, UK, right after the Embedded Linux Conference Europe, on the 25th and 26th of October. The GStreamer Conference is always a lot of fun, with a wide variety of talks around Linux and multimedia, not all of them tied to GStreamer itself; for instance, in the past we have had a lot of talks about PulseAudio, V4L, OpenGL, Vulkan and new codecs. This year I am really looking forward to talks such as the DeepStream talk by NVidia, Bringing Deep Neural Networks to GStreamer by Pexip and D3Dx Video Game Streaming on Windows by Bebo, to mention a few.

For a variety of reasons I missed the last couple of conferences, but this year I will be back in attendance and I am really looking forward to it. In fact it will be the first GStreamer Conference I am attending that I am not the organizer for, so it will be nice to really be able to just enjoy the conference and the hallway track this time.

So if you haven’t booked yourself in already I strongly recommend going to the GStreamer Conference website and getting yourself signed up to attend.

See you all in Edinburgh!

Also looking forward to seeing everyone attending the PipeWire Hackfest happening right after the GStreamer Conference.

October 04, 2018

Announcing the first release of libxmlb

Today I did the first 0.1.0 preview release of libxmlb. We’re at the “probably API stable, but no promises” stage. This is the library I introduced a couple of weeks ago, and since then I’ve been porting both fwupd and gnome-software to use it. The former is almost complete, and nearly ready to merge, but the latter is still work in progress with a fair bit of code to write. I did manage to launch gnome-software with libxmlb yesterday, and modulo a bit of brokenness it’s both faster to start (over 800ms faster from cold boot!) and uses an amazing 90Mb less RSS at runtime. I’m planning to merge the libxmlb branch into the unstable branch of fwupd in the next few weeks, so I need volunteers to package up the new hard dep for Debian, Ubuntu and Arch.

The tarball is in the usual place – it’s a simple Meson-built library that doesn’t do anything crazy. I’ve imported and built it already for Fedora, much thanks to Kalev for the super speedy package review.

I guess I should explain how applications are expected to use this library. At its core, there are just five different kinds of objects you need to care about:

  • XbSilo – a deduplicated string pool and a read-only node tree. This is typically kept alive for the application lifetime.
  • XbNode – a GObject-wrapped immutable node available as a query result from XbSilo.
  • XbBuilder – a “compiler” that builds the XbSilo from XbBuilderNodes or XbBuilderSources. This is typically created and destroyed at startup or when the blob needs regenerating.
  • XbBuilderNode – a mutable node that can have a parent, children, attributes and a value.
  • XbBuilderSource – a source of data for XbBuilder, e.g. a .xml.gz file or just a raw XML string.

The way most applications will use libxmlb is to create a local XbBuilder instance, add some XbBuilderSources and then “ensure” a local cache file. The “ensure” process either loads the binary blob with mmap() if all the file mtimes are unchanged, or compiles a new blob and saves it to a file. You can also tell the XbSilo to watch all the sources that it was built with, so that if any files change at runtime the valid property gets set to FALSE and the application can call xb_builder_ensure() again at a convenient time.

Once the XbBuilder has been compiled, an XbSilo pops out. With the XbSilo you can query using most common XPath statements – I actually ended up implementing a FORTH-style stack interpreter so we can now do queries like /components/component/id[contains(upper-case(text()),'GIMP')] – I’ll say a bit more on that in a minute. Queries can limit the number of results (for speed), and are deduplicated in a sane way, so it’s really quite simple to achieve something that would otherwise be a lot of C code. It’s possible to directly query an attribute or text value from a node, so the silo doesn’t have to be passed around either.
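To make that flow concrete, here is a minimal sketch of the ensure-then-query pattern; the file names are invented, error handling is kept minimal, and the exact function signatures may differ slightly in this preview release, so check against the installed headers:

#include <gio/gio.h>
#include <xmlb.h>

int
main (void)
{
	g_autoptr(GError) error = NULL;
	g_autoptr(GFile) xml = g_file_new_for_path ("components.xml.gz");
	g_autoptr(GFile) blob = g_file_new_for_path ("components.xmlb");
	g_autoptr(XbBuilder) builder = xb_builder_new ();
	g_autoptr(XbBuilderSource) source = xb_builder_source_new ();
	g_autoptr(XbSilo) silo = NULL;
	g_autoptr(GPtrArray) results = NULL;

	/* add a source of XML; .xml.gz is decompressed transparently */
	if (!xb_builder_source_load_file (source, xml,
					  XB_BUILDER_SOURCE_FLAG_NONE,
					  NULL, &error)) {
		g_printerr ("load failed: %s\n", error->message);
		return 1;
	}
	xb_builder_import_source (builder, source);

	/* mmap the cached blob if the mtimes still match, else recompile */
	silo = xb_builder_ensure (builder, blob,
				  XB_BUILDER_COMPILE_FLAG_NONE,
				  NULL, &error);
	if (silo == NULL) {
		g_printerr ("ensure failed: %s\n", error->message);
		return 1;
	}

	/* query with XPath; results are deduplicated XbNodes */
	results = xb_silo_query (silo,
				 "components/component/id[contains(upper-case(text()),'GIMP')]",
				 0, &error);
	if (results == NULL) {
		g_printerr ("query failed: %s\n", error->message);
		return 1;
	}
	g_print ("first match: %s\n",
		 xb_node_get_text (g_ptr_array_index (results, 0)));
	return 0;
}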

In the process of porting gnome-software, I had to make libxmlb thread-safe – which required some internal reorganisation. We now have a non-exported XbMachine stack interpreter, and the XbSilo registers the XML-specific methods (like contains()) and functions (like ~=) on top of it. These get passed some per-method user data, and also some per-query private data that is shared with the node tree – allowing things like [last()] and position()=3 to work. The function callbacks just get passed a query-specific stack, which means you can allow things like comparing “1” to 1.00f. This makes it easy to support more of XPath in the future, or to support something completely application-specific like gnome-software-search() without editing the library.

If anyone wants to do any API or code review I’d be super happy to answer any questions. Coverity and valgrind seem happy enough with all the self tests, but that’s no replacement for a human asking tricky questions. Thanks!

Games 3.30: Features Overload

With a new version of GNOME always comes a new version of Games, and this new version comes packed with new features, bug fixes and developer experience improvements.

Platforms View and Developers View

As part of his GSoC project, Saurabh implemented two new views of your games collection: one filtering games by their developers and another one filtering them by their platforms.

To know more, read Saurabh's Segregating views and Description view articles on his blog.

To implement this he needed to work a lot on the Grilo front; check out his explanations in his Adding self registering keys to lua-factory article.

He also started to work on a new page displaying many details about a game, like the number of players and a description. Unfortunately it was not ready in time for this release, but it will hopefully land in 3.32.

Gamepad Navigation

You can now navigate the UI with your gamepads! Select your collection view with the shoulder buttons, browse your games with the analog stick or the directional pad, enter the game with your A or cross button, and go back to your game collection with the home button.

This makes Games even more suitable for a gaming console setup like an HTPC or a handheld like the GPD Win, and even more comfortable to use on your desktop or laptop.

libmanette Event Deferring

Note: this is a small technical aside; if you are only interested in what is new in Games, you can skip it.

I implemented a prototype of gamepad navigation during the 3.28 development cycle, but it had not been merged before because of a strange bug: when entering a game with the gamepad, it was not possible to choose whether to resume it with the gamepad, yet when entering the game with the pointer or the keyboard, the gamepad worked fine for making that choice.

Here is what happened:

  • libmanette was looking for input on the gamepad's event file via g_io_add_watch(), polling the events, parsing them and sending the corresponding signals,
  • Games was listening to these signals, launching the game and triggering the resume dialog, which then waited for a response from the user.

The execution path from the input channel watch source's callback to the dialog waiting for a response was direct: we never finished polling the ongoing events, we never returned from the source's callback, so the source was never queued back into the main loop and we couldn't receive any more events from that gamepad!

The solution I chose is to queue a source on the main loop for each gamepad event, so we are sure we have finished polling them all before they are handled, and if handling one event blocks, it won't block the delivery of the other events!
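Roughly, the fix looks like the following GLib sketch; the DeferredEvent type and the "button-pressed" signal here are hypothetical stand-ins, not the actual libmanette internals:

#include <glib.h>
#include <glib-object.h>

/* Hypothetical stand-ins for the real libmanette internals. */
typedef struct {
	GObject *device; /* the gamepad that produced the event */
	guint    button; /* the parsed event payload */
} DeferredEvent;

static gboolean
deferred_event_cb (gpointer user_data)
{
	DeferredEvent *deferred = user_data;

	/* By the time this idle callback runs, the io-watch callback has
	 * returned, so the watch source is back in the main loop and polling
	 * continues even if this emission blocks on a modal dialog. */
	g_signal_emit_by_name (deferred->device, "button-pressed",
	                       deferred->button);

	g_object_unref (deferred->device);
	g_free (deferred);
	return G_SOURCE_REMOVE; /* one-shot source */
}

/* Called from the g_io_add_watch() callback for each polled event:
 * queue an idle source instead of emitting the signal right away. */
static void
defer_event (GObject *device, guint button)
{
	DeferredEvent *deferred = g_new0 (DeferredEvent, 1);
	deferred->device = g_object_ref (device);
	deferred->button = button;
	g_idle_add (deferred_event_cb, deferred);
}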

Keyboard-as-Gamepad Mappings

Abhinav implemented a way to set which keyboard keys are mapped to the mock gamepad used by retro games. Thanks to him you won't have to guess which keys we chose by default: just set yours!

He started working on it during his GSoC 2017 but I have been pretty bad at reviewing it, so it waited on Bugzilla and then GitLab for more than a full development cycle, woops. 😶

Polishing the UI

Alexander brought a few minor corrections to the UI, like fixing the size of the icons used in the media selector button (it was too large) or dropping a box shadow surrounding the collection view.

I took care of migrating the app menu into the window and of adding a help button and a shortcuts window. I also made the thumbnails shrink to a smaller size if the window is narrow, so you can have more than a column or two of games visible in these extreme cases.

Running from Flatpak

Running native games from a flatpaked version of Games finally works, thanks to Alexander! That means you can now run desktop games, Steam games and LÖVE games from the version of Games we recommend you use. Unfortunately, Games still can't list desktop games from your system when flatpaked, but it can list the ones in your home directory, like your GOG games, so you can run those.

New Platforms

Alexander implemented support for the Flathub version of Steam: its games can now be listed and run alongside the ones from the native installation! Also, Steam games using the new Proton Windows compatibility layer work out of the box.

He also worked a lot on adding Nintendo DS and Virtual Boy support, as well as making sure Libretro cores for these new platforms work well in Games. Don't expect special features like making one screen of the Nintendo DS fullscreen or selecting the anaglyph mode of the Virtual Boy yet, but at least the games are playable and you can use your mouse pointer to interact with the Nintendo DS touch screen!

On my side, I found a free software Libretro core that works well and supports Game Gear and Master System games, which allows you to finally play games for these platforms.

Tests, Options and CI

In a previous article I told you how having tests and CI would allow us to have more features. Well, we now have a new feature thanks to that. 😁

We now run reftests on many of the Libretro cores we ship; we are not yet testing everything we would like to, but it's already way better than no tests at all! As part of our reftests, we also check that the options the Libretro cores offer are as expected.

Options are a way for a Libretro core to offer a custom (and usually domain-specific) API, an option (or variable as it is called in Libretro) being a key which can take only one value from a specific set of values. Given that options are "runtime" APIs and given that we don't control the cores offering them, we couldn't trust them to be stable and hence we couldn't use them.

Thanks to the reftests and the CI, we can be warned of any change to these runtime APIs and react to it, so their not being reliable isn't a problem anymore and we can finally use them! I implemented options accessors in retro-gtk and Alexander used them in Games to set a core's options at boot time. Distributors can change these options easily, as they are .options keyfiles installed in /usr/share/gnome-games/options.

This combination of tests and CI directly led to new features, as we are now able to ask the Nintendo DS Libretro core DeSmuME to use the pointer instead of an emulated mouse, making it usable and shippable with our stable Flatpak!

A big thanks to Jordan Petridis for helping with the GitLab CI for Games and retro-gtk! 😄

Meson

Thanks to Alexander, Games now uses Meson instead of the Autotools, making it simpler to maintain and faster to build. My coffee breaks are ruined!

Lesser retro-gtk Improvements

Aside from the aforementioned focus on testing, enabling access to a core's options, enabling configuration of the core view's mock gamepad and the various bug fixes, retro-gtk received some smaller but notable improvements: retro-gtk now handles the RETRO_ENVIRONMENT_GET_LANGUAGE and RETRO_ENVIRONMENT_SET_GEOMETRY Libretro environment commands, allowing the cores respectively to know the current language and to notify changes to their video output's geometry.

Leak fixes

Some reference leaks in retro-gtk and Games were caused by dependency cycles between a core and its view, preventing them from being freed. The links are now unset before the references to the objects are released. Similarly, there was a reference-handling bug when a main loop was dropped while still running; this is now fixed.

Thank You

I would like to give a huge thank you to the many contributors to this version:

  • Mathieu Bridon for keeping an eye on our Flatpak builds and for helping to fix them for ARM,
  • Jordan Petridis for helping a lot with our CI infrastructure,
  • Saurabh Singh for doing great on his GSoC project,
  • Abhinav Singh for mentoring Saurabh and for his many contributions,
  • and Alexander Mikhaylenko for basically taking over my role as maintainer during this whole development cycle, ensuring the application stayed in good shape, fixing bugs tiny or big, reviewing merge requests… the list is too long!

Stepping Down

I have been working on Games, its companion libraries and their prototypes since some time in 2013, and Laurent Pointecouteau and I were discussing and designing the application a year prior to that. It has been many fun years but I am now a bit tired and need to move on to other projects, so as Alexander is stepping up as a new maintainer of the project I am stepping down: I am no longer maintaining Games.

Abhinav and Alexander have proven they can make the project shine better than I can, so I am sure they are going to do great things with it! Anyway, I am not disappearing as I am still maintaining the two libraries that spun out of Games: retro-gtk and libmanette.

October 03, 2018

Treading New Waters

Water, molluscs, oh what a thrill!

Today, we are not talking about Nautilus, but rather it’s just me bragging!

After a month of waiting, talking, waiting, writing and waiting some more, I’m officially a month away from starting at Red Hat (and moving to Brno, so some of you I’ll get to meet as well), working on all things ABRT. Since my GNOME work really helped me sell myself, maybe I’ll manage to help bridge whatever gap there exists between the two.

As far as Nautilus goes, this will probably not change a whole lot, as Carlos still holds the mollusc-wrangling-champion belt. Well, the feature-removal cabal meetings might take a new form. :p

That’s it for now, take care!

October 02, 2018

Talking at OSDNConf in Kyiv, Ukraine

I was fortunate enough to be invited to Kyiv to keynote (video) the local Open Source Developer Network conference. Actually, I had two presentations. The opening keynote was on building a more secure operating system with fewer active security measures. I presented a few case studies why I believe that GNOME is well positioned to deliver a nice and secure user experience. The second talk was on PrivacyScore and how I believe that it makes the world a little bit better by making security and privacy properties of Web sites transparent.

The audience was super engaged which made it very nice to be on stage. The questions, also in the hallway track, were surprisingly technical. In fact, most of the conference was around Kernel stuff. At least in the English speaking track. There is certainly a lot of potential for Free Software communities. I hope we can recruit these excellent people for writing Free Software.

Lennart eventually talked about CAsync and how you can use that to ship your images. I’m especially interested in the cryptography involved to defend against certain attacks. We also talked about how to protect the integrity of the files on the offline disk, e.g. when your machine is off and someone can access the (encrypted) drive. Currently, LUKS does not use authenticated encryption, which makes it possible for an attacker to flip some bits in the disk image you read.

Canonical’s Christian Brauner talked about mounting in user namespaces which, historically, seems to have been a contentious topic. I found that interesting, because I think we currently have a problem: filesystem drivers are not meant for dealing with maliciously crafted images. Let that sink in for a moment. Your kernel cannot deal with arbitrary data on the pen drive you’ve found on the street and are now inserting into your system. So yeah, I think we should work on allowing the insertion of random images without having to risk a crash of the system. One approach might be libguestfs, but launching a full VM every time might be a bit too much. Also, you might somehow want to promote certain drives as trusted enough to get the benefit of higher bandwidth and lower latency. So yeah, so much work left to be done. Oof.

Then, Tycho Andersen talked about forwarding syscalls to userspace. Pretty exciting and potentially related to the disk image problem mentioned above. His opening example was the loading of a kernel module from within a container. This is scary, of course, and you shouldn’t be able to do it. But you may very well want that if you have to deal with (proprietary) legacy code like Cisco, his employer, does. Eventually, they provide a special seccomp filter which forwards all the syscall details back to userspace.

As I’ve already mentioned, the conference was highly technical and kernel focussed. That’s very good, because I could have enlightening discussions which hopefully move me forward in solving a few of my problems. One of those, involving the capabilities of USB keyboards, I was able to discuss with Jakob in the days around the conference. Eventually, you wouldn’t want your machine to be hijacked by a malicious security device like the Yubikey. I have an idea there involving modifying the USB descriptor to remove the capability of sending funny keys. Stay tuned.

Anyway, we’ve visited the city and the country before and after the event and it’s certainly worth a visit. I was especially surprised by the coffee that was readily available in high quality and large quantities.

Builder 3.30

Now that org.gnome.Sdk//3.30 has gone gold, you can get Builder from Flathub.

I expect those that have not been using Nightly for the past few months will be pleasantly surprised at the improvements.

October 01, 2018

GNOME Internships are getting closer

Almost there…

GNOME Internships are getting closer; if you are a GNOME contributor with some experience, this internship is a good fit for you. The official application deadline is the 1st of October, but we are open to receiving more applications if the applicant is a good fit.

How about applying? :)

What’s this about?

GNOME Internships is a program that channels the donations we get from campaigns into projects aimed at improving the topic of the campaign. This internship is not limited to students or any other level of expertise; quite the opposite, some experience and/or already being part of GNOME is a must. Making sure these projects get finished is very important to us, and to achieve that the GNOME Foundation is providing a more than nice stipend!

This round of internships is aimed at improving privacy when using GNOME. We have selected a few projects that are key to providing an excellent privacy experience with GNOME software. We will use the funds we raised as part of the privacy campaign, and we are excited to put them to good use for all of our donors.

Requirements & more info

Ideally you would be a GNOME contributor with some work or other FOSS experience. If you don't have work or other FOSS experience but are a regular GNOME contributor, this is also a fit for you. Likewise, if you have work or other FOSS experience but don't have as many GNOME contributions, you can try out too after making some contributions to GNOME.

If you are not sure whether you fulfill the requirements, don't hesitate to ask me or send an email to internships-admin@gnome.org. Above all, don't let impostor syndrome prevent you from asking or applying!

Go to the wiki page to read more info and follow the steps to apply.

Cheers 🐩

Moving from Wordpress

Wordpress


I have been using Wordpress for a long time, basically since I joined GNOME, because it was required for GSoC. Wordpress works, it’s okay. There are themes, it has a WYSIWYG editor and you can embed images and videos quite easily. It kinda does the job…

The bad

Now, one of the biggest problems is ads. Not that they exist, which is completely understandable, but rather that they are kinda crazy. I got reports that sometimes they were about guns in the USA, or that they led to scam sites.

I also missed a way to link to my personal Twitter/LinkedIn/GitLab. That’s quite useful for people to check what other things I write about and to get more info about what I do.

The ugly

More on the technical side, one of the issues was that I couldn’t create a draft post and share it in private with other people to get some review. This was especially needed for Nautilus announcements. And even if I could, how would we collaborate on editing the post itself? That was simply not there.

Most importantly, I couldn’t give personality to the blog. In the same way I want the software I use to be neutral, because for me it’s just a tool, my personal blog should be more representative of how I am and how I express myself. With Wordpress this was not possible. Hell, I couldn’t even put some colors here and there, or take the bloat out of some of the widgets provided by the Wordpress themes.

The enlightenment…


Once upon a time, on a stormy day, I wrote the blog post about the removal and future plans of the desktop icons as a guest writer on Didier Roche’s blog. And here came the enlightenment: he was using some magic PR workflow on GitHub where I could just fix stuff, request to merge those changes, have them reviewed and accepted, and then everything got published automatically with CI.

Finally you could review, share drafts, and collaborate much more easily than with Wordpress!

Not only that, but also his blog was much more personalized, closer to how he wanted to express himself.

The decision

One thing I have had in the back of my mind for some time is that I need to improve my skills with non-GNOME stuff, especially web and cloud technologies. So what better opportunity than trying to set up a static web generator for blogs (and other stuff) to mimic the features of Didier’s blog?

And so it was decided: I got some free time this weekend and set out to learn this magic stuff.

The technology

I decided to go with Hugo because, back when I was experimenting with GitLab Pages and a project page for Nautilus, Hugo seemed to be the easiest and most convenient tech to create a static website that just works. Overall, it seems that Hugo is the most used static website generator for blogs.

I also decided that I would get a theme based on the well-known and well-maintained Bootstrap. Most themes had some custom CSS, all delicately and manually crafted. But let’s be honest, I didn’t want to maintain that; I wanted something simple I could go with.

So I chose the minimal theme, which is based on Bootstrap, and then applied my own changes such as a second accent (the red in the titles), support for centered images, the Ubuntu font (which I use everywhere I can), some navbar changes, full content RSS support, lots of spacing adjustments, softer main text coloring, etc.

Also, I could finally put in a decent comment section by using Disqus, which is added to Hugo with a single line. Eventually I would like to go with a free software solution such as Talkyard, but so far I haven’t had luck making it work.

The nicest thing about Hugo is that adding a new post is a matter of dropping a Markdown file in the post folder. And that’s it. That easy.
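For example, assuming the default Hugo content layout, scaffolding a new post with its front matter pre-filled is a single command:

hugo new post/my-new-post.md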

The result


So here’s the result. I think it’s quite an improvement over what I had in Wordpress, although Antonio says the page looks like Nautilus… I guess I cannot help myself. Let’s see how it works out in the future, especially since there is no WYSIWYG editor. To be honest, using Markdown is so easy that I don’t see that as a problem so far.

I can even embed some code with highlighting for free:

# Snippet from a Gtk.Widget subclass; imports added for context.
from typing import Tuple

from gi.repository import Gtk

def do_measure(self, orientation: Gtk.Orientation,
               for_size: int) -> Tuple[int, int, int, int]:
    child = self.get_child()
    if not child:
        return -1, -1, -1, -1

    minimum, natural, _x, _x = child.measure(orientation, for_size)
    if self._max_width is not None and orientation == Gtk.Orientation.HORIZONTAL:
        natural = min(self._max_width, natural)

    return minimum, natural, -1, -1

And I can use Builder with Vim emulation to write this. That is already a big win!

If you want to do something similar, feel free to take a look at the code in GNOME’s GitLab and use anything from it. I also added an example post in order to show all the formats for headings, bullet lists, images, code blocks, etc. Any feedback about the looks, functionality, content, etc. is welcome; finally I would be able to do something about it 😏.

Hasta otra amigos 🍹

Announcing flickerfree boot for Fedora 29

A big project I've been working on recently for Fedora Workstation is what we call flickerfree boot. The idea here is that the firmware lights up the display in its native mode and no further modesets are done after that. Likewise there are also no unnecessary jarring graphical transitions.

Basically the machine boots up in UEFI mode, shows its vendor logo and then the screen keeps showing the vendor logo all the way to a smooth fade into the gdm screen. Here is a video of my main workstation booting this way.

Part of this effort is the hidden grub menu change for Fedora 29. I'm happy to announce that most of the other flickerfree changes have also landed for Fedora 29:

  1. There have been changes to shim and grub to not mess with the EFI framebuffer, leaving the vendor logo intact, when they don't have anything to display (so when grub is hidden)

  2. There have been changes to the kernel to properly inherit the EFI framebuffer when using Intel integrated graphics, and to delay switching the display to the framebuffer-console until the first kernel message is printed. Together with changes to make "quiet" really quiet (except for oopses/panics) this means that the kernel now also leaves the EFI framebuffer with the logo intact if quiet is used.

  3. There have been changes to plymouth to allow pressing ESC as soon as plymouth loads to get detailed boot messages.

With all these changes in place it is possible to get a fully flickerfree boot today, as the video of my workstation shows. This video is made with a stock Fedora 29 with 2 small kernel commandline tweaks:

  1. Add "i915.fastboot=1" to the kernel commandline, this removes the first and last modeset during the boot when using the i915 driver.

  2. Add "plymouth.splash-delay=20" to the kernel commandline. Normally plymouth waits 5 seconds before showing the charging Fedora logo so that on systems which boot in less then 5 seconds the system simply immediately transitions to gdm. On systems which take slightly longer to boot this makes the charging Fedora logo show up, which IMHO makes the boot less fluid. This option increases the time plymouth waits with showing the splash to 20 seconds.

So if you have a machine with Intel integrated graphics and booting in UEFI mode, you can give flickerfree boot support a spin with Fedora 29 by just adding these 2 commandline options. Note this requires the new grub hidden menu feature to be enabled, see the FAQ on this.
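One way to add both options permanently (using grubby, which stock Fedora ships) is:

sudo grubby --update-kernel=ALL --args="i915.fastboot=1 plymouth.splash-delay=20"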

The need for these 2 commandline options shows that the work on this is not yet entirely complete, here is my current TODO list for finishing this feature:

  1. Work with the upstream i915 driver devs to make i915.fastboot the default. If you try i915.fastboot=1 and it causes problems for you please let me know.

  2. Write a new plymouth theme based on the spinner theme which uses the vendor logo as background and draws the spinner beneath it. Since this keeps the logo and black background as-is and just draws the spinner on top, this avoids the current visually jarring transition from logo screen to plymouth, allowing us to set plymouth.splash-delay to 0. This also has the advantage that the spinner will provide visual feedback that something is actually happening as soon as plymouth loads.

  3. Look into making this work with AMD and NVIDIA graphics.

Please give the new flickerfree support a spin and let me know if you have any issues with it.