GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

January 23, 2017

GInterface and GXml

I love GInterface definitions, especially in Vala, because they are a clean and easy way to describe an API. Interfaces are the way the W3C defines its specifications, like SVG and DOM4.

Vala interface definitions are really close to being a copy and paste of the W3C's specification definitions. With a bit of work, you can transform them into usable GObject interface definitions.

GXml takes the DOM4 interfaces and implements them using a set of instantiable classes.

GXml provides a GObject-to-XML (and back) serialization framework, allowing you to define your own classes and how you want your data to be represented in XML.

In order to read your information back, GXml needs to create instances of your classes on the fly; this means they have to be GObject classes, not GInterfaces.

When I started to implement XSD support in GXml, I created a set of interfaces to interpret the W3C specification. This is really helpful, but unusable when you need to instantiate an object that is declared as an interface. For example, if you have an interface A with a property of type B, and B is itself a GInterface, you can implement A but you can't get an instantiable object for B, at least not using g_object_new().
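
To make the problem concrete, here is a minimal GObject sketch in C (hypothetical MyB names, not GXml's actual API): the interface type itself cannot be handed to g_object_new(), only a concrete class implementing it can.

#include <glib-object.h>

/* Interface B: it only describes an API, it is not instantiable. */
#define MY_TYPE_B my_b_get_type ()
G_DECLARE_INTERFACE (MyB, my_b, MY, B, GObject)

struct _MyBInterface {
  GTypeInterface parent_iface;
};

G_DEFINE_INTERFACE (MyB, my_b, G_TYPE_OBJECT)

static void
my_b_default_init (MyBInterface *iface)
{
  /* An interface property of another interface type would be declared
   * here; any implementor still has to be a concrete class. */
}

/* A concrete class implementing B: this one *is* instantiable. */
#define MY_TYPE_B_IMPL my_b_impl_get_type ()
G_DECLARE_FINAL_TYPE (MyBImpl, my_b_impl, MY, B_IMPL, GObject)

struct _MyBImpl {
  GObject parent_instance;
};

static void my_b_iface_init (MyBInterface *iface) { }

G_DEFINE_TYPE_WITH_CODE (MyBImpl, my_b_impl, G_TYPE_OBJECT,
                         G_IMPLEMENT_INTERFACE (MY_TYPE_B, my_b_iface_init))

static void my_b_impl_class_init (MyBImplClass *klass) { }
static void my_b_impl_init (MyBImpl *self) { }

int
main (void)
{
  /* Works: the implementing class is instantiable. */
  MyB *b = MY_B (g_object_new (MY_TYPE_B_IMPL, NULL));

  /* g_object_new (MY_TYPE_B, NULL) would fail at runtime:
   * MY_TYPE_B is an interface, not an instantiable type. */

  g_object_unref (b);
  return 0;
}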

Because the GXml engine requires instantiable objects (it creates new element nodes as they are found, using a GObject type to map attributes to properties, for example), I ended up creating a set of interfaces that help me design a clean API and leave room for other implementation engines, plus a set of new classes that implement the XSD interfaces with their own “mirror” properties.

That is, while GXml.XsdSchema has a GXml.XsdListSimpleTypes property to access all simple type definitions, GXml.GomXsdSchema will have two properties with the same purpose: a GXml.GomXsdListSimpleTypes property AND a property of type GXml.XsdListSimpleTypes to fully implement GXml.XsdSchema. The second one mirrors the first. This is more work when implementing an interface, but it keeps your classes' properties instantiable, and your users can choose to use just the GXml.XsdSchema interface API to access your class implementation, which keeps the door open to different implementations.

Best of all, with GXml's implementation of XML to GObject, more precisely the GXml.Gom* classes, you have access to *all* nodes, attributes and children found in an XML file, without losing any of them in the process of de-serializing back to your class instance.

GXml and XSD

While on the road to releasing GXml 0.14, I started to port some of my projects to the new GXml.Gom* objects, in order to take advantage of their speed and reduced memory footprint.

In the process, one of my libraries needs to define a large set of strings to choose from for an element's attribute; in this case, postal codes. They are defined as an XSD enumeration in a SimpleType.
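
For reference, such a definition in an XSD file looks roughly like this (a hand-written fragment for illustration, not the actual schema):

<xs:simpleType name="PostalCodeType">
  <xs:restriction base="xs:string">
    <xs:enumeration value="01000"/>
    <xs:enumeration value="01010"/>
    <xs:enumeration value="01020"/>
    <!-- ...hundreds more values... -->
  </xs:restriction>
</xs:simpleType>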

At the beginning, I started to define an array to add to a GXml.GomArrayString [1], but there are too many values to maintain by hand, which is error prone.

Then I decided to take a look at the W3C Schema specification and started to define new GXml.Xsd* interfaces and new GXml.GomXsd* classes to implement them [2]. The result is GXml.GomXsdArrayString [1], a new class that takes an XSD file, searches for a SimpleType definition and parses all of its enumerations, adding them to an array of strings you can choose from or validate your property value against.

These are the first steps on the way to XSD support in GXml. The GXml.Xsd* interfaces are unstable, and will remain so after the 0.14 release, but they open new opportunities for anyone consuming XSD definitions.

Maybe in the future someone can use this API to generate Vala (or C) GObject classes from XSD.

Maybe others want to help by adding more object definitions from the XSD specification, in order to get patterns and other restrictions from schema definitions and improve data handling and validation.

[1] GXml.GomCollections definitions

[2] GXml.Schema definitions and GXml.GomXsd implementations

 

This week in GTK+ – 32

In this last week, the master branch of GTK+ has seen 106 commits, with 7340 lines added and 12138 lines removed.

Planning and status
  • Matthias Clasen released GTK+ 3.89.3
  • The GTK+ road map is available on the wiki.
Notable changes

On the master branch:

  • Benjamin Otte simplified the clipping shaders for the Vulkan renderers
  • Benjamin also removed the “assume numbers without dimensions are pixels” fallback code from the CSS parser
  • Daniel Boles landed various fixes to the GtkMenu, GtkComboBox and GtkScale widgets
  • Daniel also simplified the internals of GtkComboBox and moved most of its internal widgets to GtkBuilder UI files
  • Matthias Clasen removed command line argument handling from the GTK+ initialization functions; gtk_init() now takes no arguments. Additionally, gdk_init() has been removed, as GDK ceased to be a separate shared library. The recommended way to write GTK+ applications remains using GtkApplication, which handles library initialization and the main loop (see the minimal sketch after this list)
  • Timm Bäder merged his branch that makes GtkWidget visible by default, except for the GtkWindow and GtkPopover classes; Timm also removed gtk_widget_show_all() from the API, as it’s not useful any more
  • Timm modified GtkShortcutsShortcut, GtkFileChooserButton, and GtkFontButton to inherit directly from GtkWidget, taking advantage of the new scene graph API inside the base GtkWidget class
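
A minimal GtkApplication skeleton, for reference (generic example code, not taken from GTK+ itself), looks like this:

#include <gtk/gtk.h>

static void
activate (GtkApplication *app, gpointer user_data)
{
  GtkWidget *window = gtk_application_window_new (app);
  gtk_window_set_title (GTK_WINDOW (window), "Hello");
  gtk_widget_show (window);
}

int
main (int argc, char *argv[])
{
  /* No gtk_init()/gdk_init() calls needed: GtkApplication handles
   * library initialization and runs the main loop. */
  GtkApplication *app = gtk_application_new ("org.example.Hello",
                                             G_APPLICATION_FLAGS_NONE);
  g_signal_connect (app, "activate", G_CALLBACK (activate), NULL);
  int status = g_application_run (G_APPLICATION (app), argc, argv);
  g_object_unref (app);
  return status;
}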

On the gtk-3-22 stable branch:

  • Ruslan Izhbulatov fixed the Windows backend for GDK to ensure that it works with remote displays
Bugs fixed
  • 777527 GDK W32: Invisible drop-down menus in GTK apps when working via RDP
  • 770112 The documented <alt>left shortcut doesn’t work on Wayland
  • 776225 [wayland] dropdown placed somewhere in the screen
  • 777363 [PATCH] wayland: avoid an unnecessary g_list_length call
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

Release of the pilot episode of an old project: “Ouhlala”

2017 starts well for ZeMarmot, with many new contributors and joy of life!

We recently found a former project, lost on old hard drives, dating from either the end of 2014 or early 2015 (before ZeMarmot), when we were still searching for a fun project to keep us busy. As you know, we didn't choose this project, called “Ouhlala”, which explains why this small 30-second episode was forgotten somewhere on a hard drive. A little sad; therefore we are releasing it now.
Obviously this series has been on indefinite standby since we now focus on ZeMarmot, and this is the first time we publicly release this episode (it was only shown once during a very small talk, 2 years ago)!

The early concept of the series was to illustrate various idioms from all over the world with short animations (not necessarily in an intellectual way, more with funny views). The pilot episode focused on the French idiom “Jamais 2 sans 3” (roughly: things always come in threes).

License of the movie: Creative Commons by-SA 4.0 international
As usual, all is drawn with GIMP, sound is recorded or edited in Ardour, except for a few CC 0 sounds found on the awesome freesound.org.

Have a fun viewing, everyone!


Reminder: if you like what we do, you can fund our current project, ZeMarmot at Patreon (USD) or Tipeee (EUR).

Android permissions and hypocrisy

I wrote a piece a few days ago about how the Meitu app asked for a bunch of permissions in ways that might concern people, but which were not actually any worse than many other apps. The fact that Android makes it so easy for apps to obtain data that's personally identifiable is of concern, but in the absence of another stable device identifier this is the sort of thing that capitalism is inherently going to end up making use of. Fundamentally, this is Google's problem to fix.

Around the same time, Kaspersky, the Russian anti-virus company, wrote a blog post that warned people about this specific app. It was framed somewhat misleadingly - "reading, deleting and modifying the data in your phone's memory" would probably be interpreted by most people as something other than "the ability to modify data on your phone's external storage", although it ends with some reasonable advice that users should ask why an app requires some permissions.

So, to that end, here are the permissions that Kaspersky request on Android:
  • android.permission.READ_CONTACTS
  • android.permission.WRITE_CONTACTS
  • android.permission.READ_SMS
  • android.permission.WRITE_SMS
  • android.permission.READ_PHONE_STATE
  • android.permission.CALL_PHONE
  • android.permission.SEND_SMS
  • android.permission.RECEIVE_SMS
  • android.permission.RECEIVE_BOOT_COMPLETED
  • android.permission.WAKE_LOCK
  • android.permission.WRITE_EXTERNAL_STORAGE
  • android.permission.SUBSCRIBED_FEEDS_READ
  • android.permission.READ_SYNC_SETTINGS
  • android.permission.WRITE_SYNC_SETTINGS
  • android.permission.WRITE_SETTINGS
  • android.permission.INTERNET
  • android.permission.ACCESS_COARSE_LOCATION
  • android.permission.ACCESS_FINE_LOCATION
  • android.permission.READ_CALL_LOG
  • android.permission.WRITE_CALL_LOG
  • android.permission.RECORD_AUDIO
  • android.permission.SET_PREFERRED_APPLICATIONS
  • android.permission.WRITE_APN_SETTINGS
  • android.permission.READ_CALENDAR
  • android.permission.WRITE_CALENDAR
  • android.permission.KILL_BACKGROUND_PROCESSES
  • android.permission.RESTART_PACKAGES
  • android.permission.MANAGE_ACCOUNTS
  • android.permission.GET_ACCOUNTS
  • android.permission.MODIFY_PHONE_STATE
  • android.permission.CHANGE_NETWORK_STATE
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_LOCATION_EXTRA_COMMANDS
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.CHANGE_WIFI_STATE
  • android.permission.VIBRATE
  • android.permission.READ_LOGS
  • android.permission.GET_TASKS
  • android.permission.EXPAND_STATUS_BAR
  • com.android.browser.permission.READ_HISTORY_BOOKMARKS
  • com.android.browser.permission.WRITE_HISTORY_BOOKMARKS
  • android.permission.CAMERA
  • com.android.vending.BILLING
  • android.permission.SYSTEM_ALERT_WINDOW
  • android.permission.BATTERY_STATS
  • android.permission.MODIFY_AUDIO_SETTINGS
  • com.kms.free.permission.C2D_MESSAGE
  • com.google.android.c2dm.permission.RECEIVE

Every single permission that Kaspersky mention Meitu having? They require it as well. And a lot more. Why does Kaspersky want the ability to record audio? Why does it want to be able to send SMSes? Why does it want to read my contacts? Why does it need my fine-grained location? Why is it able to modify my settings?

There's no reason to assume that they're being malicious here. The reason that these permissions exist at all is that there are legitimate reasons to use them, and Kaspersky may well have good reason to request them. But they don't explain that, and they do literally everything that their blog post criticises (including explicitly requesting the phone's IMEI). Why should we trust a Russian company more than a Chinese one?

The moral here isn't that Kaspersky are evil or that Meitu are virtuous. It's that talking about application permissions is difficult and we don't have the language to explain to users what our apps are doing and why they're doing it, and Google are still falling far short of where they should be in terms of making this transparent to users. But the other moral is that you shouldn't complain about the permissions an app requires when you're asking for even more of them because it just makes you look stupid and bad at your job.

comment count unavailable comments

Wikimedia in Google Code-in 2016

(Google Code-in and the Google Code-in logo are trademarks of Google Inc.)

Google Code-in 2016 has come to an end. Wikimedia was one of the 17 organizations who took part to offer mentors and tasks to 14-17 year old students exploring free and open source software projects via small tasks.

Congratulations to our 192 students and 46 mentors for fixing 424 tasks together!

Being one of the organization admins, deciding on your top five students at the end of the contest always takes time and discussions as many students have provided impressive work and it hurts to have to put a great contributor on the 6th or 7th place.
Google will announce the Grand Prize winners and finalists on January 30th.

Reading the final feedback of students always reassures me that all the effort mentors and organization admins put into GCI is worth it:

  • In 1.5 month, I learned more than in 1.5 year. — Filip
  • I know these things will be there forever and it’s a big thing for me to have my name on such a project as MediaWiki. — Victor
  • What makes kids like me continue a work is appreciation and what the community did is give them a lot. — Subin
  • I spent my best time of my life during the contest — David

Read blogposts by GCI students about their experience with Wikimedia.

To list some of the students’ achievements:

  • Many improvements to Pywikibot, Kiwix (for Wikipedia offline reading), Huggle, WikiEduDashboard, Wikidata, documentation, …
  • MediaWiki’s Newsletter extension received a huge amount of code changes
  • The Pageview API offers monthly request stats per article title
  • jQuery.suggestions offer reason suggestions to block, delete, protect forms
  • A {{PAGELANGUAGE}} magic word was added
  • Changes to number of observations in the Edit Quality Prediction model
  • A dozen MediaWiki extension pages received screenshots
  • Lots of removal of deprecated code in MediaWiki core and extensions
  • Long CREDIT showcase videos got split into ‘one video per topic’ videos on Wikimedia Commons
  • Proposals for a redesign of the Romanian Wikipedia’s main page
  • Performance improvements to the importDump.php maintenance script
  • Converted Special:RecentChanges to use the OOUI library
  • Allow users to apply change tags as they make logged actions using the MediaWiki web API
  • Added some hooks to Special:Unblock
  • Added a $wgHTTPImportTimeout setting for Special:Import
  • Added ability to configure the web service endpoint and added phpcs checks in MediaWiki’s extension for Ideographic Description Sequences
  • Glossary wiki pages follow the formatting guidelines
  • Research on team communication tools

We also received valuable feedback from our mentors on what we can improve for the next round.

Thanks to everybody for your friendliness, patience, and help provided.
Thanks for your contributions to free software and free knowledge.
See you around on IRC, mailing lists, tasks, and patch comments!

January 22, 2017

How to install GNOME Shell extensions with Firefox 52?

As many of you may already know, Mozilla is dropping NPAPI support from Firefox. This change was announced more than a year ago, and finally all NPAPI plugins except Adobe Flash are banned in nightly Firefox builds.

Firefox 52 will be the first version with the GNOME Shell integration NPAPI plugin hard-disabled.

For those who still want to use Firefox for managing GNOME Shell extensions, there is an NPAPI plugin replacement ready: GNOME Shell integration for Chrome (chrome-gnome-shell).

Don't be confused by its name; chrome-gnome-shell supports all major browsers: Google Chrome/Chromium, Vivaldi, Opera and Firefox.

The words “for Chrome” in the project's name mean “for browsers capable of Chrome extensions”, and recent versions of Firefox support their own implementation of the “Chrome extensions API”: WebExtensions.

Currently, Firefox is supported only in the git master branch of chrome-gnome-shell; however, the first chrome-gnome-shell release with Firefox support will be published on Jan 4, 2017 and will be version 8.

Unlike the NPAPI plugin, chrome-gnome-shell consists of two parts: a browser extension and a helper application written in Python, the native host messaging connector.

The Firefox extension can be installed from addons.mozilla.org.
As for the native host connector, it should be installed separately, preferably using your distro's package manager. Currently, packages are prepared for Arch, Debian, Fedora, Gentoo and Ubuntu. Packages with Firefox support will be available shortly after Jan 4, 2017.

Have problems with or questions about chrome-gnome-shell? Look at the chrome-gnome-shell wiki page.
Feel free to ping me on IRC (my nickname is nE0sIghT on GIMPNet), or file a new ticket at Github or at bugzilla.gnome.org.

 

Update:

GNOME Shell integration for Chrome version 8 with Firefox support released

extensions.gnome.org: yesterday, today and tomorrow

For the last three years the extensions.gnome.org website was unmaintained, accumulated unresolved bugs and was still running on the old, unmaintained Django 1.4 framework.

That has changed now; below you will find the recent changes.

Django 1.8 LTS migration.

We have migrated the codebase from Django 1.3 to 1.8 and the website from Django 1.4 to 1.8 (big thanks to Andrea Veri for this).

Because the migration was done "as is", new platform features are not used directly yet; however, we can already benefit from internal platform improvements. For more information about the Django changes, look at the 1.4, 1.5, 1.6, 1.7 and 1.8 release notes.

Site theme updated.

The site theme was synchronized with the GNOME Grass WordPress theme, and the logo was updated to match the other GNOME websites.

Compare the old…

…and the new site header.

UI improvements.

With the integration plugin or browser extension, control buttons are now aligned to the right and the "delete" button is styled to match the other buttons:

System extensions now have an explicit mark, and no "delete" button is shown for them:

If you have an extension installed that is missing from extensions.gnome.org, a grayscale icon will be shown for it:

Also, the site layout has been fixed for low-resolution screens, screenshots are now shown in a lightbox, and the description is aligned with the title.

Bugfixes.

Some minor and some more serious bugs were fixed, and some patches were merged. I am still continuing to work on existing issues.

So, what is next?

Internationalization.

In 2017 the GNOME Shell extensions repository should become translatable into other languages. I plan to bring translation support to e.g.o. via the GNOME Translation Project.

For that goal it's possible to reuse the translation support from Damned-Lies, which is also a Django 1.8 powered website.

Improving help.

The "About" page is outdated and should be rewritten. There are a lot of bugs filed in Bugzilla asking to improve it.

Inline installation of GNOME Shell integration browser extension.

GNOME Shell integration for Chrome supports the Chrome/Chromium, Firefox, Vivaldi and Opera browsers. However, it has to be installed manually from the browser's extension store.

I plan to improve this by allowing inline extension installation from the extensions.gnome.org website.

User control panel.

Currently it's not possible to make any user account related changes yourself.

A user control panel will be created, allowing you to change your username and email, view your own extensions, and more.

 

To track these and future changes you can follow the Roadmap wiki page.

January 21, 2017

A Python extension module using C, C++, FORTRAN and Rust

One of the advantages of using a general build system is that combining code written in different languages is easy. To demonstrate this I wrote a simple Python module called Polysnake. It compiles and links four different compiled languages into one shared extension module. The languages are C, C++, FORTRAN and Rust.

The build system setup consists of five declarations in Meson. It should be doable in four, but Rust is a bit special. Ignoring the boilerplate the core business logic looks like this:

# The Rust part is built as a separate static library, which is why the
# setup needs five declarations rather than four.
rustlib = static_library('func', 'func.rs')

# The C, C++ and FORTRAN sources go directly into the Python 3 extension
# module, and the Rust static library is linked into it.
py3_mod.extension_module('polysnake',
  'polysnake.c',
  'func.cpp',
  'ffunc.f90',
  link_with : rustlib,
  dependencies : py3_dep)

The code is available on Github.
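
The C entry point of such a module is the standard CPython extension boilerplate. A hand-written sketch of what it might look like (not the actual Polysnake source; the extern function names are made up):

#include <Python.h>

/* Implemented in the C++, FORTRAN and Rust objects that Meson links
 * into the same shared module (hypothetical symbol names). */
extern const char *cpp_line (void);
extern const char *fortran_line (void);
extern const char *rust_line (void);

static PyObject *
polysnake_generate (PyObject *self, PyObject *args)
{
  /* Assemble one line per language into a single Python string. */
  return PyUnicode_FromFormat ("This line is created in C.\n%s%s%s",
                               fortran_line (), cpp_line (), rust_line ());
}

static PyMethodDef polysnake_methods[] = {
  { "generate", polysnake_generate, METH_NOARGS,
    "Return a string assembled from all four languages." },
  { NULL, NULL, 0, NULL }
};

static struct PyModuleDef polysnake_module = {
  PyModuleDef_HEAD_INIT, "polysnake", NULL, -1, polysnake_methods
};

PyMODINIT_FUNC
PyInit_polysnake (void)
{
  return PyModule_Create (&polysnake_module);
}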

Compiling and running.

Compiling is done using the default Meson commands:

meson build
ninja -C build

Once built the main script can be run with this command:

PYTHONPATH=build ./main.py

The script just calls into the module and prints the string it returns. The output looks like this:

Combining many languages is simple.

This line is created in C.
This line is created in FORTRAN.
This line is created in C++.
This line is created in Rust.

Why not COBOL?

January 20, 2017

Debugging a Flatpak application

Since I've been asking people to try the recipes app with Flatpak, I can't complain too much if I get bug reports back. But how does one create a useful bug report when something goes wrong in a Flatpak sandbox? Some of the stacktraces I've seen have not been very useful, since they are lacking symbols.

This post is a quick attempt to spread some basics about Flatpak debugging.

Normally, you run your Flatpak app like this:

flatpak run org.gnome.Recipes

Well, that's not quite true; the “normal” way to launch the Flatpak is just the same as launching a non-Flatpak app: click on the icon, or hit the Super key, type recipes, hit Enter. But let's assume you're launching flatpak from the commandline.

What happens behind the scenes here is that flatpak finds the metadata for org.gnome.Recipes, determines which runtime it needs, sets up the sandbox by mounting the app in /app and the runtime in /usr, does some more sandboxy stuff, and eventually launches the app.

First problem for bug reporting: we want to run the app under gdb to get a stacktrace when it crashes.  Here is how you do that:

flatpak run --command=sh org.gnome.Recipes

Running this command, you'll end up with a shell prompt “inside” the recipes sandbox. This is great, because we can now launch our app under gdb (note that the application gets installed in the /app prefix):

$ gdb /app/bin/recipes

Except… this fails because there is no gdb. Remember that we are inside the sandbox, so we can only run what is either shipped with the app in /app/bin or with the runtime in /usr/bin. And gdb is not in either.

Thankfully, for each runtime, there is a corresponding sdk, which is just like the runtime, except it includes the stuff you need to develop and debug: headers, compilers, debuggers and other useful tools. And flatpak has a handy commandline option to use the sdk instead of the regular runtime:

flatpak run --devel --command=sh org.gnome.Recipes

The --devel option tells flatpak to use the sdk instead of the runtime and do some other things that make debugging in the sandbox work.
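
You can also skip the interactive shell and launch gdb in one go; arguments after the application ID are passed to the command, so something like this should work:

flatpak run --devel --command=gdb org.gnome.Recipes /app/bin/recipes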

Now for the last trick: I was complaining about stacktraces without symbols at the beginning. In rpm-based distributions, the debug symbols are split off into debuginfo packages. Flatpak does something similar and splits all the debug information of runtimes and apps into separate “runtime extensions”, which by convention have .Debug appended to their name. So the debug info for org.gnome.Recipes is in the org.gnome.Recipes.Debug extension.

When you use the --devel option, flatpak automatically includes the Debug extensions for the application and runtime, if they are available. So, for the most useful stacktraces, make sure that you have the Debug extensions for the apps and runtimes in question installed.

Hope this helps!

Most of this information was taken from the Flatpak wiki.

The flatpak security model – part 2: Who needs sandboxing anyway?

The ability to run an application sandboxed is a very important feature of flatpak. However, it is not the only reason you might want to use flatpak. In fact, since currently very few applications work in a fully sandboxed environment, most of the apps you'd run are not sandboxed.

In the previous part we learned that by default the application sandbox is very limiting. If we want to run a normal application we need to open things up a bit.

Every flatpak application contains a manifest, called metadata. This file describes the details of the application, like its identity (app-id) and what runtime it uses. It also lists the permissions that the application requires.

By default, once installed, an application gets all the permissions that it requested. However, you can override the permissions each time you call flatpak run, or globally on a per-application basis by using flatpak override (see the manpages for flatpak-run and flatpak-override for details). The handling of application permissions is currently somewhat hidden in the interface, but the long term plan is to show permissions during installation and make it easier to override them.
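
For example, assuming the org.gnome.Recipes app, something like this grants it persistent access to your home directory, or takes away network access for a single run (see the manpages for the exact options):

flatpak override --filesystem=home org.gnome.Recipes
flatpak run --unshare=network org.gnome.Recipes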

So, what kind of permissions are there?

First apps need to be able to produce output and get input. To do this we have permissions that allow access to PulseAudio for sound and X11 and/or Wayland for graphical output and input. The way this works is that we just mount the unix domain socket for the corresponding service into the sandbox.

It should be noted that X11 is not very safe when used like this; you can easily use the X11 protocol to do lots of malicious things. PulseAudio is also not very secure, but work is in progress on making it better. Wayland, however, was designed from the start to isolate clients from each other, so it is pretty secure in a sandbox.

But, secure or not, almost all Linux desktop applications currently in existence use X11, so it is important that we are able to use it.

Another way for applications to integrate with the system is to use DBus. Flatpak has a filtering dbus proxy, which lets it define rules for what the application is allowed to do on the bus. By default an application is allowed to own its app-id and subnames of it (i.e. org.gnome.gedit and org.gnome.gedit.*) on the session bus. This means other clients can talk to the application, but it can only talk to the bus itself, not to any other clients.

It's interesting to note this connection between the app-id and the dbus name. In fact, valid flatpak app-ids are defined to have the same form as valid dbus names, and when applications export files to the host (such as desktop files, icons and dbus service files), we only allow exporting files that start with the app-id. This ties very neatly into modern desktop app activation, where the desktop and dbus service files also have to be named after the dbus name. This rule ensures that applications can't accidentally conflict with each other, but also that applications can't attack the system by exporting a file that would be triggered by the user outside the sandbox.

There are also permissions for filesystem access. Flatpak always uses a filesystem namespace, because /usr and /app are never from the host, but other directories from the host can be exposed to the sandbox. The permissions here are quite fine-grained, ranging from access to all host files, to your home directory only, or to individual directories. Directories can also be exposed read-only.

The default sandbox only has a loopback network interface and thus has no connection to the network, but if you grant network access then the app will get full network access. There is no partial network access, however; for instance, one would like to be able to set up a per-application firewall configuration. Unfortunately, it is quite complex and risky to set up networking, so we can't expose it in a safe way for unprivileged use.

There are also a few more specialized permissions, like various levels of hardware device access and some other details. See man flatpak-metadata for the available settings.
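
To give a feel for it, the metadata of a typical graphical application might contain something roughly like this (an illustrative sketch with a made-up app-id, not a real manifest):

[Application]
name=org.example.App
runtime=org.gnome.Platform/x86_64/3.22
sdk=org.gnome.Sdk/x86_64/3.22
command=app

[Context]
shared=ipc;network;
sockets=x11;wayland;pulseaudio;
devices=dri;
filesystems=xdg-download;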

All this lets us open up exactly what is needed for each application, which means we can run current Linux desktop applications without modifications. However, the long term goal is to introduce features so that applications can run without opening the sandbox. We’ll get to this plan in the next part.

Until then, happy flatpaking.

January 19, 2017

Android apps, IMEIs and privacy

There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask for you to grant the permission again and refuse to do so if you don't.

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, but Meitu isn't especially rare here - there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do anything to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.

comment count unavailable comments

Outreachy (GNOME)-W5&W6

My plan was altered during these two weeks: the strings of GNOME 3.24 are not frozen yet, and the maintainers of the Chinese localization group told me that the Extra GNOME Applications need translation more urgently than the documentation, so I began to translate the Extra GNOME Applications (stable) during this period.

As many applications are not familiar to me, I would install an application and look for the strings inside it whenever I met strings whose purpose I couldn't understand; but, as you know, this process can be a huge waste of time. Mentor Tong suggested that it's better to locate the strings in the source code, and I found that to be a nice method, because I can understand the meaning of the strings more clearly and save time.


This work by Mandy Wang is licensed under a Creative Commons Attribution-ShareAlike 4.0 International


Outreachy (GNOME)-W3&W4

During this period, I finished the UI translation of GNOME 3.22; it is waiting to be reviewed and committed now. I also met some troubles and resolved them these days.

The first was gitg. Actually only a few strings remained before I began to deal with it, but Mentor Tong said gitg needed to be reworked, because some terms translated earlier didn't fit the git glossary. He taught me to proofread them against the git glossary on Github, and I use this method when I meet other terms.

The other one was orca. There are many mathematical terms in it, and I'm not familiar with a lot of them; I looked at the history and found that it drove many translators crazy. Fortunately, I found a nice website, the Unicode® character table, which makes it very convenient to look up Unicode characters.


This work by Mandy Wang is licensed under a Creative Commons Attribution-ShareAlike 4.0 International


Quantifying Synchronisation: Oscilloscope Edition

I’ve written a bit in my last two blog posts about the work I’ve been doing in inter-device synchronised playback using GStreamer. I introduced the library and then demonstrated its use in building video walls.

The important thing in synchronisation, of course, is how much in-sync are the streams? The video in my previous post gave a glimpse into that, and in this post I’ll expand on that with a more rigorous, quantifiable approach.

Before I start, a quick note: I am currently providing freelance consulting around GStreamer, PulseAudio and open source multimedia in general. If you’re looking for help with any of these, do get in touch.

The sync measurement setup

Quantifying what?

What is it that we are trying to measure? Let’s look at this in terms of the outcome — I have two computers, on a network. Using the gst-sync-server library, I play a stream on both of them. The ideal outcome is that the same video frame is displayed at exactly the same time, and the audio sample being played out of the respective speakers is also identical at any given instant.

As we saw previously, the video output is not a good way to measure what we want. This is because video displays are updated in sync with the display clock, over which consumer hardware generally does not have control. Besides, our eyes are not that sensitive to minor differences in timing unless the images are side by side. After all, we're fooling them with static pictures that change every 16.67 ms or so.

Using audio, though, we should be able to do better. Digital audio streams for music/videos typically consist of 44100 or 48000 samples a second, so we have a much finer granularity than video provides us. The human ear is also fairly sensitive to timings with regards to sound. If it hears the same sound at an interval larger than 10 ms, you will hear two distinct sounds and the echo will annoy you to no end.

Measuring audio is also good enough because once you’ve got audio in sync, GStreamer will take care of A/V sync itself.

Setup

Okay, so now we know what we want to measure. But how do we measure it? The setup is illustrated below:

Sync measurement setup illustrated

As before, I’ve set up my desktop PC and laptop to play the same stream in sync. The stream being played is a local audio file — I’m keeping the setup simple by not adding network streaming to the equation.

The audio itself is just a tick sound every second. The tick is a simple 440 Hz sine wave (A₄ for the musically inclined) that runs for 1600 samples. It sounds something like this:

I’ve connected the 3.5mm audio output of both the computers to my faithful digital oscilloscope (a Tektronix TBS 1072B if you wanted to know). So now measuring synchronisation is really a question of seeing how far apart the leading edge of the sine wave on the tick is.

Of course, this assumes we’re not more than 1s out of sync (that’s the periodicity of the tick itself), and I’ve verified that by playing non-periodic sounds (any song or video) and making sure they’re in sync as well. You can trust me on this, or better yet, get the code and try it yourself! :)

The last piece to worry about — the network. How well we can sync the two streams depends on how well we can synchronise the clocks of the pipeline we’re running on each of the two devices. I’ll talk about how this works in a subsequent post, but my measurements are done on both a wired and wireless network.

Measurements

Before we get into it, we should keep in mind that due to how we synchronise streams — using a network clock — how in-sync our streams are will vary over time depending on the quality of the network connection.

If this variation is small enough, it won't be noticeable. If it is large (tens of milliseconds), then we may start to notice it as echo, or as glitches when the pipeline tries to correct for the lack of sync.

In the first setup, my laptop and desktop are connected to each other directly via a LAN cable. The result looks something like this:

The first two images show the best case — we need to zoom in real close to see how out of sync the audio is, and it’s roughly 50µs.

The next two images show the “worst case”. This time, the zoomed out (5ms) version shows some out-of-sync-ness, and on zooming in, we see that it’s in the order of 500µs.

So even our bad case is actually quite good — sound travels at about 340 m/s, so 500µs is the equivalent of two speakers about 17cm apart.

Now let’s make things a little more interesting. With both my laptop and desktop connected to a wifi network:

On average, the sync can be quite okay. The first pair of images show sync to be within about 300µs.

However, the wifi on my desktop is flaky, so you can see it go off up to 2.5ms in the next pair. In my setup, it even goes off up to 10-20ms, before returning to the average case. The next two images show it go back and forth.

Why does this happen? Well, let’s take a quick look at what ping statistics from my desktop to my laptop look like:

Ping from desktop to laptop on wifi

That’s not good — you can see that the minimum, average and maximum RTT are very different. Our network clock logic probably needs some tuning to deal with this much jitter.

Conclusion

These measurements show that we can get some (in my opinion) pretty good synchronisation between devices using GStreamer. I wrote the gst-sync-server library to make it easy to build applications on top of this feature.

The obvious area to improve is how we cope with jittery networks. We’ve added some infrastructure to capture and replay clock synchronisation messages offline. What remains is to build a large enough body of good and bad cases, and then tune the sync algorithm to work as well as possible with all of these.

Also, Florent over at Ubicast pointed out a nice tool they’ve written to measure A/V sync on the same device. It would be interesting to modify this to allow for automated measurement of inter-device sync.

In a future post, I’ll write more about how we actually achieve synchronisation between devices, and how we can go about improving it.

January 18, 2017

The flatpak security model – part 1: The basics

This is the first part of a series talking about the approach flatpak takes to security and sandboxing.

First of all, a lot of people think of container technology like docker, rkt or systemd-nspawn when they think of linux sandboxing. However, flatpak is fundamentally different to these in that it is unprivileged.

What I mean is that all of the above run as root, and to use them you either have to be root, or your access to them is equivalent to root access. For instance, if you have access to the docker socket then you can get a full root shell with a command like:

docker run -t -i --privileged -v /:/host fedora chroot /host

Flatpak instead runs everything as the regular user.  To do this it uses a project called bubblewrap which is like a super-powered version of chroot, only you don’t have to be root to run it.

Bubblewrap can do more than just change the root; it lets you construct a custom filesystem mount tree for the process. Additionally, it lets you create namespaces to further isolate things from the host. For instance, if you use --unshare-pid then your process will not see any processes from outside the sandbox.
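
For example, a rough bubblewrap invocation (options picked for illustration; exact paths depend on your system) that gives a shell a read-only view of the host's /usr and its own PID namespace looks something like this:

bwrap --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib64 /lib64 --proc /proc --dev /dev --tmpfs /tmp --unshare-pid sh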

Now, chroot is a root-only operation. How can it be that bubblewrap lets you do the same thing but doesn’t require root privileges? The answer is that it uses unprivileged user namespaces.

Inside such a user namespace you get a lot of capabilities that you don’t have outside it, such as creating new bind mounts or calling chroot. However, in order to be allowed to use this you have to set up a few process limits. In particular you need to set a process flag called PR_SET_NO_NEW_PRIVS. This causes all forms of privilege escalation (like setuid) to be disabled, which means the normal ways to escape a chroot jail don’t work.
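
Here is a rough standalone sketch in C of that mechanism (not bubblewrap's actual code; error handling and most of the real setup are omitted): drop privilege escalation, enter unprivileged user and mount namespaces, and you are then allowed to create mounts without being root.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mount.h>
#include <sys/prctl.h>
#include <unistd.h>

static void
write_file (const char *path, const char *content)
{
  int fd = open (path, O_WRONLY);
  if (fd < 0 || write (fd, content, strlen (content)) < 0)
    perror (path);
  if (fd >= 0)
    close (fd);
}

int
main (void)
{
  uid_t uid = getuid ();
  gid_t gid = getgid ();
  char buf[64];

  /* Disable all forms of privilege escalation (setuid, file caps, ...)
   * for this process and its children. */
  if (prctl (PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
    perror ("prctl");

  /* Enter new user and mount namespaces; inside them we hold
   * CAP_SYS_ADMIN even though we are an ordinary user outside. */
  if (unshare (CLONE_NEWUSER | CLONE_NEWNS) != 0)
    perror ("unshare (are unprivileged user namespaces enabled?)");

  /* Map our own uid/gid into the new user namespace. */
  write_file ("/proc/self/setgroups", "deny");
  snprintf (buf, sizeof buf, "%u %u 1", (unsigned) uid, (unsigned) uid);
  write_file ("/proc/self/uid_map", buf);
  snprintf (buf, sizeof buf, "%u %u 1", (unsigned) gid, (unsigned) gid);
  write_file ("/proc/self/gid_map", buf);

  /* Operations that normally require root, such as mounting a tmpfs
   * that could become the sandbox root, now succeed in here. */
  if (mount ("tmpfs", "/tmp", "tmpfs", 0, NULL) != 0)
    perror ("mount");

  return 0;
}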

Actually, I lied a bit above. We do use unprivileged user namespaces if we can, but many distributions disable them. The reason is that user namespaces open up a whole new attack surface against the kernel, allowing an unprivileged user access to lots of kernel code that may not be well adapted to unprivileged access. For instance, CVE-2016-3135 was a local root exploit which used a memory corruption in an iptables call. This call is normally only accessible by root, but user namespaces made it exploitable by regular users.

If user namespaces are disabled, bubblewrap can be built as a setuid helper instead. This still only lets you use the same features as before, and in many ways it is actually safer this way, because only a limited subset of the full functionality is exposed. For instance, you cannot use bubblewrap to exploit the iptables bug above, because it doesn't set up iptables (and if it did, it wouldn't pass untrusted data to it).

Long story short, flatpak uses bubblewrap to create a filesystem namespace for the sandbox. This starts out with a tmpfs as the root filesystem, and in this we bind-mount read-only copies of the runtime on /usr and the application data on /app. Then we mount various system things like a minimal /dev, our own instance of /proc and symlinks into /usr from /lib and /bin. We also enable all the available namespaces so that the sandbox cannot see other processes/users or access the network.

On top of this we use seccomp to filter out syscalls that are risky. For instance ptrace, perf, and recursive use of namespaces, as well as weird network families like DECnet.

In order for the application to be able to write data anywhere we bind mount $HOME/.var/app/$APPID/ into the sandbox, but this is the only persistent writable location.

In this sandbox we then spawn the application (after having dropped all increased permissions). This is a very limited environment, and there isn’t much the application can do. In the next part of this series we’ll start looking into how things can be opened up to allow the app to do more.

Recipes for you and me

Since I’ve last written about recipes, we’ve started to figure out what we can achieve in time for GNOME 3.24, with an eye towards delivering a useful application. The result is this plan, which should be doable.

But: your help is needed. We need more recipe contributions from the GNOME community to have a well-populated initial experience. Everybody who contributes a recipe before 3.24 will get a little thank-you from us, so don’t delay…

The 0.8.0 release that I’ve just created already contains the first steps of this plan. One thing we decided is that we don’t have the time and resources to make the ingredients view useful by March, so the Ingredients tab is gone for now.

At the same time, there’s a new feature here, and that is the blue tile leading to the shopping list view:

The design for this page is still a bit up in the air, so you should expect this to change in the next releases. I decided to merge it already anyway, since I am impatient, and this view already provides useful functionality. You can print the shopping list:

Beyond this, I’ve spent some time on polishing and fixing bugs. One thing that I’ve discovered to my embarrassment earlier this week is that exporting recipes from the flatpak did not actually work. I had only ever tested this with an un-sandboxed local build.

Sorry to everyone who tried to export their recipe and was left wondering why it didn’t work!

We've now fixed all the bugs that were involved here, in recipes, in the file chooser portal, and in the portal infrastructure itself, and exporting recipes works fine with the current flatpak, which, as always, you can install from here:

https://alexlarsson.github.io/test-releases/gnome-recipes.flatpakref

One related issue that became apparent during this bug hunt is that things work less than perfectly if the portals are not present on the host system. Until that becomes less likely, I’ve added a bit of code to make the failure less mysterious, and give you some idea how to fix it:

I think recipes is proving its value as  a test bed and early adopter for flatpak and portals. At this point, it is using the file chooser portal, the account information portal, the print portal, the notification portal, the session inhibit portal, and it would also use the sharing portal, if we had that already.

I shouldn’t close this post without mentioning that you will have a chance to hear a bit from Elvin about the genesis of this application in the Fosdem design devroom. See you there!

Creating .NET Bindings for C Libraries with ObjectiveSharpie

We created the ObjectiveSharpie tool to automate the mapping of Objective-C APIs to the .NET world. This is the tool that we use to keep up with Apple APIs.

One of the lesser-known features of ObjectiveSharpie is that it is not limited to binding Objective-C header files. It is also capable of creating definitions for C APIs.

To do this, merely use the "bind" command for ObjectiveSharpie and run it on the header file for the API that you want to bind:

	sharpie bind c-api.h -o binding.cs

The above command will produce the binding.cs that contains the C# definitions for both the native data structures and the C functions that can be invoked.

Since C APIs are ambiguous, in some cases ObjectiveSharpie will generate some diagnostics. In most cases it will flag methods that have to be bound with the [Verify] attribute. This attribute is used as an indicator in your source code that you need to manually audit the binding, perhaps checking the documentation and adjusting the P/Invoke signature accordingly.

There are various options that you can pass to the bind command, just invoke sharpie bind to get an up-to-date list of configuration options.

This is how I quickly bootstrapped the TensorFlowSharp binding. I got all the P/Invoke signatures done in one go, and then I started to do the work to surface an idiomatic C# API.

January 17, 2017

Meet the new Week view

This morning, I had some free hours to spend on my baby Calendar, and of course I'd spend them on what matters the most: the Week view.

I've been working on and off on this feature for quite a while, and the last missing piece was proper drag n' drop support. Fear no more, and say hello to the new Week view in GNOME Calendar:

Introducing the Week view

This work initially started as a Summer of Code project driven by Vamsi, and I just went ahead and finished his work. I tried to be as careful as possible with the new Week view, in order to keep it on par with the Month and Year views. That means:

  • Drag n’ drop
  • Visualizing and editing events
  • Properly handle multiday and all day events
  • Being beautiful
  • Handling too many events

And so it goes. You can create all day and multiday events very quickly:

Creating all day, multiday events by clicking the header

Of course you can also create timed events with the new week view! Check this out:

Creating a timed event in the week view's grid

And, as always, the traditional sequence of pictures with an awesome rock-ish soundtrack!

 

Excited? Join #gnome-calendar IRC room at irc.gnome.org, or send me an email to get in touch! Don’t be shy. Calendar is entirely made by amazing contributors dedicating their free time to make the world a slightly better place to live 🙂

Enjoy!

A history about Gtk+, Vulkan and Wayland

A few weeks ago, I was curious to test Gtk+ 4. I know it has some awesome features like OpenGL rendering, major cleanups and other hot stuff, but I didn't have the chance to check it out until then.

I was mostly excited about Vulkan.

I know both of my laptop’s graphic cards support Vulkan. It’s a hybrid Intel Broadwell G2 + NVidia GeForce 920M, although I don’t use the latter because Linux sucks hard with Dual GPU.

Downloaded the latest Gtk+ source, compiled and… nothing. Immediate segmentation fault. Yay! What a great chance to get involved with the next major Gtk+ version development!

So, this happened:

The Fishbowl running under Wayland and rendered with Vulkan

It may not be as exciting, since there are no new visible features, but… damn, it's Gtk+ being rendered with Vulkan on Wayland. It's basically the state of the art of toolkit support right now. Even better, the absolute majority of applications will gain this for free once they port to the Gtk+ 4 series.

Getting this into a usable state wasn't easy, but fortunately Vulkan has an ~amazing~ thing called "Validation Layers" that simplified the tedious debugging process a whole lot (of course, only after making the validation layers work with Gtk+). This work even uncovered a bug in the Intel driver, which was quickly fixed by Lionel Landwerlin and Jason Ekstrand (thanks folks!)

Of course, there are many improvements that still must be done. A bright future is lying ahead!

January 16, 2017

My next EP will be released as a corrupted GPT image

Since July last year I’ve been working at Endless Computers on the downloadable edition of Endless OS.1 A big part of my work has been the Endless Installer for Windows: a Wubi-esque tool that “installs” Endless OS as a gigantic image file in your Windows partition2, sparing you the need to install via a USB stick and make destructive changes like repartitioning your drive. It’s derived from Rufus, the Reliable USB Formatting Utility, and our friends at Movial did a lot of the heavy lifting of turning it to our installer.

Endless OS is distributed as a compressed disk image, so you just write it to disk to install it. On first boot, it resizes itself to fill the whole disk. So, to “install” it to a file we decompress the image file, then extend it to the desired length. When booting, in principle we want to loopback-mount the image file and treat that as the root device. But there’s a problem: NTFS-3G, the most mature NTFS implementation for Linux, runs in userspace using FUSE. There are some practical problems arranging for the userspace processes to survive the transition out of the initramfs, but the bigger problem is that accessing a loopback-mounted image on an NTFS partition is slow, presumably because every disk access has an extra round-trip to userspace and back. Is there some way we can avoid this performance penalty?

Robert McQueen and Daniel Drake came up with a neat solution: map the file’s contents directly, using device mapper. Daniel wrote a little tool, ntfsextents, which uses the ntfs-3g library to find the position and size (in bytes) within the partition of each chunk of the Endless OS image file.3 We feed these to dm-setup to create a block device corresponding to the Endless OS image, and then boot from that – bypassing NTFS entirely! There’s no more overhead than an LVM root filesystem.
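
As a rough illustration (made-up sector numbers, not a real mapping), the table fed to dmsetup contains one "linear" target per extent of the image file; each line is "logical start, length, target type, source device, source offset", all in 512-byte sectors:

0 8388608 linear /dev/sda3 1026048
8388608 4194304 linear /dev/sda3 10241024

dmsetup create endless-image < extents.table

The resulting block device then shows up under /dev/mapper/ and can be used as the root device.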

This is safe provided that you disallow concurrent modification of the image file via NTFS (which we do), and provided that you get the mapping right. If you’ve ensured that the image file is not encrypted, compressed, or sparse, and if ntfsextents is bug-free, then what could go wrong?

Unfortunately, we saw some weird problems as people started to use this installation method. At first, everything would work fine, but after a few days the OS image would suddenly stop booting. For some reason, this always seemed to happen in the second half of the week. We inspected some affected image files and found that, rather than ending in the secondary GPT header as you’d expect, they ended in zeros. Huh?

We were calling SetEndOfFile to extend the image file. It’s documented to “[set] the physical file size for the specified file”, and “if the file is extended, the contents of the file between the old end of the file and the new end of the file are not defined”. For our purposes this seems totally fine: the extended portion will be used as extra free space by Endless OS, so its contents don’t matter, but we need it to be fully physically allocated so we can use the extra space. But we missed an important detail! NTFS maintains two lengths for each file: the allocation size (“the size of the space that is allocated for a file on a disk”), and the valid data length (“the length of the data in a file that is actually written”).4 SetEndOfFile only updates the former, not the latter. When using an NTFS driver, reads past the valid data length return zero, rather than leaking whatever happens to be on the disk. When you write past the valid data length, the NTFS driver initializes the intervening bytes to zero as needed. We’re not using an NTFS driver, so were happily writing into this twilight zone of allocated-but-uninitialized bytes without updating the valid data length; but when the file is defragmented, the physical contents past the valid data length are not copied to their new home on the disk (what would be the point? it’s just uninitialized data, right?). So defragmenting the file would corrupt the Endless OS image.

One could fix this in our installer in two ways: write a byte at the end of the file (forcing the NTFS driver to write tens of gigabytes of zeros to initialize the file), or use SetFileValidData to mark the unused space as valid without actually initializing it. We chose the latter: installing a new OS is already a privileged operation, and the permissions on the Endless OS image file are set to deny read access to mere mortals, so it’s safe to avoid the cost of writing ten billion zeros.5
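
For the curious, the Win32 calls involved look roughly like this (a hedged sketch, not the installer's actual code; enabling the SE_MANAGE_VOLUME_NAME privilege that SetFileValidData requires is omitted):

#include <windows.h>

static BOOL
extend_image (LPCWSTR path, LONGLONG new_size)
{
  HANDLE h = CreateFileW (path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                          OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
  if (h == INVALID_HANDLE_VALUE)
    return FALSE;

  LARGE_INTEGER size;
  size.QuadPart = new_size;

  /* SetEndOfFile extends the allocation size only; the valid data
   * length is left where it was, which is what made defragmentation
   * discard the tail of the image. */
  BOOL ok = SetFilePointerEx (h, size, NULL, FILE_BEGIN)
         && SetEndOfFile (h)
         /* Mark the whole allocation as valid without writing zeros. */
         && SetFileValidData (h, new_size);

  CloseHandle (h);
  return ok;
}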

We weren’t quite home and dry yet, though: some users were still seeing their Endless OS image file corrupting itself after a few days. Having been burned once, we guessed this might be the defragmenter at work again. It turned out to be a quirk of how chunks of a file which happen to be adjacent can be represented, which we were not handling correctly in ntfsextents, leading us to map parts of the file more than once, like a glitchy tape loop. (We got lucky here: at least all the bytes we mapped really were part of the image file. Imagine if we’d mapped some arbitrary other part of the Windows system drive and happily scribbled over it…)

(Oh, why did these problems surface in the second half of any given week? By default, Windows defragments the system drive at 1am every Wednesday, or as soon as possible after that.)

  1. If you’re not familiar with Endless OS, it’s a GNOME- and Debian-derived desktop distribution, focused on reliable, easy-to-use computing for everyone. There was lots of nice coverage from CES last week. People seem particularly taken by the forthcoming “flip the window to edit the app” feature.
  2. and configures a bootloader – more on this in a future post…
  3. See debian/patches/endless*.patch in our ntfs-3g source package.
  4. I gather many other filesystems do the same.
  5. A note on the plural of “zero”: I conducted a poll on Twitter but chose to disregard the result when it was pointed out that MATLAB and NumPy both spell it without an “e”. See? No need to blindly implement the result of a non-binding referendum!

Digest of Fedora 25 Reviews

Fedora 25 has been out for 2 months and it seems like a very solid release, maybe the best in the history of the distro. And feedback from the press and users has also been very positive. I took the time and put together a digest of the latest reviews:

Phoronix: Fedora 25 Is Quite Possibly My Most Favorite Release Yet

As a long-time Fedora fan and user going back to Fedora Core, Fedora 25 is quite possibly my most favorite Fedora release yet. With the state as of this week, it feels very polished and reliable and haven’t encountered any glaring bugs on any of my test systems. Thanks in large part due to the heavy lifting on ensuring GNOME 3.22 is a super-polished desktop release, Fedora 25 just feels really mature yet modern when using it.

Phoronix: Fedora 25 Turned Out Great, Definitely My Most Favorite Fedora Release

That’s the first time I’ve been so ambitious with a Fedora release, but in testing it over the past few weeks (and months) on a multitude of test systems, the quality has been excellent and by far is most favorite release going back to the Fedora Core days — and there’s Wayland by default too, as just the icing on the cake.

Distrowatch: Fedora 25 Review

Even when dealing with the various Wayland oddities and issues, Fedora 25 is a great distribution. Everything is reasonably polished and the default software provides a functional desktop for those looking for a basic web browsing, e-mail, and word processing environment. The additional packages available can easily turn Fedora into an excellent development workstation customized for a developer’s specific needs. If you are programming in most of the current major programming languages, Fedora provides you the tools to easily do so. Overall, I am very pleased using Fedora 25, but I am even more excited for future releases of Fedora as the various minor Wayland issues get cleaned up.

ZDNet: Fedora 25 Linux arrives with Wayland display support

Today, Fedora is once more the leading edge Linux distribution.

ArsTechnica: Fedora 25: With Wayland, Linux has never been easier (or more handsome)

Fedora 24 was very close to my favorite distro of the year, but with Fedora 25 I think it’s safe to say that the Fedora Project has finally nailed it. I still run a very minimal Arch install (with Openbox) on my main machine, but everywhere else—family and friends who want to upgrade, clients looking for a stable system and so on—I’ve been recommending Fedora 25.

…I have no qualms recommending both Fedora and Wayland. The best Linux distro of 2016 simply arrived at the last moment.

Hectic Geek: Fedora 25 Review: A Stable Release, But Slightly Slow to Boot (on rotational disks)

If you have a rotational disk, then Fedora 25 will be a little slow to boot and there is nothing you or I can do to fix it. But if you have an SSD, then you shall have no issues here. Other than that, I’m quite pleased with this release actually. Sure the responsiveness sucked the first time on, but as mentioned, it can be fixed, permanently. And the stability is also excellent.

Dedoimedo: And the best distro of 2016 is…

The author prefers Fedora 24 to 25, but Fedora is still the distro of the year for him:

Never once had I believed that Fedora would rise so highly, but rise it did. Not only is the 24th release a child of a long succession of slowly, gradually improving editions, it also washed away my hatred for Gnome 3, and I actually started using it, almost daily, with some fairly good results. Fedora 24 was so good that it broke Fedora. The latest release is not quite as good, but it is a perfectly sane compromise if you want to use the hottest loaf of modern technology fresh from the Linux oven.

OCS-Mag: Best GNOME distro of 2016

The same author, who again (not surprisingly) prefers 24, which in his opinion is the best GNOME distro:

Fedora 24 is a well-rounded and polished operating system, and with the right amount of proverbial pimping, its Gnome desktop offers a stylish yet usable formula to the common user, with looks and functionality balanced to a fair degree. But, let us not forget the extensions that make all this possible. Good performance, good battery life and everyday stuff aplenty should keep you happy and entertained. Among the Gnome bunch, it’s Funky Fedora that offers the best results overall. And thus we crown it the winner of the garden ornament competition of 2016.

The Register: Fedora 25: You’ve got that Wayland feelin’, oh, that Wayland feelin’

Fedora 25 WorkStation is hands down the best desktop Linux distro I tested in 2016. With Wayland, GNOME 3.22 and the excellent DNF package manager, I’m hard-pressed to think of anything missing. The only downside? Fedora lacks an LTS release, but now that updating is less harrowing, that’s less of a concern.

Bit Cannon: Finding an Alternative to Mac OS X

Wesley Moore was looking for an alternative to Mac OS X and his three picks were: Fedora, Arch Linux, and elementaryOS.

Fedora provided an excellent experience. I installed Fedora 25 just after its release. It’s built on the latest tech like Wayland and GNOME 3.22.

The Huffington Post: How To Break Free From Your Computer Operating System — If You Dare

Fedora is a gorgeous operating system, with a sleek and intuitive interface, a clean aesthetic, and it’s wicked fast.

ArsTechnica: Dell’s latest XPS 13 DE still delivers Linux in a svelte package

Not really a review of Fedora, but the author tried to install Fedora 25 on the new XPS13 and this is what he had to say:

As a final note, I did install and test both Fedora 25 and Arch on the new hardware and had no problems in either case. For Fedora, I went with the default GNOME 3.22 desktop, which, frankly, is what I think Dell should ship out of the box. It’s got far better HiDPI support than Ubuntu, and the developer tools available through Fedora are considerably more robust than most of what you’ll find in Ubuntu’s repos.

Looks like we’re on the right track and I’m sure Fedora 26 will be an even better release. We’ve got very interesting things in the works.


This week in GTK+ – 31

In this last week, the master branch of GTK+ has seen 52 commits, with 10254 lines added and 9466 lines removed.

Planning and status
  • Alex Larsson is working on two separate branches to optimize the memory allocation and fragmentation when building the GSK render tree, after profiling the tree building code
  • Timm Bäder is working on a topic branch to switch widgets to be visible by default
  • The GTK+ road map is available on the wiki.
Notable changes

On the master branch:

  • Rui Matos added support in the Wayland backend for the gtk-enable-primary-paste settings key; this requires a newer version of the gsettings-desktop-schemas
  • Matthias Clasen and Alex Larsson refactored some of the GSK,  GtkWidget, and CSS internals to avoid excessive type casting and type checking after profiling the rendering code
  • Matthias added a “system” tab to the GtkAboutDialog widget, for free-form, system-related information
  • Matthias also updated the porting documentation for -gtk-icon-filter
  • Benjamin Otte changed the X11 backend to always call XInitThreads() unconditionally, in order to safely use the Vulkan rendering API; this should be safe, but testing is encouraged
  • Benjamin updated the GtkSnapshot API to ensure that render nodes are available only after the snapshot is complete
  • Benjamin also fixed the handling of CSS images that have no explicit size but should be scaled according to their aspect ratio
  • Timm Bäder added a revealed property to the GtkInfoBar widget and ported the GtkActionBar code to use it

On the gtk-3-22 stable branch:

  • Ruslan Izhbulatov worked on fixing various cases of keyboard handling under Windows, including interaction with AeroSnap; Ruslan also fixed bug 165385, which was going to be 12 years old in 10 days
  • Carlos Garnacho changed the EGL handling inside the Wayland backend to disable swap interval, as the compositor is in charge of timing the rendering
  • Matthias Clasen deprecated additional API that has been removed from the development branch
Bugs fixed
  • 776031 W32: Winkey+down minimizes maximized window instead of restoring it
  • 165385 Win32 keyboard handling still incomplete
  • 769835 On Wayland, application containing GtkGLArea stops responding if it’s not on current workspace
  • 774726 GtkTreeView dnd: gtk_drag_finish remove row when reorder sinse 3.20
  • 769214 keyval field not filled correctly for Pause key
  • 776485 GDK W32: Impossible to restore maximized window via system menu
  • 776604 about dialog: Add a “system” tab
  • 775846 gdk/wayland: Add support for the gtk-enable-primary-paste gsetting
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

January 15, 2017

2017-01-15 Sunday.

  • Up lateish, visited David at Christ Church - lovely to see lots of people there, met a BlackRock Analyst there; interesting; David back for lunch, chatted by the fire afterwards. Dis-assembled E's laptop to remove the Broadcom Wifi card which requires proprietary (and thus predictably not working) drivers. Installed a new Ralink card cheap on ebay - which worked flawlessly out of the box; nice. Watched The House of Magic with the babes, put them to bed.

January 14, 2017

2017-01-14 Saturday.

  • Up earlyish; train to London with the family. Tube to the Tate Modern for N. to find some things to sketch, over the Millennium bridge, past St Pauls, to Mansion House tube. On to South Kensington and the Royal Albert Hall - to see AmaLuna which was really rather excellent. Out to Pizza Express afterwards, trains home - rather late with the babes.

Using a smart card for decryption from the command line

In Finland the national ID card contains a fully featured smart card, much like in Belgium and several other countries. It can be used for all sorts of cool things, like SSH logins. Unfortunately using this card for HW crypto operations is not easy and there is not a lot of easily digestible documentation available. Here's how you would do some simple operations from the command line.

The commands here should work on most other smart cards with very little changes. Note that these are just examples, they are not hardened for security at all. In production you'd use the libraries directly, keep all data in memory rather than putting it in temp files and so on.

First you plug in the device and card and check the status:

$ pkcs15-tool -k

Private RSA Key [todentamis- ja salausavain]
Object Flags   : [0x1], private
Usage          : [0x26], decrypt, sign, unwrap
Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Access Rules   : execute:01;
ModLength      : 2048
Key ref        : 0 (0x0)
Native         : yes
Path           : XXXXXXXXXXXX
Auth ID        : 01
ID             : 45
MD:guid        : XXXXXXXXXXXX

Private RSA Key [allekirjoitusavain]
Object Flags   : [0x1], private
Usage          : [0x200], nonRepudiation
Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
Access Rules   : execute:02;
ModLength      : 2048
Key ref        : 0 (0x0)
Native         : yes
Path           : XXXXXXXXXXXX
Auth ID        : 02
ID             : 46
MD:guid        : XXXXXXXXXXXX

This card has two keys. We use the first one whose usage is "decrypt". Its ID number is 45. Now we need to extract the public key from the card:

$ pkcs15-tool --read-public-key 45 > mykey.pub

Next we generate some data and encrypt it with the public key. The important thing to note here is that you can only encrypt a small amount of data, on the order of a few dozen bytes.

$ echo secret message > original

Then we encrypt this message with OpenSSL using the extracted public key.

$ openssl rsautl -encrypt -inkey mykey.pub -pubin -in original -out encrypted

The file encrypted now contains the encrypted message. The only way to decrypt it is to transfer the data to the smart card for decryption, because the private key can only be accessed inside the card. This is achieved with the following command.

$ pkcs11-tool --decrypt -v --input-file encrypted --output-file decrypted --id 45 -l -m RSA-PKCS --pin 1234

After this the decrypted contents have been written to the file decrypted. The important bit here is -m RSA-PKCS, which specifies the exact form of RSA to use. There are several and if you use the wrong one the output will be random bytes without any errors or warnings.

What can this be used for?

Passwordless full disk encryption is one potential use. Combining a card reader with a Raspberry Pi and an electrical lock makes for a pretty spiffy DIY physical access control system.

January 13, 2017

humane websites

not sure how to tell you this: but just because your website is well-designed doesn’t mean that it’s effective.

and there’s one simple reason for this: most people fail to understand that websites are processes.

i talked about this a lot last year at conferences like sfscon 2016 in italy and 12min.me in munich. many people asked me about the slides and further information, so i gladly published an extended version of my slides along with speaker notes. a video recording is available here.

the gist of my talk is the following:

  • websites are processes and start way before people come to your website and end with clients sitting in your meeting room or buying your product
  • it's no longer about optimizing your websites for seo and hoping for the best. it's about optimizing your presence across the web. and in the real world as well
  • take time to carefully craft your value proposition. otherwise people don't get what you do, how you can help them and you'll lose them immediately
  • make sure that your landing page works. a value proposition, a deep dive into your client's big, expensive problem and a call to action are essential
  • if you do have an email list, don't send these spammy newsletters. personalize. give value. a lot

January 12, 2017

Highlights in Grilo 0.2.11 (and Plugins 0.2.13)

Hello, readers!

Some weeks ago we released a new version of Grilo and the Plugins set (yes, it sounds like a 70’s music group :smile:). You can read the announcement here and here. If you are more curious about...

Another year, another GUADEC

It’s 2014, and like previous years:

GUADEC 2014

This time I won’t give any talk, just relax and enjoy the talks from others, and hopefully Strasbourg too.

And what is more important, meet those hackers you interact with frequently, and maybe share some beers.

So...

Wed 2017/Jan/11

  • Reproducible font rendering for librsvg's tests

    The official test suite for SVG 1.1 consists of a bunch of SVG test files that use many of the features in the SVG specification. The test suite comes with reference PNGs: your SVG renderer is supposed to produce images that look like those PNGs.

    I've been adding test files from that test suite to librsvg as I convert things to Rust, and also when I refactor code that touches code for a particular kind of SVG element or filter.

    The SVG test suite is not a drop-in solution, however. The spec does not specify pixel-exact rendering. It doesn't mandate any specific kind of font rendering, either. The test suite is for eyeballing that tests render correctly, and each test has instructions on what to look for; it is not meant for automatic testing.

    The test files include text elements, and the font for those texts is specified in an interesting way. SVG supports referencing "SVG fonts": your image_with_text_in_it.svg can specify that it will reference my_svg_font.svg, and that file will have individual glyphs defined as normal SVG objects. "You draw an a with this path definition", etc.

    Librsvg doesn't support SVG fonts yet. (Patches appreciated!) As a provision for renderers which don't support SVG fonts, the test suite specifies fallbacks with well-known names like "sans-serif" and such.

    In the GNOME world, "sans-serif" resolves to whatever Fontconfig decides. Various things contribute to the way fonts are resolved:

    • The fonts that are installed on a particular machine.

    • The Fontconfig configuration that is on a particular machine: each distro may decide to resolve fonts in slightly different ways.

    • The user's personal ~/.fonts, and whether they are running gnome-settings-daemon and whether it monitors that directory for Fontconfig's perusal.

    • Phase of the moon, checksum of the clouds, polarity of the yak fields, etc.

    For silly reasons, librsvg's "make distcheck" doesn't work when run as a user; I need to run it as root. And as root, my personal ~/.fonts doesn't get picked up and also my particular font rendering configuration is different from the system's default (why? I have no idea — maybe I selected specific hinting/antialiasing at some point?).

    It has taken a few tries to get reproducible font rendering for librsvg's tests. Without reproducible rendering, the images that get rendered from the test suite may not match the reference images, depending on the font renderer's configuration and the available fonts.

    Currently librsvg does the following to get reproducible font rendering for the test suite (a rough sketch of the combined setup follows the list):

    • We use a specific cairo_font_options_t on our PangoContext. These options specify what antialiasing, hinting, and hint metrics to use, so that the environment's or user's configuration does not affect rendering.

    • We create a specific FcConfig and a PangoFontMap for testing, with a single font file that we ship. This will cause any font description, no matter if it is "sans-serif" or whatever, to resolve to that single font file. Special thanks to Christian Hergert for providing the relevant code from Gnome-builder.

    • We ship a font file as mentioned above, and just use it for the test suite.
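
    Put together, the setup looks roughly like the following sketch. This is not librsvg's actual code: "test-font.ttf" is a placeholder file name, and it assumes the fontconfig-based Pango backend, where the cairo font map is a PangoFcFontMap.

    #include <fontconfig/fontconfig.h>
    #include <pango/pangocairo.h>
    #include <pango/pangofc-fontmap.h>

    static PangoContext *
    create_reproducible_pango_context (void)
    {
        /* Private Fontconfig configuration that only knows about one shipped
         * font file, so "sans-serif" (or anything else) resolves to it. */
        FcConfig *config = FcConfigCreate ();
        FcConfigAppFontAddFile (config, (const FcChar8 *) "test-font.ttf");

        PangoFontMap *font_map = pango_cairo_font_map_new ();
        pango_fc_font_map_set_config (PANGO_FC_FONT_MAP (font_map), config);

        PangoContext *context = pango_font_map_create_context (font_map);

        /* Fixed antialiasing, hinting, and hint metrics, so neither the
         * user's nor the distro's configuration affects the output. */
        cairo_font_options_t *options = cairo_font_options_create ();
        cairo_font_options_set_antialias (options, CAIRO_ANTIALIAS_GRAY);
        cairo_font_options_set_hint_style (options, CAIRO_HINT_STYLE_FULL);
        cairo_font_options_set_hint_metrics (options, CAIRO_HINT_METRICS_ON);
        pango_cairo_context_set_font_options (context, options);
        cairo_font_options_destroy (options);

        return context;
    }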

    This seems to work fine. I can run "make check" both as my regular user with my private ~/.fonts stash, or as root with the system's configuration, and the test suite passes. This means that the rendered SVGs match the reference PNGs that get shipped with librsvg — this means reproducible font rendering, at least on my machine. I'd love to know if this works on other people's boxes as well.

January 11, 2017

Constraints editing

Last year I talked about the newly added support for Apple’s Visual Format Language in Emeus, which allows you to quickly describe layouts using a cross between ASCII art and predicates. For instance, I can use:

H:|-[icon(==256)]-[name_label]-|
H:[surname_label]-|
H:[email_label]-|
H:|-[button(<=icon)]
V:|-[icon(==256)]
V:|-[name_label]-[surname_label]-[email_label]-|
V:[button]-|

and obtain a layout like this one:

Boxes approximate widgets

Thanks to the contribution of my colleague Martin Abente Lahaye, now Emeus supports extensions to the VFL, namely:

  • arithmetic operators for constant and multiplication factors inside predicates, like [button1(button2 * 2 + 16)]
  • explicit attribute references, like [button1(button1.height / 2)]

This allows more expressive layout descriptions, like keeping aspect ratios between UI elements, without having to touch the code base.

Of course, editing VFL descriptions blindly is not what I consider a fun activity, so I took some time to write a simple, primitive editing tool that lets you visualize a layout expressed through VFL constraints:

I warned you that it was primitive and simple

Here’s a couple of videos showing it in action:

At some point, this could lead to a new UI tool to lay out widgets inside Builder and/or Glade.

As of now, I consider Emeus in a stable enough state for other people to experiment with it — I’ll probably make a release soon-ish. The Emeus website is up to date, as is the API reference, and I’m happy to review pull requests and feature requests.

January 10, 2017

Dear package managers: dependency resolution results should be in version control

If your build depends on a non-exact dependency version (like “somelibrary >= 3.1”), and the exact version gets recomputed every time you run the build, your project is broken.

  • You can no longer build old versions and get the same results.
  • Want to cut a bugfixes-only release from an old branch? Sorry.
  • Want to use git bisect? Nope.
  • You can’t rely on your code working because it will change by itself. Maybe it worked today, but that doesn’t mean it will work tomorrow. Maybe it worked in continuous integration, but that doesn’t mean it will work when deployed.
  • Wondering whether any dependency versions changed and when? No way to figure it out.

Package management and build tools should get this right by default. It is a real problem; I’ve seen it bite projects I’m working on countless times.

(I know that some package managers get it right, and good for them! But many don’t. Not naming names here because it’s beside the point.)

What’s the solution? I’d argue that it’s been well-known for a while. Persist the output of the dependency resolution process and keep it in version control.

  • Start with the “logical” description of the dependencies as hand-specified by the developers (leaf nodes only, with version ranges or minimum versions).
  • Have a manual update command to run the dependency resolution algorithm, arriving at an exhaustive list of all packages (ideally: identified by content hash and including results for all possible platforms). Write this to a file with deterministic sort order, and encourage keeping this file in git. This is sometimes called a “lock file.”
  • Both CI and production deployment should use the lock file to download and install an exact set of packages, ideally bit-for-bit content-hash-verified.
  • When you want to update dependencies, run the update command manually and submit a pull request with the new lock file, so CI can check that the update is safe. There will be a commit in git history showing exactly what was upgraded and when.

Bonus: downloading a bunch of fixed package versions can be extremely efficient; there’s no need to download package A in order to find its transitive dependencies and decide package B is needed, instead you can have a list of exact URLs and download them all in parallel.

You may say this is obvious, but several major ecosystems do not do this by default, so I’m not convinced it’s obvious.

Reproducible builds are (very) useful, and when package managers can’t snapshot the output of dependency resolution, they break reproducible builds in a way that matters quite a bit in practice.

(Note: of course this post is about the kind of package manager or build tool that manages packages for a single build, not the kind that installs packages globally for an OS.)

Dark title bars for apps with dark UI

I really like the polished look of GNOME and its default theme Adwaita, but there is one thing that has been bugging me for some time. By default, server side window decorations are light, and if an app has a dark UI and uses server side window decorations, you get a dark window with a light title bar. It doesn’t look very nice, and when you maximize the window it gets even worse because you get a nice black-and-white hamburger (black top bar, light title bar, and dark window content).

There are quite a few apps suffering from this: Atom, Firefox Developer Edition, Blender,…

But Mutter actually allows clients to set a theme variant for their window decorations even though they’re rendered on the server side. They just need to set the X window property _GTK_THEME_VARIANT=dark.

And I think the difference speaks for itself:

(Screenshots: the same app with a light title bar and with a dark title bar.)

You can test it by executing: xprop -f _GTK_THEME_VARIANT 8u -set _GTK_THEME_VARIANT dark

and clicking the window where it should apply.
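
Applications that don’t use GTK+ (like the ones mentioned above) can set the same property from code. Here is a minimal Xlib sketch, assuming you already have the Display and the toplevel X window of your app:

#include <string.h>
#include <X11/Xlib.h>

/* Equivalent of: xprop -f _GTK_THEME_VARIANT 8u -set _GTK_THEME_VARIANT dark */
static void
set_dark_titlebar (Display *dpy, Window win)
{
    const char *variant = "dark";
    Atom property = XInternAtom (dpy, "_GTK_THEME_VARIANT", False);
    Atom type = XInternAtom (dpy, "UTF8_STRING", False);

    XChangeProperty (dpy, win, property, type,
                     8 /* format: 8-bit items */, PropModeReplace,
                     (const unsigned char *) variant, strlen (variant));
    XFlush (dpy);
}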

Are you a user of one of the apps that would benefit from it? Or even a contributor? Try to convince the project to implement this tiny change. If you’re a distro maintainer of such an app, you may consider applying a small patch.


GNOME Gaming Handheld

Recently I got myself a GPD Win, to make it simple it's a PC in a Nintendo 3DS XL form factor, with a keyboard and a game controller. It comes with Windows 10 and many not too demanding games work perfectly on it: it's perfect to run indie games from Steam and for retro consoles emulation.

But who wants to simply play video games? Let's make it fun, let's put a penguin in it! On this GNOME wiki page I'll report all my findings on Linux support on this machine, focusing mainly on OpenSUSE for the moment. Wouldn't it be awesome to have a fully working and easily installable GNOME desktop running Games and Steam on this machine? 😃

Reviewing the Librem 15

Following up on my previous post where I detailed the work I’ve been doing mostly on Purism’s website, today’s post will cover some video work. Near the beginning of October, I received a Librem 15 v2 unit for testing and reviewing purposes. I have been using it as my main laptop since then, as I don’t believe in reviewing something without using it daily for at least a couple of weeks. And so on nights and week-ends, I wrote down testing results, rough impressions and recommendations, then wrote a detailed plan and script to make the first in-depth video review of this laptop. Here’s the result—not your typical 2-minute superficial tour:

With this review, I wanted to:

  • Satisfy my own curiosity and then share the key findings; one of the things that annoyed me some months ago is that I couldn’t find any good “up close” review video to answer my own technical questions, and I thought “Surely I’m not the only one! Certainly a bunch of other people would like to see what the beast feels like in practice.”
  • Make an audio+video production I would be proud of, artistically speaking. I’m rather meticulous in my craft, as I like creating quality work made to last (similarly, I have recently finished a particular decorative painting after months of obsession… I’ll let you know about that in some other blog post ;)
  • Put my production equipment to good use; I had recently purchased a lot of equipment for my studio and outdoors shooting—it was just begging to be used! Some details on that further down in this post.
  • Provide a ton of industrial design feedback to the Purism team for future models, based on my experience owning and using almost every laptop type out there. And so I did. Pages and pages of it, way more than can fit in a video:
Pictured: my review notes

A fine line

For the video, I spent a fair amount of time writing and revising my video’s plan and narration (half a dozen times at least), until I came to a satisfactory “final” version that I was able to record the narration for (using the exquisite ancient techniques of voice acting).

The tricky part is being simultaneously concise, exhaustive, fair, and entertaining. I wanted to be as balanced as possible and to cover the most crucial topics.

  • At the end of the day, there are some simple fundamentals of what makes a good or bad computer. Checking my 14 pages of review notes, I knew I was being extremely demanding, and that some of those expectations of mine came down to personal preference, things that most people don’t even care about, or common issues that are not even addressed by most “big brand” OEMs’ products… so I balanced my criticism with a dose of realism, making sure to focus on what would matter to people.
  • I also chose topics that would have a longer “shelf life”, considering how much work it takes to produce a high-quality video. For example, even while preparing the review over the course of 2-3 months, some aspects (such as the touchpad drivers) changed/improved and made me revise my opinion. The touchpad behaved better in Fedora 25 than in Fedora 24… until a kernel update broke it (Ugh. At that point I decided to version-lock my kernel package in Fedora).
  • I was conservative in my estimates, even if that makes the device look less shiny than it is. For example, while I said “5-6 hours” of battery life in the review video, in practice I realized that I can get 10-12 hours of battery life with my personal usage pattern (writing text in Gedit, no web browser open, 50% brightness, etc.) and a few simple tweaks in PowerTop.

The final version of my script, as I recorded it, was 59 minutes long. No jokes. And that was after I had decided to ignore some topics (e.g. the whole part about the preloaded operating system; that part would be long enough to be a standalone review).

I spent some days processing that 59 minutes recording to remove any sound impurities or mistakes, and sped up the tempo, bringing down the duration to 37 and then 31 minutes. Still, that was too long, so while I was doing the final video edit, I tightened everything further (removing as many speech gaps as possible) and cut out a few more topics at the last minute. The final rendered result is a video that is 21 minutes long. Much more reasonable, considering it’s an “in depth” review.

Real audio, lighting, and optics

My general work ethic is: when you do something, do it properly or don’t do it at all.

For this project I used studio lighting, tripods and stands, a dolly, a DSLR with two lenses, a smaller camera for some last minute shots, a high-end PCM sound recorder, a phantom-powered shotgun microphone, a recording booth, monitoring headphones, sandbags, etc.

To complement the narration and cover the length of the review, I chose seven different songs (out of many dozens) based on genre, mood and tempo. I sometimes had to cut/mix songs to avoid the annoying parts. The result, I hope, is a video that remains pleasant and entertaining to watch throughout, while also having a certain emotional or “material” quality to it. For example, the song I used for the thermal design portion makes me feel like I’m touching heatpipes and watching energy flow. Maybe that’s just me though—perhaps if there was some lounge music I would feel like a sofa ;)

Fun!

Much like when I made a video for a local symphonic orchestra, I have enjoyed the making of this review video immensely. I’m glad I got an interesting device to review (not a whole chicken in a can) with the freedom to make the “video I would have wanted to see”. On that note, if anyone has a fighter jet they’d want to see reviewed, let me know (I’m pretty good at dodging missiles ;)

January 07, 2017

The importance of the press kit

I'd like to share a few lessons I've learned about creating a press kit. This helped us spread the word about our recent FreeDOS 1.2 release, and it can help your open source software project to get more attention.
I'm part of several open source software projects, but probably the one that I'll be remembered for is FreeDOS. As an open source software implementation of DOS, you might not think that FreeDOS will get much attention in today's tech news. Yet when we released FreeDOS 1.2 a few weeks ago, we got a ton of news coverage.

Slashdot was the first to write about FreeDOS 1.2, but we also saw coverage from Engadget Germany, LWN, Heise Online, PC Forum Hungary, FOSS Bytes, ZDNet Germany, PC Welt, Tom's Hardware, and Open Source Feed. And that's just a sample of the news! There were articles from the US, Germany, Japan, Hungary, Ukraine, Italy, and others.

In reading the articles people had written about FreeDOS 1.2, I realized something that was both cool and insightful: most tech news sites re-used material from our press kit.

You see, in the weeks leading up to FreeDOS 1.2, I assembled additional information and resources about FreeDOS 1.2 release, including a bunch of screenshots and other images of FreeDOS in action. In an article posted to our website, I highlighted the press kit, and added "If you are writing an article about FreeDOS, feel free to use this information to help you." And they did!

We track a complete timeline of interesting events on our FreeDOS History page, including links to articles. Comparing the press coverage from FreeDOS 1.0, FreeDOS 1.1 and FreeDOS 1.2, we definitely saw the most articles about FreeDOS 1.2. And unlike previous releases where only a few tech news websites wrote articles about FreeDOS and other news outlets mostly referenced the first few sites, the coverage of FreeDOS 1.2 was mostly original articles. Only a small handful were references to news items from other news sites.

I put that down to the press kit. With the press kit, journalists were able to quickly pull interesting information and quotes about FreeDOS, and find images they could use in their articles. For a busy journalist who doesn't have much time to write about a free DOS implementation in 2016, our press kit made it easy to create something fresh. And news sites love to write their own stories rather than link to other news sites. That means more eyeballs for them.

Here are a few lessons I learned from creating our press kit:
Include basic information about your open source software project.
What is your project about? What does it do? How is it useful? Who uses it? What are the new features in this release? These are the basic questions any journalist will want to answer in their article, if they choose to write about you. In the FreeDOS press kit, I also included a history about FreeDOS, discussing how we got started in 1994 and some highlights from our timeline.
Write in a casual, conversational tone that's easy to quote.
In writing about your project, pretend you are writing an email to someone you know. Or if you prefer, write like you are posting something to a personal blog. Keep it informal. Avoid jargon. If your language is too stuffy or too technical, journalists will have a hard time quoting from you. In writing the FreeDOS press kit, I started by listing a few common questions that people usually ask me about FreeDOS, then I just responded to them like I was answering an email. My answers were often long, but the paragraphs were short so easier to skim.
Provide lots of screenshots of your project doing different things.
Whether your program runs from the command line or in a graphical environment, screenshots are key. And tech news sites like to use images; they are a cheap way to draw attention. So take lots of screenshots and include them in your press kit. Show all the major features through these screenshots. But be wary of background images and other branding that might distract from your screenshots. In particular, if the screenshot will show your desktop, set your wallpaper to the default for your operating system, or use a solid color in the range medium- to light-blue. For the FreeDOS press kit, I took a ton of screenshots of every step in the install process. I also grabbed screenshots of FreeDOS at the command line, running utilities and tools, and playing some of the games we installed.
Organize your material so it's easy to read.
You may find your press kit will become quite long. That's okay, as long as this doesn't make it difficult for someone to figure out what's there. Put the important stuff first. Use a table of contents, if you have a lot of information to share. Use headings and sections to break things up. If a journalist can't find the information they need to write an article about your project, they may skip it and write about something else. I organized our press kit like a simple website. An index page provided some basic information, with a list of links to other material contained in the press kit. I arranged our screenshots in separate "pages." And every page of screenshots started with a brief context, then listed the screenshots without much fanfare. But every screenshot included a description of what you were seeing. For example, I had over forty screenshots from installing FreeDOS, and I wrote a one-sentence description for each.
Be your own editor.
No matter how much work you put into it, no one will want to use your press kit if it is riddled with spelling errors and poor grammar. Consider writing your press kit material in a word processor and running a spell check against it. Read your text aloud and see if it makes sense to you. When you're done, try to look at your press kit from the perspective of someone who hasn't used your project before. Can they easily understand what it's about? To help you in this step, ask a friend to review the material for you.
Advertise, advertise, advertise!
Don't assume that tech news sites will seek you out. You need to reach out to them to let them know you have a new release coming up. Create your press kit well in advance, and about a week or two before your release, individually email every journalist or tech news website that might be interested in you. Most news sites have a "Contact us" link or list of editor "beats" where you can direct yourself to the writer or editor most likely to write about your topic. Craft a short email that lets them know who you are, what project you're from, when the next release will happen, and what new features it will include. Give them a link to the press kit directly in your email. But make the press kit easy to see in the email. Use the full URL to the press kit, and make it clickable. Also link to the press kit from your website, so anyone else who visits your project can quickly find the information they need to write an article.
By doing a little prep work before your next major release, you can increase the likelihood that others will write about you. And that means you'll get more people who discover your project, so your open source software project can grow.

chromietabs - getting information about open tabs in Google Chrome browser

This blog post is about the chromietabs [1] library, which provides information (the URL) about currently open tabs in the Google Chrome and Chromium web browsers.
TL;DR;

Motivation
I was always curious how I spend time using my computer - what applications I use, how much time I spend in a particular app, etc. I know there is plenty of software on the market that could track my activity; however, none of it met my requirements, so a couple of months ago I started the workertracker project [2], which does the job. I'll blog about the application in the future, since it's not ready yet (there are a few pre-releases, so feel free to test it). However, this post is about quite an important feature of the app - accessing the current URL of the browser (in google-chrome, for now).

chromietabs library
Since I couldn't find any good solution on the internet, I've decided to implement a tiny library that provides information about the active tab in the Google Chrome and Chromium web browsers.
The interface of the chromietabs library is very simple and consists of a few layers - depending on how detailed the information you need is, you use a different class. The example below demonstrates how you can access the URL of the current tab in Google Chrome:

#include <iostream>
// ...plus the chromietabs headers; see the repository [1] for the exact include paths.

ChromieTabs::SessionAnalyzer analyzer{
    ChromieTabs::SessionReader("Current Session")};
auto window_id = analyzer.get_current_window_id();
auto active_tab_id = analyzer.get_current_tab_id(window_id);
std::cout << analyzer.get_current_url(active_tab_id) << std::endl;

A full example can be found in the git repository [1].
You can also use the documentation [3].
Please note, that current release (0.1) is a pre-release, and the API might change a bit.

How does it work?
I've noticed that when I kill (not close) google-chrome, it's able to restore my tabs after the crash. That means it has to constantly update some file, saving information about the tabs. I was right - there is a binary file in your profile directory - Current Session - that stores that information.
Unfortunately, Current Session is a binary file, so I had to go through the chromium source code to figure out the file format.

Feedback
Feedback is always appreciated! Feel free to comment, report issues[4], or create pull requests [5].

Links
[1] https://github.com/loganek/chromietabs
[2] https://github.com/loganek/workertracker
[3] https://loganek.github.io/chromietabs/master//index.html
[4] https://github.com/loganek/chromietabs/issues
[5] https://github.com/loganek/chromietabs/pulls

ModemManager in OpenWRT (take #2)


I’ve been lately working on integrating ModemManager in OpenWRT, in order to provide a unique and consolidated way to configure and manage mobile broadband modems (2G, 3G, 4G, Iridium…), all working with netifd.

OpenWRT already has some support for a lot of the devices that ModemManager is able to manage (e.g. through the uqmi, umbim or wwan packages), but unlike the current solutions, ModemManager doesn’t require protocol-specific configurations or setups for the different devices; i.e. the configuration for a modem running in MBIM mode may be the same one as the configuration for a modem requiring AT commands and a PPP session.

Currently the OpenWRT package prepared is based on ModemManager git master, and therefore it supports: QMI modems (including the new MC74XX series which are raw-ip only and don’t support DMS UIM operations), MBIM modems, devices requiring QMI over MBIM operations (e.g. FCC auth), and of course generic AT+PPP based modems, Cinterion, Huawei (both AT+PPP and AT+NDISDUP), Icera, Haier, Linktop, Longcheer, Ericsson MBM, Motorola, Nokia, Novatel, Option (AT+PPP and HSO), Pantech, Samsung, Sierra Wireless (AT+PPP and DirectIP), Simtech, Telit, u-blox, Wavecom, ZTE… and even Iridium and Thuraya satellite modems. All with the same configuration.

Along with ModemManager itself, the OpenWRT feed also contains libqmi and libmbim, which provide the qmicli, mbimcli, and soon the qmi-firmware-update utilities. Note that you can also use these command line tools, even if ModemManager is running, via the qmi-proxy and mbim-proxy setups (i.e. just adding -p to the qmicli or mbimcli commands).

This is not the first time I’ve tried to do this; but this time I believe it is a much more complete setup and likely ready for others to play with it. You can jump to the modemmanager-openwrt bitbucket repository and follow the instructions to include it in your OpenWRT builds:

https://bitbucket.org/aleksander0m/modemmanager-openwrt

The following sections try to get into a bit more detail of which were the changes required to make all this work.

And of course, thanks to VeloCloud for sponsoring the development of the latest ModemManager features that made this integration possible 🙂

udev vs hotplug

One of the biggest features recently merged in ModemManager was the possibility to run without udev support; i.e. without automatically monitoring the device additions and removals happening in the system.

Instead of using udev, the mmcli command line tool ended up with a new --report-kernel-event option that can be used to report device additions and removals manually, e.g.:

$ mmcli --report-kernel-event="action=add,subsystem=tty,name=ttyUSB0"
$ mmcli --report-kernel-event="action=add,subsystem=net,name=wwan0"

This new way of notifying device events made it very easy to integrate the automatic device discovery supported in ModemManager directly via tty and net hotplug scripts (see mm_report_event()).

With the integration in the hotplug scripts, ModemManager will automatically detect and probe the different ports exposed by the broadband modem devices.

udev rules

ModemManager relies on udev rules for different things:

  • Blacklisting devices: E.g. we don’t want ModemManager to claim and probe the TTYs exposed by Arduinos or braille displays. The package includes a USB vid:pid based blacklist of devices that expose TTY ports and are not modems to be managed by ModemManager.
  • Blacklisting ports: There are cases where we don’t want the automatic selection logic to grab and use some specific modem ports, so the package also provides a much shorter list of ports blacklisted from actual modem devices. E.g. the QMI implementation in some ZTE devices is so poor that we decided to completely skip it and fall back to AT+PPP.
  • Greylisting USB serial adapters: The TTY ports exposed by USB serial adapters aren’t probed automatically, as we don’t know what’s connected in the serial side. If we want to have a serial modem, though, the mmcli --scan-modems operation may be executed, which will include the probing of these greylisted devices.
  • Specifying port type hints: Some devices expose multiple AT ports, but with different purposes. E.g. a modem may expose a port for AT control and another port for the actual PPP session, and choosing the wrong one will not work. ModemManager includes a list of port type hints so that the automatic selection of which port is for what purpose is done transparently.

As we’re not using udev when running in OpenWRT, ModemManager includes now a custom generic udev rules parser that uses sysfs properties to process and apply the rules.

procd based startup

The ModemManager daemon is setup to be started and controlled via procd. The init script controlling the startup will also take care of scheduling the re-play of the hotplug events that had earlier triggered --report-kernel-event actions (they’re cached in /tmp); e.g. to cope with events coming before the daemon started or to handle daemon restarts gracefully.

DBus

Well, no, I didn’t port ModemManager to use ubus 🙂 If you want to run ModemManager under OpenWRT you’ll also need to have the DBus daemon running.

netifd protocol handler

When using ModemManager, the user shouldn’t need to know the peculiarities of the modem being used: all modems and protocols (QMI, MBIM, Generic AT, vendor-specific AT…) are all managed via the same single DBus interfaces. All the modem control commands are internal to ModemManager, and the only additional considerations needed are related to how to setup the network interface once the modem is connected, e.g.:

  • PPP: some modems require a PPP session over a serial port.
  • Static: some modems require static IP configuration on a network interface.
  • DHCP: some modems require dynamic IP configuration on a network interface.

The OpenWRT package for ModemManager includes a custom protocol handler that enables the modemmanager protocol to be used when configuring network interfaces. This new protocol handler takes care of configuring and bringing up the interfaces as required when the modem gets into connected state.

Example configuration

The following snippet shows an example interface configuration to set in /etc/config/network.

 config interface 'broadband'
   option device '/sys/devices/platform/soc/20980000.usb/usb1/1-1/1-1.2/1-1.2.1'
   option proto 'modemmanager'
   option apn 'ac.vodafone.es'
   option username 'vodafone'
   option password 'vodafone'
   option pincode '7423'
   option lowpower '1'

The settings currently supported are the following ones:

  • device: The full sysfs path of the broadband modem device needs to be configured. Relying on the interface names exposed by the kernel is never a good idea, as these may change e.g. across reboots or when more than one modem device is available in the system.
  • proto: As said earlier, the new modemmanager protocol needs to be configured.
  • apn: If the connection requires an APN, the APN to use.
  • username: If the access point requires authentication, the username to use.
  • password: If the access point requires authentication, the password to use.
  • pincode: If the SIM card requires a PIN, the code to use to unlock it.
  • lowpower: If enabled, this setting will request the modem to go into low-power state (i.e. IMSI detach and RF off) when the interface is disconnected.

As you can see, the configuration can be used for any kind of modem device, regardless of which control protocol it uses, which interfaces are exposed, or how the connection is established. The settings are currently IPv4 only, but adding IPv6 support shouldn’t be a big issue, patches welcome 🙂

SMS, USSD, GPS…

The main purpose of using a mobile broadband modem is of course the connectivity itself, but it also may provide many more features. ModemManager provides specific interfaces and mmcli actions for the secondary features which are also available in the OpenWRT integration, including:

  • SMS messaging (both 3GPP and 3GPP2).
  • Location information (3GPP LAC/CID, CDMA Base station, GPS…).
  • Time information (as reported by the operator).
  • 3GPP USSD operations (e.g. to query prepaid balance to the operator).
  • Extended signal quality information (RSSI, Ec/Io, LTE RSRQ and RSRP…).
  • OMA device management operations (e.g. to activate CDMA devices).
  • Voice call control.

Worth noting that not all these features are available for all modem types (e.g. SMS messaging is available for most devices, but OMA DM is only supported in QMI based modems).

TL;DR?

You can now have your 2G/3G/4G mobile broadband modems managed with ModemManager and netifd in your OpenWRT based system.


Filed under: Development, FreeDesktop Planet, GNOME Planet, Planets Tagged: libmbim, libqmi, ModemManager, openwrt

January 06, 2017

FEDORA and GNOME on VBOX at USIL

I have started Linux classes at University as a professor this 2017! The course is a review of the GNU/Linux story, followed by the installation and use of commands to manage the terminal. At the end of the course, some services are also set up to prepare students for the IT infrastructure world.

To start this new adventure, I have recommended they post some experiences from class:

These are some pictures that record this new group of 20 at the USIL lab 🙂

Thanks USIL (Universidad San Ignacio de Loyola) ❤


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: fedora, FEDORA 25, GNOME, Julita Inca, Julita Inca Chiroque, lab Linux, Lima Peru, linux, Perú, Universidad San Ignacion de Loyola, USIL

January 05, 2017

Process API for NoFlo components

It has been a while since I’ve written about flow-based programming — but now that I’m putting most of my time into Flowhub, things are moving really quickly.

One example is the new component API in NoFlo that has been emerging over the last year or so.

Most of the work described here was done by Vladimir Sibirov from The Grid team.

Introducing the Process API

NoFlo programs consist of graphs where different nodes are connected together. These nodes can themselves be graphs, or they can be components written in JavaScript.

A NoFlo component is simply a JavaScript module that provides a certain interface that allows NoFlo to run it. In the early days there was little convention on how to write components, but over time some conventions emerged, and with them helpers to build well-behaved components more easily.

Now with the upcoming NoFlo 0.8 release we’ve taken the best ideas from those helpers and rolled them back into the noflo.Component base class.

So, what does a component written using the Process API look like?

// Load the NoFlo interface
var noflo = require('noflo');
// Also load any other dependencies you have
var fs = require('fs');

// Implement the getComponent function that NoFlo's component loader
// uses to instantiate components to the program
exports.getComponent = function () {
  // Start by instantiating a component
  var c = new noflo.Component();

  // Provide some metadata, including icon for visual editors
  c.description = 'Reads a file from the filesystem';
  c.icon = 'file';

  // Declare the ports you want your component to have, including
  // their data types
  c.inPorts.add('in', {
    datatype: 'string'
  });
  c.outPorts.add('out', {
    datatype: 'string'
  });
  c.outPorts.add('error', {
    datatype: 'object'
  });

  // Implement the processing function that gets called when the
  // inport buffers have packets available
  c.process(function (input, output) {
    // Precondition: check that the "in" port has a data packet.
    // Not necessary for single-inport components but added here
    // for the sake of demonstration
    if (!input.hasData('in')) {
      return;
    }

    // Since the preconditions matched, we can read from the inport
    // buffer and start processing
    var filePath = input.getData('in');
    fs.readFile(filePath, 'utf-8', function (err, contents) {
      // In case of errors we can just pass the error to the "error"
      // outport
      if (err) {
        output.done(err);
        return;
      }

      // Send the file contents to the "out" port
      output.send({
        out: contents
      });
      // Tell NoFlo we've finished processing
      output.done();
    });
  });

  // Finally return the component to the loader
  return c;
}

Most of this is still the same component API we’ve had for quite a while: instantiation, component metadata, port declarations. What is new is the process function and that is what we’ll focus on.

When is process called?

NoFlo components call their processing function whenever they’ve received packets to any of their regular inports.

In general any new information packets received by the component cause the process function to trigger. However, there are some exceptions:

  • Non-triggering ports don’t cause the function to be called
  • Ports that have been set to forward brackets don’t cause the function to be called on bracket IPs, only on data

Handling preconditions

When the processing function is called, the first job is to determine if the component has received enough data to act. These “firing rules” can be used for checking things like:

  • When having multiple inports, do all of them contain data packets?
  • If multiple input packets are to be processed together, are all of them available?
  • If receiving a stream of packets, is the complete stream available?
  • Any input synchronization needs in general

The NoFlo component input handler provides methods for checking the contents of the input buffer. Each of these return a boolean if the conditions are matched:

  • input.has('portname') whether an input buffer contains packets of any type
  • input.hasData('portname') whether an input buffer contains data packets
  • input.hasStream('portname') whether an input buffer contains at least one complete stream of packets

For convenience, has and hasData can be used to check multiple ports at the same time. For example:

// Fail precondition check unless both inports have a data packet
if (!input.hasData('in1', 'in2')) return;

For more complex checking it is also possible to pass a validation function to the has method. This function will get called for each information packet in the port(s) buffer:

// We want to process only when color is green
var validator = function (packet) {
  if (packet.data.color === 'green') {
    return true;
  }
  return false;
}
// Run all packets in in1 and in2 through the validator to
// check that our firing conditions are met
if (!input.has('in1', 'in2', validator)) return;

The firing rules should be checked in the beginning of the processing function before we start actually reading packets from the buffer. At that stage you can simply finish the run with a return.

Processing packets

Once your preconditions have been met, it is time to read packets from the buffers and start doing work with them.

For reading packets there are equivalent get functions to the has functions used above:

  • input.get('portname') read the first packet from the port’s buffer
  • input.getData('portname') read the first data packet, discarding preceding bracket IPs if any
  • input.getStream('portname') read a whole stream of packets from the port’s buffer

For get and getStream you receive whole IP objects. For convenience, getData returns just the data payload of the data packet.

When you have read the packets you want to work with, the next step is to do whatever your component is supposed to do. Do some simple data processing, call some remote API function, or whatever. NoFlo doesn’t really care whether this is done synchronously or asynchronously.

Note: once you read packets from an inport, the component activates. After this it is necessary to finish the process by calling output.done() when you’re done.

Sending packets

While the component is active, it can send packets to any number of outports using the output.send method. This method accepts a map of port names and information packets.

output.send({
  out1: new noflo.IP('data', "some data"),
  out2: new noflo.IP('data', [1, 2, 3])
});

For data packets you can also just send the data as-is, and NoFlo will wrap it to an information packet.

Once you’ve finished processing, simply call output.done() to deactivate the component. There is also a convenience method that is a combination of send and done. This is useful for simple components:

c.process(function (input, output) {
  var data = input.getData('in');
  // We just add one to the number we received and send it out
  output.sendDone({
    out: data + 1
  });
});

In normal situations packets are transmitted immediately. However, when working on individual packets that are part of a stream, NoFlo components keep an output buffer to ensure that packets from the stream are transmitted in the original order.

Component lifecycle

In addition to making input processing easier, the other big aspect of the Process API is to help formalize NoFlo’s component and program lifecycle.

NoFlo program lifecycle

The component lifecycle is quite similar to the program lifecycle shown above. There are three states:

  • Initialized: the component has been instantiated in a NoFlo graph
  • Activated: the component has read some data from inport buffers and is processing it
  • Deactivated: all processing has finished

Once all components in a NoFlo network have deactivated, the whole program is finished.

Components are only allowed to do work and send packets when they’re activated. They shouldn’t do any work before receiving input packets, and should not send anything after deactivating.

Generator components

Regular NoFlo components only send data associated with input packets they’ve received. One exception is generators, a class of components that can send packets whenever something happens.

Some examples of generators include:

  • Network servers that listen to requests
  • Components that wait for user input like mouse clicks or text entry
  • Timer loops

The same rules of “only send when activated” apply also to generators. However, they can utilize the processing context to self-activate as needed:

exports.getComponent = function () {
 var c = new noflo.Component();
 c.inPorts.add('start', { datatype: 'bang' });
 c.inPorts.add('stop', { datatype: 'bang' });
 c.outPorts.add('out', { datatype: 'bang' });
 // Generators generally want to send data immediately and
 // not buffer
 c.autoOrdering = false;

 // Helper function for clearing a running timer loop
 var cleanup = function () {
   // Clear the timer
   clearInterval(c.timer.interval);
   // Then deactivate the long-running context
   c.timer.deactivate();
   c.timer = null;
 }

 // Receive the context together with input and output
 c.process(function (input, output, context) {
   if (input.hasData('start')) {
     // We've received a packet to the "start" port
     // Stop the previous interval and deactivate it, if any
     if (c.timer) {
       cleanup();
     }
     // Activate the context by reading the packet
     input.getData('start');
     // Set the activated context to component so it can
     // be deactivated from the outside
     c.timer = context
     // Start generating packets
     c.timer.interval = setInterval(function () {
       // Send a packet
       output.send({
         out: true
       });
     }, 100);
     // Since we keep the generator running we don't
     // call done here
   }

   if (input.hasData('stop')) {
     // We've received a packet to the "stop" port
     input.getData('stop');
     if (!c.timer) {
       // No timers running, we can just finish here
       output.done();
       return;
     }
     // Stop the interval and deactivate
     cleanup();
     // Also call done for this one
     output.done();
   }
 });

 // We also may need to clear the timer at network shutdown
 c.shutdown = function () {
   if (c.timer) {
     // Stop the interval and deactivate
     cleanup();
   }
   c.emit('end');
   c.started = false;
 }
}

Time to prepare

NoFlo 0.7 included a preview version of the Process API. However, last week during the 33C3 conference we finished some tricky bits related to process lifecycle and automatic bracket forwarding that make it more useful for real-life NoFlo applications.

These improvements will land in NoFlo 0.8, due out soon.

So, if you’re maintaining a NoFlo application, now is a good time to give the git version a spin and look at porting your components to the new API. Make sure to report any issues you encounter!

We’re currently migrating all the hundred-plus NoFlo open source modules to the latest build and testing process so that they can be easily updated to the new APIs when they land.
