GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

May 27, 2017

Free Ideas for UI Frameworks, or How To Achieve Polished UI

Ever since the original iPhone came out, I’ve had several ideas about how they managed to achieve such fluidity with relatively mediocre hardware. I mean, it was good at the time, but Android still struggles on hardware that makes that look like a 486… It’s absolutely my fault that none of these have been implemented in any open-source framework I’m aware of, so instead of sitting on these ideas and trotting them out at the pub every few months as we reminisce over what could have been, I’m writing about them here. I’m hoping that either someone takes them and runs with them, or that they get thoroughly debunked and I’m made to look like an idiot. The third option is of course that they’re ignored, which I think would be a shame, but given I’ve not managed to get the opportunity to implement them over the last decade, that would hardly be surprising. I feel I should clarify that these aren’t all my ideas, but a mix of observation and conjecture about contemporary software. This somewhat follows on from the post I made 6 years ago(!). So let’s begin.

1. No main-thread UI

The UI should always be able to start drawing when necessary. As careful as you may be, it’s practically impossible to write software that will remain perfectly fluid when the UI can be blocked by arbitrary processing. This seems like an obvious one to me, but I suppose the problem is that legacy makes it very difficult to adopt this at a later date. That said, difficult but not impossible. All the major web browsers have adopted this policy, with caveats here and there. The trick is to switch from the idea of ‘painting’ to the idea of ‘assembling’ and then using a compositor to do the painting. Easier said than done, of course: most frameworks include the ability to extend painting in a way that would make it impossible to switch to a different thread without breaking things. But as long as it’s possible to block UI, it will inevitably happen.
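
To make the ‘assembling’ idea concrete, here is a minimal sketch (hypothetical code, not taken from any particular framework; all the names are made up): the main thread only publishes a display list, and a dedicated compositor thread is the only thing that ever paints.

/* Hypothetical sketch: the UI thread never paints; it only assembles a
 * display list and publishes it. The compositor thread paints whatever
 * was last published, so painting cannot be blocked by application logic. */
#include <pthread.h>
#include <stddef.h>

typedef struct DisplayList DisplayList;       /* opaque list of draw commands */

/* Hypothetical helpers provided elsewhere in this sketch. */
void paint_display_list (DisplayList *scene);
void free_display_list (DisplayList *scene);
void wait_for_vblank (void);

static pthread_mutex_t scene_lock = PTHREAD_MUTEX_INITIALIZER;
static DisplayList *pending_scene = NULL;     /* latest scene from the UI thread */

/* UI thread: assemble a new display list and hand it over. */
void
publish_scene (DisplayList *new_scene)
{
  pthread_mutex_lock (&scene_lock);
  DisplayList *old = pending_scene;
  pending_scene = new_scene;
  pthread_mutex_unlock (&scene_lock);

  if (old != NULL)
    free_display_list (old);                  /* superseded before it was drawn */
}

/* Compositor thread: always free to paint, regardless of what the UI thread is doing. */
void *
compositor_thread (void *data)
{
  for (;;)
    {
      pthread_mutex_lock (&scene_lock);
      DisplayList *scene = pending_scene;
      pending_scene = NULL;
      pthread_mutex_unlock (&scene_lock);

      if (scene != NULL)
        {
          paint_display_list (scene);
          free_display_list (scene);
        }

      wait_for_vblank ();                     /* hypothetical frame pacing */
    }
  return NULL;
}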

2. Contextually-aware compositor

This follows on from the first point; what’s the use of having non-blocking UI if it can’t respond? Input needs to be handled away from the main thread also, and the compositor (or whatever you want to call the thread that is handling painting) needs to have enough context available that the first response to user input doesn’t need to travel to the main thread. Things like hover states, active states, animations, pinch-to-zoom and scrolling all need to be initiated without interaction on the main thread. Of course, main thread interaction will likely eventually be required to update the view, but that initial response needs to be able to happen without it. This is another seemingly obvious one – how can you guarantee a response rate unless you have a thread dedicated to responding within that time? Most browsers are doing this, but not going far enough in my opinion. Scrolling and zooming are often catered for, but not hover/active states, or initialising animations (note: initialising animations. Once they’ve been initialised, they are indeed run on the compositor, usually).

3. Memory bandwidth budget

This is one of the less obvious ideas and something I’ve really wanted to have a go at implementing, but never had the opportunity. A problem I saw a lot while working on the platform for both Firefox for Android and FirefoxOS is that given the work-load of a web browser (which is not entirely dissimilar to the work-load of any information-heavy UI), it was very easy to saturate memory bandwidth. And once you saturate memory bandwidth, you end up having to block somewhere, and painting gets delayed. We’re assuming UI updates are asynchronous (because of course – otherwise we’re blocking on the main thread). I suggest that it’s worth tracking frame time, and only allowing large asynchronous transfers (e.g. texture upload, scaling, format transforms) to take a certain amount of time. After that time has expired, it should wait on the next frame to be composited before resuming (assuming there is a composite scheduled). If the composited frame was delayed to the point that it skipped a frame compared to the last unladen composite, the amount of time dedicated to transfers should be reduced, or the transfer should be delayed until some arbitrary time (i.e. it should only be considered ok to skip a frame every X ms).
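
As a rough sketch of how such a budget might look (hypothetical code and numbers, purely to illustrate the idea): large transfers ask for permission against a per-frame budget, and the budget shrinks whenever a composite misses its deadline.

/* Hypothetical sketch of a per-frame transfer budget: large asynchronous
 * transfers (texture uploads, scaling, format conversions) may only consume
 * a slice of each frame, and the slice shrinks when a composite is late. */
#include <stdbool.h>
#include <stdint.h>

#define FRAME_INTERVAL_US 16667            /* ~60Hz */

static int64_t transfer_budget_us = 4000;  /* arbitrary starting slice */
static int64_t spent_this_frame_us = 0;

/* Called by the compositor after each composite, with how long the frame took. */
void
frame_composited (int64_t frame_duration_us)
{
  if (frame_duration_us > FRAME_INTERVAL_US)
    transfer_budget_us = transfer_budget_us * 3 / 4;   /* we skipped a frame: back off */
  else if (transfer_budget_us < 4000)
    transfer_budget_us += 250;                         /* recover slowly while keeping up */

  spent_this_frame_us = 0;
}

/* Called before starting a large transfer; if it returns false, the transfer
 * waits for the next composite instead of competing with it for bandwidth. */
bool
transfer_may_start (int64_t estimated_cost_us)
{
  if (spent_this_frame_us + estimated_cost_us > transfer_budget_us)
    return false;

  spent_this_frame_us += estimated_cost_us;
  return true;
}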

It’s interesting that you can see something very similar to this happening in early versions of iOS (I don’t know if it still happens or not) – when scrolling long lists with images that load in dynamically, none of the images will load while the list is animating. The user response was paramount, to the point that it was considered more important to present consistent response than it was to present complete UI. This priority, I think, is a lot of the reason the iPhone feels ‘magic’ and Android phones felt like junk up until around 4.0 (where it’s better, but still not as good as iOS).

4. Level-of-detail

This is something that I did get to partially implement while working on Firefox for Android, though I didn’t do such a great job of it so its current implementation is heavily compromised from how I wanted it to work. This is another idea stolen from game development. There will be times, during certain interactions, where processing time will be necessarily limited. Quite often though, during these times, a user’s view of the UI will be compromised in some fashion. It’s important to understand that you don’t always need to present the full-detail view of a UI. In Firefox for Android, this took the form that when scrolling fast enough that rendering couldn’t keep up, we would render at half the resolution. This let us render more, and faster, giving the impression of a consistent UI even when the hardware wasn’t quite capable of it. I notice Microsoft doing similar things since Windows 8; notice how the quality of image scaling reduces markedly while scrolling or animations are in progress. This idea is very implementation-specific. What can be dropped and what you want to drop will differ between platforms, form-factors, hardware, etc. Generally though, some things you can consider dropping: Sub-pixel anti-aliasing, high-quality image scaling, render resolution, colour-depth, animations. You may also want to consider showing partial UI if you know that it will very quickly be updated. The Android web-browser during the Honeycomb years did this, and I attempted (with limited success, because it’s hard…) to do this with Firefox for Android many years ago.
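
A trivial illustration of the idea (hypothetical code, not the actual Firefox for Android implementation): pick a cheaper quality level whenever frames are being missed or the content is moving quickly.

/* Hypothetical level-of-detail selection: drop expensive rendering features
 * while the user is scrolling fast or the renderer is falling behind. */
#include <stdbool.h>

typedef struct {
  double render_scale;          /* 1.0 = full resolution, 0.5 = half resolution */
  bool   subpixel_aa;           /* sub-pixel anti-aliasing */
  bool   high_quality_scaling;  /* high-quality image scaling */
} RenderQuality;

RenderQuality
choose_quality (double scroll_velocity_px_per_s, double last_frame_ms)
{
  RenderQuality q = { 1.0, true, true };

  if (last_frame_ms > 16.0 || scroll_velocity_px_per_s > 2000.0)
    {
      q.render_scale = 0.5;            /* render more, faster, at half resolution */
      q.subpixel_aa = false;
      q.high_quality_scaling = false;
    }

  return q;
}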

Pitfalls

I think it’s easy to read ideas like this and think it boils down to “do everything asynchronously”. Unfortunately, if you take a naïve approach to that, you just end up with something that can be inexplicably slow sometimes and the only way to fix it is via profiling and micro-optimisations. It’s very hard to guarantee a consistent experience if you don’t manage when things happen. Yes, do everything asynchronously, but make sure you do your book-keeping and you manage when it’s done. It’s not only about splitting work up, it’s about making sure it’s done when it’s smart to do so.

You also need to be careful about how you measure these improvements, and to be aware that sometimes results in synthetic tests will even correlate to the opposite of the experience you want. A great example of this, in my opinion, is page-load speed on desktop browsers. All the major desktop browsers concentrate on prioritising the I/O and computation required to get the page to 100%. For heavy desktop sites, however, this means the browser is often very clunky to use while pages are loading (yes, even with out-of-process tabs – see the point about bandwidth above). I highlight this specifically on desktop, because you’re quite likely to not only be browsing much heavier sites that trigger this behaviour, but also to have multiple tabs open. So as soon as you load a couple of heavy sites, your entire browsing experience is compromised. I wouldn’t mind the site taking a little longer to load if it didn’t make the whole browser chug while doing so.

Don’t lose sight of your goals. Don’t compromise. Things might take longer to complete, deadlines might be missed… But polish can’t be overrated. Polish is what people feel and what they remember, and the lack of it can have a devastating effect on someone’s perception. It’s not always conscious or obvious either, even when you’re the developer. Ask yourself “Am I fully satisfied with this?” before marking something as complete. You might still be able to ship if the answer is “No”, but make sure you don’t lose sight of that and make sure it gets the priority it deserves.

One last point I’ll make; I think to really execute on all of this, it requires buy-in from everyone. Not just engineers, not just engineers and managers, but visual designers, user experience, leadership… Everyone. It’s too easy to do a job that’s good enough and it’s too much responsibility to put it all on one person’s shoulders. You really need to be on the ball to produce the kind of software that Apple does almost routinely, but as much as they’d say otherwise, it isn’t magic.

May 26, 2017

gresg – an XML resources generator

For me, creating GTK+ custom widgets is a very common task, and so is using templates for them.

Using GTK+ widgets defined by UI XML files, possibly created with Glade, is a powerful feature.

Once you create your UI file, you should add it to a gresource XML file too, so that you can use glib-compile-resources to compile it and, if you wish, embed it in your binaries.
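
As a reminder of how the pieces fit together, here is a minimal sketch (with a hypothetical widget and resource path) of a template widget loading its UI file from the compiled resource:

/* Hypothetical example: once my-widget.ui is listed in the gresource XML and
 * compiled with glib-compile-resources, the widget loads its template
 * directly from the resource path. */
#include <gtk/gtk.h>

#define MY_TYPE_WIDGET (my_widget_get_type ())
G_DECLARE_FINAL_TYPE (MyWidget, my_widget, MY, WIDGET, GtkBox)

struct _MyWidget
{
  GtkBox parent_instance;
};

G_DEFINE_TYPE (MyWidget, my_widget, GTK_TYPE_BOX)

static void
my_widget_class_init (MyWidgetClass *klass)
{
  gtk_widget_class_set_template_from_resource (GTK_WIDGET_CLASS (klass),
                                               "/org/example/App/my-widget.ui");
}

static void
my_widget_init (MyWidget *self)
{
  gtk_widget_init_template (GTK_WIDGET (self));
}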

Once your project is big enough, you may end up with a large gresource XML file. Regenerating the compiled resources whenever a resource changes can be tricky, manual work.

So I’ve created gresg, a tool to automatically generate an XML gresource file from a list of files to be compiled with glib-compile-resources. This will help you trigger a rebuild of the compiled resources any time you make changes to your files.

gresg is written in Vala and uses GXml to generate the XML resource files. As you can see in gresg’s repository, it is a very small program.

If you are using Meson you can create a custom target to generate your XML resources, but you need this patch applied to Vala to take full advantage of it and get automatic rebuilds of resources.

C++ Implementations of “Engineering the Compiler” Pseudocode

Implementing is understanding

I’m gradually reading through the Engineering a Compiler book. I’m enjoying it but after a while I started wondering if I really understood the algorithms and I wanted to make sure of that before going further. So I implemented the algorithms in C++ (C++17, because it’s nice), testing them against the example inputs and grammars from the book. For instance, here is my code to construct, and use, the action and goto tables which define a DFA for a bottom-up table-driven LR(1) parser, along with some test code to use it with a simple expression grammar.

So far I’ve done this for Chapter 2 (Scanners) and Chapter 3 (Parsers) and I’ll try to keep going. It is a useful exercise for me, and maybe it’s useful to someone else reading the book.  Please note that the code will probably be meaningless to you if you haven’t read the book or something similar. On the other hand, the code will probably seem childlike if you’ve studied compilers properly.

Trying to get  the code to work often showed me that I had had only the illusion of fully understanding. Much of the pseudocode in the book is not very clear, even when you adjust your mind to its vaguely mathematical syntax, and to know what it really means you have to closely read the text descriptions, sometimes finding clues several pages back. For instance it’s not always clear what is meant by a particular symbol, and I found at least one conditional check that appeared in the description but not in the code. I would much rather see descriptions inline in the code.

Pseudocode is vague

I’m not a fan of pseudocode either, though I understand the difficulty in choosing a real programming language for examples. It means putting some readers off. But there could at least be some real code in an appendix. The book has some detailed walkthroughs that show that the authors must have implemented the algorithms somehow, so it’s a shame that I couldn’t just look at that code. I wonder what real programming language might be the most compact for manipulating sets of states when dealing with finite state automata states and grammar symbols.

For the parsers chapter this wasn’t helped by the, presumably traditional, ambiguity of the “FIRST” sets terminology. FIRST(symbol), FIRST(symbols), FIRST(production), and FIRST+(production) are all things.

My code isn’t meant to be efficient. For instance, my LR(1) parser table-building code has some awfully inefficient use of std::set as a key in a std::map. The code is already lengthier than I’d like, so I would prefer not to make it more efficient at the cost of readability. I’ve also probably overdone it with the half-constexpr and vaguely-concepty-generic Grammar classes. But I am tempted by the idea of making these fully constexpr with some kind of constexpr map, and then one day having the compiler build the generated (DFA) tables at compile time, so I wanted to explore that just a little.

But the book is good

Despite my complaints about the presentation of the algorithms, so far I’d still recommend the book. I feel like it’s getting the ideas into my head; it’s not really that bad, and as far as I know there’s nothing better. Of course, given that I haven’t read the alternatives, my recommendation shouldn’t mean much.

Also, these days I always write up my notes as a bigoquiz.com quiz, so here are my quizzable compiler notes so far.

May 25, 2017

OS2: Danish Municipalities Collaborating in the Open


OS²: The public digitization association. (CC-BY-SA 4.0)

OS² is an association for Danish municipalities to pool together efforts in building a free and open source IT infrastructure. I first heard about it at the LibreOffice conference held in Aarhus back in 2015, through a talk about “BibOS” and “TING”. Those early efforts have since inspired a formal association for municipalities which hosts a number of open source IT components. The components are developed, installed and supported by external suppliers who are hired by municipalities individually or together. This approach has benefits both for the municipalities and the suppliers compared to traditional license-based solutions.

Out of curiosity I decided to attend an open general assembly for OS². Municipalities participating in the association and a number of suppliers were present as well. Rasmus Frey, OS²’s business manager, opened the general assembly, explaining the highlights in OS² of the past year.


Rasmus Frey presenting the past year’s highlights in OS² (CC-BY-SA 4.0).

Efforts have been made over the past year to use a governance model to transform OS² into a platform usable both for playground projects in development and for production-ready solutions. The transformation is a step in the process of making the OS² platform a viable alternative for municipalities coming from other systems and solutions.

OS² currently contains 12 different products. For example, OS2BorgerPC, which is a fork of Ubuntu running on many computers in public libraries, or OS2web, a content management system for municipality websites based on Drupal. The projects are released under the MPL 2.0, giving suppliers a number of ways to build their business.

Rasmus presented the next product in line at the general assembly: OS2cloud. It provides infrastructure that makes it easy for municipalities to deploy the OS² products from a web interface based on Origo. This also makes it easy for municipalities to self-host non-OS² open source software like Piwik for web analytics, instead of relying on external services such as Google Analytics with possible tracking and privacy issues.


Networking at the OS² general assembly (CC-BY-SA 4.0).

The talks at the OS² assembly demonstrated the many benefits of developing IT infrastructure around an open source model. One municipality told how it had found out about incidents of public library computers being key-logged. The perpetrator had done this by inserting USB hardware between the keyboard and the computer which logged user input. In response, the municipality hired a supplier to patch OS2BorgerPC so that library staff would be notified of any insertion or disconnection of USB devices, a patch which every other municipality deploying OS2BorgerPC would subsequently benefit from.

The openness, and the fact that the OS² association maintains ownership of the produced code, also means that municipalities have a wider range of suppliers to choose from for support and development. Compared to traditional license-based products, this minimizes the risk of vendor lock-in and enables shared infrastructure across municipalities. For suppliers this potentially creates opportunities for consistent income, new market possibilities and closer collaboration between municipality and supplier.


Debate panel between suppliers and municipalities (CC-BY-SA 4.0).

The general assembly ended with networking and a panel debate between suppliers and municipalities. The debate brought up a number of interesting challenges, one being the transition from the traditional culture of selling software in license form. Concerns are raised in the industry about whether a business really can be built on developing open source software and why “the free market can’t be used to solve this” (although IMO, a free market is exactly what open source in this case creates). There is a need for current open source suppliers to spread awareness in the industry of the new models on which an IT business can be built. On the other hand, the suppliers raised concerns about the mindset of some municipalities. They asked that the OS² association emphasize to municipalities that software being open source does not mean you get free support the same way you might with some license-based products. Expenses should be calculated for continuous maintenance and software development.

Initiatives like OS² excite me in many aspects. From a political perspective I think spending tax payers’ money on technology which is then released back to the public under an open license makes a ton of sense. It creates possibilities, not only for creating a fair market, but also for education and labor. The publicly available code enables studying and knowledge sharing for students like me and hobby groups like Open Source Aalborg. From an ethical perspective I further find the transparency which comes with public code appealing for addressing questions of privacy and data collection. Finally, from a broader perspective I believe knowledge-sharing initiatives like OS² can advance technology at a much faster pace.

May 24, 2017

Container secrets

I recently spent some time tracking down a problem with GTK+ containers and drawing. Here is what I found out.

CSS drawing

In GTK+ 3.22, most, if not all, containers support the full CSS drawing model with multiple layers of backgrounds and borders. This is how, for example, GtkFrame draws its frame nowadays. But also containers that normally only arrange their children, such as GtkBox, can draw backgrounds and borders. The possibilities are endless!

For example, we can use a GtkBox to put a frame around a list box and a label, to make the label visually appear as part of the list. You can even make it colorful and fun, using some CSS like:

box.frame {
 border: 5px solid magenta;
}
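
For reference, here is a minimal sketch (GTK+ 3 C API; the widget variables and label text are made up) of putting a list box and a label into such a box and giving it the frame style class:

/* Hypothetical sketch: a GtkBox with the "frame" style class wrapping a
 * list box and a label, so the label visually appears to be part of the list. */
GtkWidget *box = gtk_box_new (GTK_ORIENTATION_VERTICAL, 0);
GtkWidget *list = gtk_list_box_new ();
GtkWidget *label = gtk_label_new ("Add an ingredient…");

gtk_style_context_add_class (gtk_widget_get_style_context (box), "frame");

gtk_container_add (GTK_CONTAINER (box), list);
gtk_container_add (GTK_CONTAINER (box), label);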

Allocation and resizing

Traditionally, most containers in GTK+ are not doing any drawing of their own and just arrange their children, and thus there is no real need for them to do a full redraw when their size changes – it is enough to redraw the children. This is what gtk_container_set_reallocate_redraws() is about. And it defaults to FALSE in GTK+ 3, since we did not want to risk adding excessive redraws  whenever allocations change.

You can see where this is going: If I use the delete button to remove Butter and Salt from the list of ingredients, the allocation of the list, and thus of the box around it, will shrink, and we get a redraw problem.

The solution

If you plan to make plain layout containers draw backgrounds or borders, make sure to set reallocate-redraws to TRUE for the right widgets (in this case, the parent of the fun box).

gtk_container_set_reallocate_redraws (GTK_CONTAINER (parent), TRUE);

Note that gtk_container_set_reallocate_redraws() is deprecated in GTK+ 3.22, since we will get rid of it in GTK+ 4 and do the right thing automatically. But that shouldn’t stop you from using it to fix this issue.

Another (and maybe better) alternative is to use a container that is meant to draw a border, such as GtkFrame.

Formatting a new exFAT USB on Fedora

I have a new 64 GB USB drive and it did not show up at first:

Thanks to this video I typed fdisk -l, and then I was able to see the 58.2 GB drive:

I tried to install the exfat package with dnf -y install fuse-exfat, but that failed:

What I did after many failed attempts was to set up the partition using the GUI:

Then you can see the new partition formatted as Ext4:

It is OK to have a little free space with no filesystem. Now it is time to write to the USB drive:

Now we can see the USB device in the list of devices 😀

Screenshot from 2017-05-24 13:33:06



May 23, 2017

Please run for GNOME Board

You have two more days to announce your candidacy for the upcoming Board term.
Are you a member of the GNOME Foundation? Please consider running for Board.

Serving on the Board is a great way to contribute to GNOME, and it doesn't take a lot of your time. The GNOME Board of Directors meets every week via a one-hour phone conference to discuss various topics about the GNOME Foundation and GNOME. In addition, individual Board members may volunteer to take on actions from meetings—usually to follow up with someone who asked the Board for action, such as a funding request.

At least two current Board members have decided not to run again this year. (I am one of them.) So if you want to run for the GNOME Foundation Board of Directors, this is an excellent opportunity!

If you are planning on running for the Board, please be aware that the Board meets 2 days before GUADEC begins to do a formal handoff, plan for the upcoming year, and meet with the Advisory Board. GUADEC 2017 is 28 July to 2 August in Manchester, UK. If elected, you should plan on attending meetings this year on 26 and 27 July in Manchester, UK.

To announce your candidacy, just send an email to foundation-announce that gives your name, your affiliation (who you work for), and a few sentences about your background and interest in serving on the Board.

GSoC 2017 : GNOME Logs

Hello GNOME’ers,

Hope all of you are doing well 🙂

In this blog post, I want to introduce you to my GSoC project on GNOME Logs for this year. Last year, GNOME Logs saw the addition of a search popover and many improvements to its backend in terms of searching functionality. Moreover, some initial work regarding a Shell search provider for GNOME Logs was also done. I would also like to tell you that I was the GSoC 2016 intern on GNOME Logs and I must say, it was a summer well spent! I got acquainted with the vibrant and diverse GNOME community and, on the other hand, improved my skills in open source software development too, thanks to the valuable guidance given by my mentor, David King. This year, I will be mentored by David King and Jonathan Kang, two awesome people who are also the maintainers of GNOME Logs.

Regarding this year's improvements to GNOME Logs, I will be working on the following tasks:

  1. Move sorting by timestamp functionality from the frontend to backend.
  2. Compress similar logs shown in the event list.
  3. Write a shell search provider for GNOME Logs so that search results can be exposed to GNOME Shell.
  4. Write unit tests for testing the user interface and search functionalities.

Here, I will briefly explain the first two tasks and the progress made on them. The first task is mostly about transferring the computational overhead to the model, which is the actual backend in GNOME Logs. Currently, the list of events shown in the user interface is sorted by timestamp by GtkListBox using a GtkListBoxSortFunc. The main motive of moving the sorting functionality to the model is to reduce the complexity of the Logs frontend and make it simple. A challenging task here was to get the GListModel interface (which is implemented by the GNOME Logs model class) to return the entries filled in the model array in reverse order. It took me almost six months to figure out a correct approach for doing so. It would not have been possible without the guidance from my mentors and Lars Karlitski. Progress regarding the enhancement can be tracked on this bugzilla bug.
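
To illustrate the reverse-order part (a hypothetical sketch, not the actual patch), the model's GListModel get_item implementation can simply index its internal array from the end, so that position 0 always maps to the newest entry:

/* Hypothetical sketch: a GListModel implementation that stores events in a
 * GPtrArray in insertion order but hands them out newest-first. */
static gpointer
gl_event_model_get_item (GListModel *list, guint position)
{
  GlEventModel *self = GL_EVENT_MODEL (list);   /* hypothetical model type */
  guint n_items = self->events->len;            /* hypothetical GPtrArray member */

  if (position >= n_items)
    return NULL;

  return g_object_ref (g_ptr_array_index (self->events, n_items - 1 - position));
}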

The second task is related to improvements in the user interface and is crucial for GNOME Logs from a usability point of view. Currently, many adjacent events shown in the events list are either exact copies of each other or come from the same process, which logs some unnecessary events. This clutters the event list and makes it hard for a user to see the events which are of prime importance to them. So, the aim of this task is to compress these similar adjacent events. Allan Day of the GNOME design team has developed an excellent mockup to present the compressed events in an intuitive way. I would like to post his mockup here:

To keep things simple initially, adjacent events will be compressed if the first word in their respective messages is the same, which takes care of exact adjacent duplicates too. This similarity criterion will of course be extended in the future to include other types of similar messages.
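
A tiny sketch of that initial check (hypothetical code, not the actual patch), comparing just the first word of two adjacent messages:

/* Hypothetical sketch: two adjacent events are considered similar if the
 * first word of their messages is the same. */
#include <glib.h>
#include <string.h>

static gboolean
messages_are_similar (const gchar *message_a, const gchar *message_b)
{
  gsize first_word_a = strcspn (message_a, " ");
  gsize first_word_b = strcspn (message_b, " ");

  return first_word_a == first_word_b &&
         strncmp (message_a, message_b, first_word_a) == 0;
}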

As of now, I have implemented some part of the mockup except the popover. A video of the working prototype can be seen here. I will be ready with the patches in the coming weeks and further progress regarding the event compression enhancement can be tracked on this bugzilla bug.

That’s all for now. Let me know in the comments what you think of this year’s GSoC improvements to GNOME Logs. We will meet again soon.

Regards.


Manchester

Last night a suicide attack took place in Manchester killing at least 22 people. I don’t have much to comment on that apart from that everyone’s thoughts are with those who have been injured or lost friends and family to the attack, and to quote a friend of mine:

If you think you can sow disunity in Manchester with a bomb, you don’t know Manchester.


xinput list shows a "xwayland-pointer" device but not my real devices and what to do about it

TLDR: If you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging/configuration with xinput will not work.

For many years, the xinput tool has been a useful tool to debug configuration issues (it's not a configuration UI btw). It works by listing the various devices detected by the X server. So a typical output from xinput list under X could look like this:


:: whot@jelly:~> xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=22 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=23 [slave pointer (2)]
⎜ ↳ ELAN Touchscreen id=20 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
    ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
    ↳ Power Button id=6 [slave keyboard (3)]
    ↳ Video Bus id=7 [slave keyboard (3)]
    ↳ Lid Switch id=8 [slave keyboard (3)]
    ↳ Sleep Button id=9 [slave keyboard (3)]
    ↳ ThinkPad Extra Buttons id=24 [slave keyboard (3)]
Alas, xinput is scheduled to go the way of the dodo. More and more systems are running a Wayland session instead of an X session, and xinput just doesn't work there. Here's an example output from xinput list under a Wayland session:

$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ xwayland-pointer:13 id=6 [slave pointer (2)]
⎜ ↳ xwayland-relative-pointer:13 id=7 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
    ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
    ↳ xwayland-keyboard:13 id=8 [slave keyboard (3)]
As you can see, none of the physical devices are available, the only ones visible are the virtual devices created by XWayland. On a Wayland session, the X server doesn't have access to the physical devices. Instead, it talks via the Wayland protocol to the compositor. This image from the Wayland documentation shows the architecture:
In the above graphic, devices are known to the Wayland compositor (1), but not to the X server. The Wayland protocol doesn't expose physical devices; it merely provides a 'pointer' device, a 'keyboard' device and, where available, touch and tablet tool/pad devices (2). XWayland wraps these into virtual devices and provides them via the X protocol (3), but they don't represent the physical devices.

This usually doesn't matter, but when it comes to debugging or configuring devices with xinput we run into a few issues. First, configuration via xinput usually means changing driver-specific properties but in the XWayland case there is no driver involved - it's all handled by libinput inside the compositor. Second, debugging via xinput only shows what the wayland protocol sends to XWayland and what XWayland then passes on to the client. For low-level issues with devices, this is all but useless.

The takeaway here is that if you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging with xinput will not work. If you're trying to configure a device, use the compositor's configuration system (e.g. gsettings). If you are debugging a device, use libinput-debug-events. Or compare the behaviour between the Wayland session and the X session to narrow down where the failure point is.

May 22, 2017

Updating Logitech Hardware on Linux

Just over a year ago Bastille security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

  • Pair new devices with the receiver without user prompting
  • Inject keystrokes, covering various scenarios
  • Inject raw HID commands

This gave an attacker with $15 of hardware the ability to basically take over remote PCs within wireless range, which could be up to 50m away. This makes sitting in a café quite a dangerous thing to do when any affected hardware is inserted, which for the unifying dongle is quite likely as it’s explicitly designed to remain in an empty USB socket. The main manufacturer of these devices is Logitech, but the hardware is also supplied to other OEMs such as Amazon, Microsoft, Lenovo and Dell where they are re-badged or renamed. I don’t think anybody knows the real total, but by my estimations there must be tens of millions of affected-and-unpatched devices being used every day.

Shortly after this announcement, Logitech prepared an update which mitigated some of these problems, and then again a few weeks later prepared another update that worked around and fixed the various issues exploited by the malicious firmware. Officially, Linux isn’t a supported OS by Logitech, so to apply the update you had to start Windows, and download and manually deploy a firmware update. For people running Linux exclusively, like a lot of Red Hat’s customers, the only choice was to stop using the Unifying products or try and find a Windows computer that could be borrowed for doing the update. Some devices are plugged in behind racks of computers forgotten, or even hot-glued into place and unremovable.

The MouseJack team provided a firmware blob that could be deployed onto the dongle itself, and didn’t need extra hardware for programming. Given the cat was now “out of the bag” on how to flash random firmware to this proprietary hardware I asked Logitech if they would provide some official documentation so I could flash the new secure firmware onto the hardware using fwupd. After a few weeks of back-and-forth communication, Logitech released to me a pile of documentation on how to control the bootloader on the various different types of Unifying receiver, and the other peripherals that were affected by the security issues. They even sent me some of the affected hardware, and gave me access to the engineering team that was dealing with this issue.

It took a couple of weeks, but I rewrote the previously-reverse-engineered plugin in fwupd with the new documentation so that it could update the hardware exactly according to the official documentation. This now matches 100% the byte-by-byte packet log compared to the Windows update tool. Magic numbers out, #define’s in. FIXMEs out, detailed comments in. Also, using the documentation means we can report sensible and useful error messages. There were other nuances that were missed in the RE’d plugin (for example, making sure the specified firmware was valid for the hardware revision), and with the blessing of Logitech I merged the branch to master. I then persuaded Logitech to upload the firmware somewhere public, rather than having to extract the firmware out of the .exe files from the Windows update. I then opened up a pull request to add the .metainfo.xml files which allow us to build a .cab package for the Linux Vendor Firmware Service. I created a secure account for Logitech and this allowed them to upload the firmware into a special testing branch.

This is where you come in. If you would like to test this, you first need a version of fwupd that is able to talk to the hardware. For this, you need fwupd-0.9.2-2.fc26 or newer. You can get this from Koji for Fedora.

Then you need to change the DownloadURI in /etc/fwupd.conf to the testing channel. The URI is in the comment in the config file, so no need to list it here. Then reboot, or restart fwupd. Then you can either just launch GNOME Software and click Install, or you can type on the command line fwupdmgr refresh && fwupdmgr update — soon we’ll be able to update more kinds of Logitech hardware.

If this worked, or you had any problems, please leave a comment on this blog or send me an email. Thanks should go to Red Hat for letting me work on this for so long, and even more thanks to Logitech for making it possible.

Tracker 💙 Meson

A long time ago I started looking at rewriting Tracker’s build system using Meson. Today those build instructions landed in the master branch in Git!

Meson is becoming pretty popular now so I probably don’t need to explain why it’s such a big improvement over Autotools. Here are some key benefits:

  • It takes 2m37s for me to build from a clean Git tree with Autotools,  but only 1m08s with Meson.
  • There are 2573 lines of meson.build files, vs. 5013 lines of Makefile.am, a 2898 line configure.ac file, and various other bits of debris needed for Autotools
  • Only compile warnings are written to stdout by default, so they’re easy to spot
  • Out of tree builds actually work

Tracker is quite a challenging project to build, and I hit a number of issues in Meson along the way plus a few traps for the unwary.

We have a huge number of external dependencies — Meson handles this pretty neatly, although autodetection of backends requires a bit of boilerplate.

There’s a complex mix of Vala and C code in Tracker, including some libraries that are written in both. The Meson developers have put a lot of work into supporting Vala, which is much appreciated considering it’s a fairly niche language and in fact the only major problem we have left is something that’s just as broken with Autotools: failing to generate a single introspection repo for a combined C + Vala library

Tracker also has a bunch of interdependent libraries. This caused continual problems because Meson does very little deduplication in the commandlines it generates, and so I’d get combinatorial explosions hitting fairly ridiculous errors like commandline too long (the limit is 262KB) or too many open files inside the ld process. This is a known issue. For now I work around it by manually specifying some dependencies for individual targets instead of relying on them getting pulled in as transitive dependencies of a declare_dependency target.

A related issue was that if the same .vapi file ends up on the valac commandline more than once it would trigger an error. This required some trickery to avoid. New versions of Meson work around this issue anyway.

One pretty annoying issue is that generated files in the source tree cause Meson builds to fail. Out of tree builds seem to not work with our Autotools build system — something to do with the Vala integration — with the result that you need to make clean before running a Meson build even if the Meson build is in a separate build dir. If you see errors about conflicting types or duplicate definitions, that’s probably the issue. While developing the Meson build instructions I had a related problem of forgetting about certain files that needed to be generated because the Autotools build system had already generated them. Be careful!

Meson users need to be aware that the rpath is not set automatically for you. If you previously used Libtool you probably didn’t need to care what an rpath was, but with Meson you have to manually set install_rpath for every program that depends on a library that you have installed into a non-standard location (such as a subdirectory of /usr/lib). I think rpaths are a bit of a hack anyway — if you want relocatable binary packages you need to avoid them — so I like that Meson is bringing this implementation detail to the surface.

There are a few other small issues: for example we have a Gtk-Doc build that depends on the output of a program, which Meson’s gtk-doc module currently doesn’t handle so we have to rebuild that documentation on every build as a workaround. There are also some workarounds in the current Tracker Meson build instructions that are no longer needed — for example installing generated Vala headers used to require a custom install script, but now it’s supported more cleanly.

Tracker’s Meson build rules aren’t quite ready for prime time: some tests fail when run under Meson that pass when run under Autotools, and we have to work out how best to create release tarballs. But it’s pretty close!

All in all this took a lot longer to achieve than I originally hoped (about 9 months of part-time effort), but in the process I’ve found some bugs in both Tracker and Meson, fixed a few of them, and hopefully made a small improvement to the long process of turning GNU/Linux users into GNU/Linux developers.

Meson has come a long way in that time and I’m optimistic for its future. It’s a difficult job to design and implement a new general purpose build system (plus project configuration tool, test runner, test infrastructure, documentation, etc. etc), and the Meson project have done so in 5 years without any large corporate backing that I know of. Maintaining open source projects is often hard and thankless. Ten thumbs up to the Meson team!


Announcing new high-level PKCS#11 HSM support for Python

Recently I’ve been working on a project that makes use of Thales HSM devices to encrypt/decrypt data. There’s a number of ways to talk to the HSM, but the most straight-forward from Linux is via PKCS#11. There were a number of attempts to wrap the PKCS#11 spec for Python, based on SWIG, cffi, etc., but they were all (a) low level, (b) not very Pythonic, (c) have terrible error handling, (d) broken, (e) inefficient for large files and (f) very difficult to fix.

Anyway, given that nearly all documentation on how to actually use PKCS#11 has to be discerned from C examples and thus I’d developed a pretty good working knowledge of the C API, and I’ve wanted to learn Cython for a while, I decided I’d write a new binding based on a high level wrapper I’d put into my app. It’s designed to be accessible, pick sane defaults for you, use generators where appropriate to reduce work, stream large files, be introspectable in your programming environment and be easy to read and extend.

https://github.com/danni/python-pkcs11

It’s currently a work in progress, but it’s now available on pip. You can get a session on a device, create a symmetric key, find objects, encrypt and decrypt data. The Cryptoki spec is quite large, so I’m focusing on the support that I need first, but it should be pretty straightforward for anyone who wanted to add something else they needed. I like to think I write reasonably clear, self-documenting code.

At the moment it’s only tested on SoftHSMv2 and the Thales nCipher Edge, which is what I have access to. If someone at Amazon wanted this to work flawlessly on CloudHSM, send me an account and I’ll do it :-P Then I can look at releasing my Django integrations for fields, storage, signing, etc.

Computer discoveries from February 2016

I found a text file named TIL.md lying around on my computer, with one section dated 17th February 2016. Apparently I’d planned to keep a log of the weird or interesting computer things I learned each day, but forgot after a day. I’d also forgotten all the facts in the file and was surprised afresh. Maybe you’ll be surprised too:

  • Windows’ shell and user interface do not support filenames with trailing spaces, so if you have a directory called worstever.christmas˽ (where ˽ represents a space) on your Unix fileserver, and serve it to Windows over SMB, you’ll see a filename like CQHNYI~0. I think this is the DOS-style 8.3 compatibility filename but I’m not sure where it gets generated in this case – Samba?
  • TIFF files can contain multiple images.
  • If you have a multi-subfile TIFF, multi.tiff, and run convert multi.tiff multi.jpeg, you will not get back a file called multi.jpeg; convert will silently assume you meant convert multi.tiff multi-%d.jpeg and give you back multi-0.jpeg, multi-1.jpeg, etc.

For some context: at the time, I was trying to work out why a script that imported a few tens of thousands of photographs into pan.do/ra – which doesn’t like TIFFs – had skipped some photographs, and imported others as a blank white rectangle; and why a Windows application pointed at the same fileserver showed a different number of photographs again. This was also the first time I encountered an inadvertent homoglyph attack: x.jpg and х.jpg are indistinguishable in most fonts.

The GJS documentation is back

We have once again a set of accurate, up-to-date documentation for GJS. Find it at devdocs.baznga.org!

Many thanks are due to Everaldo Canuto, Patrick Griffis, and Dustin Falgout for helping get the website back online, and to Nuveo for sponsoring the hosting.

In addition, thanks to Patrick’s lead, we have a docker image if you want to run the documentation site yourself.

If you find any inaccuracies in the documentation, please report a bug at this issue tracker.


May 21, 2017

GIMP rocks!

One problem that I used to have is hiding the two main dock windows, Toolbox-Tool Options and Layers-Brushes, which makes my GIMP seem to have no accessible icons on it.

Go to Windows -> Dockable Dialogs to have them back:

After cutting the edges of the original picture using the Scissors Select tool:

I added another layer with a full black rectangle using Edit – Fill with FG Color (with black as the foreground color):

I also added another horizontal rectangle, and I used the Text tool to add words:

To convert to black and white, I used Colors – Components – Channel Mixer:

Check the Monochrome option. To add color, I selected Colors – Colorize:

Blurring the edge of my hair gave a nice look at the end.

Last but not least, I added the GNOME logo on another layer to put it inside my eye 😉

* Zooming out was done with SHIFT +, and ESC in case CTRL+Z is not working.



Frogr 1.3 released

Quick post to let you know that I just released frogr 1.3.

This is mostly a small update to incorporate a bunch of updates in translations, a few changes aimed at improving the flatpak version of it (the desktop icon has been broken for a while until a few weeks ago) and to remove some deprecated calls in recent versions of GTK+.

Ah! I’ve also officially dropped support for OS X via gtk-osx, as I had been systematically failing to update and use it (I only use frogr from GNOME these days) for a loooong time, and so it did not make sense for me to keep pretending that the Mac version is something that is usable and maintained anymore.

As usual, you can go to the main website for extra information on how to get frogr and/or how to contribute to it. Any feedback or help is more than welcome!

 

May 19, 2017

Fractional scaling goes east

When we introduced HiDPI support in GNOME a few years ago, we took the simplest possible approach that was feasible to implement with the infrastructure we had available at the time.

Some of the limitations:

  • You either get 1:1 or 2:1 scaling, nothing in between
  • The cut-off point is somewhat arbitrarily chosen and you don’t get a say in it
  • In multi-monitor systems, all monitors share the same scale

Each of these limitations had technical reasons. For example, doing different scales per-monitor is hard to do as long as you are only using a single, big framebuffer for all of them. And allowing scale factors such as 1.5 leads to difficult questions about how to deal with windows that have a size like 640.5×480.5 pixels.

Over the years, we’ve removed the technical obstacles one-by-one, e.g. introduced per-monitor framebuffers. One of the last obstacles was the display configuration API that mutter exposes to the control-center display panel, which was closely modeled on XRANDR, and not suitable for per-monitor and non-integer scales. In the last cycle, we introduced a new, more suitable monitor configuration API, and the necessary support for it has just landed in the display panel.

With this, all of the hurdles have been cleared away, and we are finally ready to get serious about fractional scaling!

Yes, a hackfest!

Jonas and Marco happen to both be in Taipei in early June, so what better to do than to get together and spend some days hacking on fractional scaling support:

https://wiki.gnome.org/Hackfests/FractionalScaling2017

If you are a compositor developer (or plan on becoming one), or just generally interested in helping with this work, and are in the area, please check out the date and location by following the link. And, yes, this is a bit last-minute, but we still wanted to give others a chance to participate.

Further experiments in Meson

Meson is definitely getting more traction in GNOME (and other projects), with many components adding support for it in parallel to autotools, or outright switching to it. There are still bugs, here and there, and we definitely need to improve build environments — like Continuous — to support Meson out of the box, but all in all I’m really happy about not having to deal with autotools any more, as well as being able to build the G* stack much more quickly when doing continuous integration.

Now that GTK+ has added Meson support, though, it’s time to go through the dependency chain in order to clean up and speed up the build in the lower bits of our stack. After an aborted attempt at porting GdkPixbuf, I decided to port Pango.

All in all, Pango proved to be an easy win; it took me about one day to port from Autotools to Meson, and most of it was mechanical translation from weird autoconf/automake incantations that should have been removed years ago [1]. Most of the remaining bits were:

  • ensuring that both Autotools and Meson would build the same DSOs, with the same symbols
  • generating the same introspection data and documentation
  • installing tests and data in the appropriate locations

Thanks to the ever vigilant eye of Nirbheek Chauhan, and thanks to the new Meson reference, I was also able to make the Meson build slightly more idiomatic than a straight, 1:1 port would have done.

The results are a full Meson build that takes about the same time as ./autogen.sh to run:

* autogen.sh:                         * meson
  real        0m11.149s                 real          0m2.525s
  user        0m8.153s                  user          0m1.609s
  sys         0m2.363s                  sys           0m1.206s

* make -j$(($(nproc) + 2))            * ninja
  real        0m9.186s                  real          0m3.387s
  user        0m16.295s                 user          0m6.887s
  sys         0m5.337s                  sys           0m1.318s

--------------------------------------------------------------

* autotools                           * meson + ninja
  real        0m27.669s                 real          0m5.772s
  user        0m45.622s                 user          0m8.465s
  sys         0m10.698s                 sys           0m2.357s

Not bad for a day’s worth of work.

My plan would be to merge this in the master branch pretty soon; I also have a branch that drops Autotools entirely but that can wait a cycle, as far as I’m concerned.

Now comes the hard part: porting libraries like GdkPixbuf, ATK, gobject-introspection, and GLib to Meson. There’s already a GLib port, courtesy of Centricular, but it needs further testing; GdkPixbuf is pretty terrible, since it’s a really old library; I don’t expect ATK and GObject introspection to be complicated, but the latter has a non-recursive Make layout that is full of bees.

It would be nice to get to GUADEC and have the whole G* stack build with Meson and Ninja. If you want to help out, reach out in #gtk+, on IRC or on Matrix.


  1. The Windows support still checks for GCC 2.x or 3.x flags, for instance. 

Can't make GUADEC this year

This year, the GNOME Users And Developers European Conference (GUADEC) will be hosted in beautiful Manchester, UK between 28th July and 2nd August. Unfortunately, I can't make it. I missed last year, too. The timing is not great for me.

I work in local government, and just like last year, GUADEC falls during our budget time at the county. Our county budget is set every two years. That means during an "on" year, we make our budget proposals for the next two years. In the "off" year, we share a budget status.

I missed GUADEC last year because I was giving a budget status in our "off" year. And guess what? This year, department budget presentations again happen during GUADEC.

During GUADEC, I'll be making our county IT budget proposal. This is our one opportunity to share with the Board our budget priorities for the next two years, and to defend any budget adjustment. I can't miss this meeting.

3.26 Developments

My approach to development can often differ from my peers. I prefer to spend the early phase of a cycle doing lots of prototypes of various features we plan to implement. That allows me to have the confidence necessary to know early in the cycle what I can finish and where to ask for help.

We have some big stuff coming this cycle.

Panel Engine Revamp

Allan has been working on some major design work in how our panels and documents work. This has been needed for some time and things are looking good. To keep up with this, I’ve been doing some major improvements to panel-gtk, our panel engine. I managed to shake out a few bugs in the process and those fixes have made their way into the gnome-builder-3-24 branch.

The test program is not much to look at, but we have some necessary plumbing in place to do new things.

Shortcut Engine and Key Themes

Furthermore, I’ve been building a new shortcut engine to do the more advanced features we need. Gtk Shortcut Engine (GSE) provides plumbing for applications that need complex features such as multi-key “chords”, keyboard themes, and custom overrides by users. Many of you have asked for this in Builder, and I’m confident in saying it is coming for 3.26. You can find the work in progress in the shortcut-engine repository.

Ultimately I had to import a copy of the upstream’d GtkShortcutsWindow (based on what we wrote for Builder in 3.20) so that we could support chords. So the code-base looks bigger than it really is. The primary design (besides the keyboard themes) is the concept of a GseShortcutController and GseShortcutContext. These two things allow us to do some fun stuff like emacs-style “minor modes” as well as Vim-style modal keybindings.

I expect this to allow us to cleanup our Vim emulation quite a bit. It also solves some of our outstanding problems with keyboard shortcuts and unpredictable GAction activation. It’s really quite fundamental to how you’ll be interacting with Builder from a keyboard going forward.

Debugger

The big feature for 3.26 is the debugger. I have enough of a working prototype in place to have a reasonably good idea of what the moving parts are. As soon as we land the new shortcut engine and some of the panel updates I’ll be back finishing up that feature.

That’s it for now!

PostgreSQL date ranges in Django forms

Django’s postgres extensions support data types like DateRange, which is super useful when you want to query your database against dates; however, they have no form field to expose this in HTML.

Handily Django 1.11 has made it super easy to write custom widgets with complex HTML.

Start with a form field based off MultiValueField:

from django import forms
from psycopg2.extras import DateRange


class DateRangeField(forms.MultiValueField):
    """
    A date range
    """

    widget = DateRangeWidget  # DateRangeWidget is defined in the next snippet

    def __init__(self, **kwargs):
        fields = (
            forms.DateField(required=True),
            forms.DateField(required=True),
        )
        super().__init__(fields, **kwargs)

    def compress(self, values):
        try:
            lower, upper = values
            return DateRange(lower=lower, upper=upper, bounds='[]')
        except ValueError:
            return None

The other side of a form field is a Widget:

from django import forms
from psycopg2.extras import DateRange


class DateRangeWidget(forms.MultiWidget):
    """Date range widget."""
    template_name = 'forms/widgets/daterange.html'

    def __init__(self, **kwargs):
        widgets = (
            forms.DateInput(),
            forms.DateInput(),
        )
        super().__init__(widgets, **kwargs)

    def decompress(self, value):
        if isinstance(value, DateRange):
            return (value.lower, value.upper)
        elif value is None:
            return (None, None)
        else:
            return value

    class Media:
        css = {
            'all': ('//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/daterangepicker.min.css',)  # noqa: E501
        }

        js = (
            '//cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/jquery.daterangepicker.min.js',  # noqa: E501
        )

Finally we can write a template to use the jquery-date-range-picker:

{% for widget in widget.subwidgets %}
<input type="hidden" name="{{ widget.name }}"{% if widget.value != None %} value="{{ widget.value }}"{% endif %}{% include "django/forms/widgets/attrs.html" %} />
{% endfor %}

<div id='container_for_{{ widget.attrs.id }}'></div>

With a script block:

(function() {
    var format = 'D/M/YYYY';
    var isoFormat = 'YYYY-MM-DD';
    var startInput = $('#{{ widget.subwidgets.0.attrs.id }}');
    var endInput = $('#{{ widget.subwidgets.1.attrs.id }}');

    $('#{{ widget.attrs.id }}').dateRangePicker({
        inline: true,
        container: '#container_for_{{ widget.attrs.id }}',
        alwaysOpen: true,
        format: format,
        separator: ' ',
        getValue: function() {
            if (!startInput.val() || !endInput.val()) {
                return '';
            }

            var start = moment(startInput.val(), isoFormat);
            var end = moment(endInput.val(), isoFormat);

            return start.format(format) + ' ' + end.format(format);
        },
        setValue: function(s, start, end) {
            start = moment(start, format);
            end = moment(end, format);

            startInput.val(start.format(isoFormat));
            endInput.val(end.format(isoFormat));
        }
    });
})();

You can now use this DateRangeField in a form, retrieve it from cleaned_data for database queries, or store it in a model DateRangeField.

May 18, 2017

Plans for the next GNOME docs hackfest

The GNOME documentation team started planning the next docs hackfest after some (rather long) months of decreased activity on that front. The previous docs sprint was held in Cincinnati, OH, in 2015 and produced lots of content updates, and we’d like to repeat that experience this year, from August 14th through 16th, 2017.

As with the previous event, the 2017 docs sprint will happen right after the Open Help Conference, which is returning this year thanks to Shaun.

What we want to do differently this year is to extend the invitation to all people interested in GNOME content, whether it is upstream or downstream. We would especially like to see some Ubuntu folks attending. With Ubuntu moving to upstream GNOME, we are already seeing an increased number of docs patches coming from Ubuntu contributors, which is great, and I think having a joint documentation event could strengthen and expand the connections even more!

GNOME docs are a friendly bunch of people!

Interested? Let us know! I’ve set up a wiki page with details on the event where you can sign up and propose your own ideas for the agenda.

As always, you can find GNOME docs folks in #docs on irc.gnome.org.

Hope to see you all at the sprint!

May 17, 2017

Boost Graph Library: modernization

Modern C++ (C++11 and later) can greatly simplify generic templated C++ code. I’ve recently been playing around with modernizing the Boost Graph Library code. The experience is similar to how I modernized the libsigc++ code, though I have not gone nearly into that much depth yet. The BGL is currently a big messy jumble of code that isn’t getting much love, and modernizing it could start to let its accumulated wisdom shine through, while also freeing it of other boost dependencies.

Please note that this is just an experiment in my own fork that even I am not pursuing particularly seriously. These changes are not likely to ever be accepted into the BGL because it would mean requiring modern C++ compilers. Personally, I think that’s the only way to move forward. I also think that Boost’s monolithic release process, and lack of a real versioning strategy, holds its libraries back from evolving. At the least, I think only a small set of generally-useful libraries should be released together, and I think that set should drop anything that’s now been moved into the standard library. BGL seems to have been stagnant for the last decade, so there doesn’t seem to be much to lose.

I’ve modernized the example code (and also tried to remove the “using namespace boost” lines), and done some work to modernize the boost graph code itself.

At the least, liberal use of auto can let you delete lots of ugly type declarations that really exist just to make things compile rather than to provide useful clues to the reader. For instance, auto makes the example code less cluttered with magic incantations. Range-based for loops simplify more code – for instance, in the examples. The simpler code is then easier to understand, letting you see the useful work underneath the boilerplate.

I’ve jumped right into C++17 and used structured bindings (and in the examples) because they are particularly helpful with the BGL API, which has many methods that return the begin and end of a range inside a std::pair. This lets us avoid some more type declarations. However, in the examples, I used a make_range_pair() utility function in many places instead, so I could use a simple range-based for. I guess that the range library would provide a type to use instead, maybe as part of the standard library one day.

I also replaced most uses of the boost type traits (some from boost::mpl) with the std:: equivalents. For instance, std::is_same instead of boost::is_same. It should now be possible to remove the boost::mpl dependency completely with a little work.

I’ve also tried putting all of BGL in the boost::graph namespace, instead of just in the boost namespace, but the API currently expects application code to use “using namespace boost”, to make its generic API work, and this makes that even more obvious.

As a next step, I guess that the boost graph implementation code could be simplified much more by use of decltype(auto). As I was modernizing libsigc++, I sometimes found templates that were used only by specializations that were used only by typedefs that were no longer needed, because they could be replaced by decltype(auto) return types. You have to pull at the threads one by one.

May 16, 2017

New Simple Scan designs

Simple Scan is a great app and there’s a lot of love for it. It’s one of those reliable, indispensable tools that it would be hard to live without. Just because it’s great doesn’t mean that it can’t be improved, of course, and it was recently suggested that I take a look at its design.

Most of the improvements can be described as refinements. Right now there are a bunch of operations which can be a bit tricky to find, or which aren’t as obvious as they could be. There are also a few actions that are well and truly buried!

Let’s look at the mockups. First, the initial “ready to scan” state:

How it could look when scanning:

Finally, what a scanned document could look like:

The primary changes are the introduction of a sidebar for selecting a page and the use of an action bar at the bottom for performing page actions. This makes it much clearer which page is selected, which is important when you want to perform edits. It also helps to communicate the purpose of the edit buttons – right now crop and rotate are a little ambiguous, largely due to their placement.

There are some other smaller improvements. The scan button now communicates the image/text and single page/document feeder modes, which means that you don’t have to dig into the UI to find out what will happen when you click the button. Some options, like reorder pages, have been rescued from the relative obscurity of the app menu. The current “new document” action has been rebranded as “start again”, in order to communicate that it’s how you clear the current scan as well as start a new one.

I’ve also reworked the settings:

This takes an experimental approach to the brightness and contrast settings. To be able to use these settings, you really need feedback on what the different values look like in practice. To enable this, I’ve sketched out a test scan mode which produces samples using a range of settings. This allows the user to specify the settings by selecting the best sample.

Observant readers will notice a crop of new controls for features that don’t currently exist. This includes OCR text reading and editing, a zoom control, and a magic enhancement feature. These are mostly placeholders for features that Robert Ancell, the Simple Scan maintainer, would like to add at some point in the future.

As far as I’m aware, no one is lined up to implement these changes, so if anyone fancies taking a shot at them, that would be great. The initial changes are described in a couple of bug reports. For more details, you can also see the full mockups in all their warty glory.

Recently released applications in GNOME Software

By popular request, there is now a list of recently updated applications in the gnome-software overview page.

Upstream applications have to do two things to be featured here:

  1. Have an upstream release within the last 2 months (will be reduced as the number of apps increases)
  2. Have upstream release notes in the AppData file

Quite a few applications from XFCE, GNOME and KDE have already been including <release> tags and this visibility should hopefully encourage other projects to do the same. As a small reminder, the release notes should be small, and easily understandable by end users. You can see lots of examples in the GNOME Software AppData file. Any questions, please just email me or leave comments here. Thanks!

Improve focus and productivity by listening to different sounds

I have always had a hard time avoiding distractions during working hours. My hyperactive brain wants to wander off distracted by any kind of noise around me.

Lately I found out that having a background ambient sound, such as rain, wind, or a fireplace, really helps keep me free from distractions.

So, inspired by Noisli.com, I created a GNOME Shell extension with similar functionality.

It is quite simple but it does the job for me.

It is available for download at the extensions website: https://extensions.gnome.org/extension/1224/focusli/

Bug reports and feature requests are welcome at https://github.com/felipeborges/gnome-shell-extension-focusli/issues

May 15, 2017

Emulating the Rust borrow checker with C++ part II: the borrowing

The most perceptive among you might have noticed that the last blog post did not actually do any borrowing, only single owner semantics with moves. Let's fix that. But first a caveat.

As far as I can tell, it is not possible to emulate Rust's borrow checker fully in compile time with C++. You can get pretty close (for some features at least) but there is some runtime overhead and violations are detected only at runtime, not compile time. See the end of this post for a description why that is. Still, it's better than nothing.

At the core is a resource whose ownership we want to track. We want to allow either one accessor that can alter the object or many read-only accessors. We put the item inside a class that looks like this:

template<typename T>
class Owner final {
 private:
    T i;
    int roBorrows = 0;
    bool rwBorrow = false;
<rest of class omitted for brevity>

The holder object does not give out pointers or references to the underlying object. Instead you can only request either a read only or read-write (or mutable) reference to it. The read only function looks like this (the rw version is almost identical):

template<typename T>
RoBorrow<T> Owner<T>::borrowReadOnly() {
    if(rwBorrow) {
        throw std::runtime_error("Tried to create read only borrow when a rw borrow exists.");
    } else {
        return ::RoBorrow<T>(this); // increments borrow count
    }
}

Creating read only borrows only succeeds if there are no rw borrows. This code throws an exception but it could also call std::terminate or something else. A rw borrow would fail in a similar way if there are any ro borrows.

The interesting piece is the implementation of the proxy object RoBorrow<T>. It is the kind of move-only type that was described in part I. When it goes out of scope its destructor decrements the owner's borrow count:

~RoBorrow() { if (o) { o->decrementRoBorrows();} }

The magic lies in the conversion operator that is slightly different than the original version:

operator const T&() const { return owner->i; }

The conversion operator only gives out a const reference to the underlying object, which means only const operations can be invoked on it. Obviously this does not work for e.g. raw file descriptors, because a "const int" does not protect the state of the kernel fd object. In order to call non-const functions you first need to get an RwBorrow, whose conversion operator gives out a non-const reference, and Owner will only provide one if there are no other borrows outstanding.

When using this mechanism the following code fails (at runtime):

 auto b1 = owner.borrowReadOnly();
 auto b2 = owner.borrowReadWrite();
 
But this code works:

{
    auto b1 = owner.borrowReadOnly();
    auto b2 = owner.borrowReadOnly();
}
{
    auto b3 = owner.borrowReadWrite();
}
{
    auto b4 = owner.borrowReadOnly();
}

because the borrows go out of scope and are destroyed decrementing borrow counts to zero.
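
For comparison, this is roughly the behaviour being emulated. Here is a minimal Rust sketch of my own (not from the original post) showing that the equivalent overlapping borrows are rejected at compile time, while the properly scoped ones compile fine:

fn main() {
    let mut owner = String::from("resource");

    {
        let b1 = &owner;     // first read-only borrow
        let b2 = &owner;     // a second read-only borrow is fine
        println!("{} {}", b1, b2);
    }
    {
        let b3 = &mut owner; // exclusive borrow is fine once the others are gone
        b3.push_str("!");
    }

    let b1 = &owner;
    let b2 = &mut owner;     // error[E0502]: cannot borrow `owner` as mutable
                             // because it is also borrowed as immutable
    println!("{} {}", b1, b2);
}

The difference, of course, is that rustc refuses to compile the last three lines, whereas the C++ version above only notices the conflict when the code runs.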

Why can't this be done at compile time?

I'm not a TMP specialist so maybe it can but as far as I know it is not possible. I'd love to be proven wrong on this one. The issue is due to limitations of constexpr which can be distilled into this dummy function:

template<typename T>
void constexpr foo(bool b) {
    T x;
    constexpr int refcount = 1;
    if constexpr(b) {
        // do something
        --refcount;
    } else {
        // do something else
        --refcount;
    }
    static_assert(refcount == 0);
}

The static assert is obviously true no matter which code path is executed, but the compiler can't prove that. First of all, the if clause does not compile because its argument is not a compile-time constant. Second of all, the refcount decrements do not compile because constexpr variables can not be mutated during evaluation. You can create a new variable with the value refcount - 1, but it would go out of scope when the subblocks end and there is no phi-node-like concept to get the value out to the outer scope. Finally, destructors can not be constexpr, so even if we could mutate the count variable, it would not be done automatically (even though in theory the compiler has all the information it needs to do that).

Printing Improvements for Fedora 27 Workstation

Fedora 26 is not out yet, but it’s already time to think about how to improve the Workstation edition of Fedora 27. One of the areas my team is focusing on is printing (the desktop side of it). For GNOME 3.24 and Fedora 26 Workstation we landed a new interface for the printing module in GNOME Control Center. It gives a much cleaner overview of printers that are set up on your system.

One thing that I think deserves an improvement is printer sharing. GNOME Control Center doesn’t allow you to easily share a printer with other devices over the network. I’ve heard users complain about it and the competition provides it (even though Windows does it very unintuitively). Sharing via IPP is pretty low-hanging fruit because that’s what CUPS already supports perfectly; you just need to expose it in the UI.

A common use case is sharing a printer with your mobile devices. iOS uses AirPrint, which is an extension of IPP; you just need to convince the device that it’s talking to an AirPrint server. To support Android devices, I think the best way is to use Google Cloud Print. We already support Google Cloud Print, but from the client side. I wonder if it’d be useful to support the server side as well. Google provides an open source server implementation, but it’s written in Go and unnecessarily advanced for our use cases, so writing our own implementation would probably be a better way to go. But I wonder if it’d be worth it. Do people use Google Cloud Print? If not, how do you print from your Android device?

Or are there any other things you think we should improve in printing (desktop-wise)?


May 13, 2017

Emulating the Rust borrow checker with C++ move-only types

Perhaps the greatest thing to come out of C++11 is the notion of move semantics. Originally it was designed to make it efficient to return big objects like matrices from functions. In move operations the destination "steals the guts" of the source object rather than making a copy of it. Usually this means taking hold of some pointer to a reserved memory block and assigning the source's pointer to nullptr.

This mechanism is not reserved for pointers and can be used for any data type. As an example, let's write a move-only integer class. A typical use for this would be to store file descriptors to ensure that they are closed. The code is straightforward; let's go through it line by line:

class MoveOnlyInt final {
 private:
    int i;

    void release() { /* call to release function here */ }

This is basic boilerplate. The release function would typically be close, but can be anything.

 public:
    explicit MoveOnlyInt(int i) : i(i) {}
    ~MoveOnlyInt() { release(); }

We can construct this from a regular integer and destroying this object means calling the release function. Simple. Now to the actual meat and bones.

    MoveOnlyInt() = delete;
    MoveOnlyInt(const MoveOnlyInt &) = delete;
    MoveOnlyInt& operator=(const MoveOnlyInt &) = delete;

These declarations specify that this object can not be copied (copying is what C++ allows by default). Thus creating two objects that hold the same underlying integer is not possible unless you explicitly create two objects from the same bare integer.

    MoveOnlyInt(MoveOnlyInt &&other) { i = other.i; other.i = -1; }
    MoveOnlyInt& operator=(MoveOnlyInt &&other) { release(); i = other.i; other.i = -1; return *this; }

Here we define the move operators. A move means releasing the currently held integer (if it exists) and grabbing the source's integer instead. This is all we need to have a move only type. The last thing to add is a small helper function.

    operator int() const { return i; }
};

This means that this object is convertible to a plain integer. In practice this means that if you have a function like this:

void do_something(int x);

then you can call it like this:

MoveOnlyInt x(42);
do_something(x);

You could achieve the same by creating a get() function that returns the underlying integer but this is nicer and more usable. This is not quite as useful for integers but extremely nice when using plain C libraries with opaque structs. Then your move only wrapper class can be used directly when calling plain C functions.

What does this allow us to do?

All sorts of things. The object behaves in much the same way as Rust's movable types (but is not 100% identical). You can for example return it from a function, which transfers ownership in a compiler-enforced way:

MoveOnlyInt returnObject() {
    MoveOnlyInt retval(42);
    return retval;
}

If you try to pass an object as an argument like this:

int byValue(MoveOnlyInt mo);
...
byValue(mo);

a regular class would get copied but for a move-only type you get a compiler error:

./prog.cpp:39:13: error: call to deleted constructor of 'MoveOnlyInt'
    byValue(mo);
            ^~
../prog.cpp:13:5: note: 'MoveOnlyInt' has been explicitly marked deleted here
    MoveOnlyInt(const MoveOnlyInt &) = delete;
    ^
../prog.cpp:22:28: note: passing argument to parameter here
int byValue(MoveOnlyInt mo) {

Instead you have to explicitly tell the compiler to move the object:

byValue(std::move(mo));

Some of you might have spotted a potential issue. Since a MoveOnly object can be converted to an int and there is a constructor that takes an int, that could create two objects for the same underlying integer. Like this:

MoveOnlyInt mo(42);
MoveOnlyInt other(mo);

The compiler output looks like the following:

../prog.cpp:37:14: error: call to deleted constructor of 'MoveOnlyInt'
    MoveOnlyInt other(mo);
             ^     ~~
../prog.cpp:13:5: note: 'MoveOnlyInt' has been explicitly marked deleted here
    MoveOnlyInt(const MoveOnlyInt &) = delete;

The compiler prevents creating invalid objects.

The wrapper object has zero memory overhead compared to a plain int and code generation changes are minimal. The interested reader is encouraged to play around with the compiler explorer to learn what those are.

Is this just as good as native Rust then?

No. Rust's borrow checker does more and is stricter. For example in C++ you can use a moved-from object, which may yield a null pointer dereference if your underlying type was a pointer. You won't get a use-after-free error, though. On the other hand this class is 18 lines of code that can be applied to any existing C++ code base immediately whereas Rust is a whole new programming language, framework and ecosystem.
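
To illustrate the difference with a hedged sketch of my own (not from the post): the equivalent use of a moved-from value in Rust is rejected by the compiler rather than surfacing at runtime.

struct MoveOnlyFd {
    fd: i32,
}

fn by_value(_m: MoveOnlyFd) {}

fn main() {
    let m = MoveOnlyFd { fd: 42 };
    by_value(m);          // ownership moves into the function
    println!("{}", m.fd); // error[E0382]: use of moved value: `m`
}

In the C++ version, the moved-from MoveOnlyInt still exists and holds -1, so the same pattern compiles and the stale value is only noticed at runtime.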

Anyone is also free to do the wrong thing and take a copy of the integer value without telling anyone, but this issue remains in every other language as well, even in unsafe Rust.


May 11, 2017

2017-05-11 Thursday.

  • Mail, more admin, testing, chat with Thorsten.
  • Annoyed to see that Ahok goes to jail in Indonesia for two years for blasphemy, despite no prosecution case or real offence; pleased to see that Stephen Fry walks free, although he is beyond utterly wrong about God who as Psalm 89 tells us has Righteousness and justice as the foundation of his throne. Not a fan of blasphemy laws, discussing all things is important.
  • Poked at improving online unit tests.

Meson and GXml

After a call, Yannick pushed a patch to add the Meson build system to GXml. This is my first time using Meson and I really love it.

After a set of patches, I’ve managed to fix most installation and Unit Test integration.

Meson is well documented and provides a clean syntax.

Vala support is really good. In Autotools I’ve added some obscure rules to fix some old bugs; with Meson, GXml needs just a few, non-obscure, commands in order to build Vala documentation and GObject Introspection binary files.

Meson exposes a bug in TDocument parsing; the same test passes without error in Autotools. Using mesontest --gdb I was able to run tests in gdb, making things much more convenient to debug than the way I managed to debug in the past with Autotools.

Meson is really fast! This will speed up my development/test/back-to-development cycle.

Next step is to find a way to get GXml compiled under Visual Studio, but first Gee needs to get Meson support too.

May 10, 2017

NetworkManager 1.8: What’s new

Three and a half months (and some 700+ commits) after NetworkManager 1.6, we’re pleased to announce NetworkManager 1.8 is ready. This release is generally focused on fixing bugs and addressing usability annoyances, yet it delivers some new features as well. Let’s have a look!

Internet connectivity checking was significantly improved in the new release (image source: https://commons.wikimedia.org/wiki/File:Telefontornet6838150900.jpg)

Reliable daemon restarts

In general, NetworkManager is not something that is restarted too frequently. But when it is, chances are it will end up looking slightly confused. In particular, a different connection profile may appear to be active on a device than before the restart.

There is a reason for this. NetworkManager tries to leave the network interfaces configured on shutdown. This is done to prevent unpleasant surprises in the form of broken remote shell sessions or even network mounts.

On daemon restart, we’d like to pick up the existing configuration. The problem is that we don’t just want to pick up any existing configuration — we do our best not to mess with configuration created outside NetworkManager. We ended up with a rather complicated heuristic to determine the connection profiles that could be active. As it turned out, it was possible to end up in an ambiguous situation where our guess didn’t match the situation prior to the restart.

With NetworkManager 1.8 we’ve gotten rid of the guesswork. We save the runtime state to a file on shutdown and pick it up on startup, bringing the ambiguity to an end.

Better connectivity checking

NetworkManager periodically attempts to access a pre-configured web page in order to determine whether the host is able to access the Internet using its “best” default route. This is used to detect “captive portals” (typically wireless routers that hijack connectivity for the purposes of network authentication) so they can be dealt with in a secure manner; such routers often attempt ill-advised tricks, such as hijacking secure connections, practically conducting man-in-the-middle attacks.

In NetworkManager 1.8, connectivity checks are done for all connections with a default route. This allows us to do neat things, such as penalize connections that fail the check with a higher route metric. The practical consequence is that a wired connection that has a default route but no Internet connectivity will have a higher metric (and thus a lower priority) than a simultaneously active Wi-Fi connection that is connected to the Internet.

Aside from that, the connectivity checking now utilizes libcurl instead of libsoup, resulting in smaller dependency chains in typical small installations.

…and a lot more

We’ve made nmcli better: it is now easier to use in scripts and provides better error handling. NM now supports setting attributes for static routes. The dependency chain is smaller, with libgudev no longer being required and libsoup being replaced by libcurl. Support for mobile broadband devices, team devices, bonds, dummy links, and SR-IOV capable devices has seen improvements. More control over hostname updates is now allowed.

If you’d like to check for yourself, read the NEWS file or grab the new release!

2017-05-10 Wednesday.

  • Took E. to Addenbrooks in the morning; back, mail, built ESC stats, call, plugged through the task backlog. Read Firefox & Chrome's seccomp-bpf usage - which are not particularly obvious.

Maps news

It’s been a while since the last post here, so I thought I should share some news now.

3.24.2 was just released, and right before the release a nasty crash-on-exit bug appeared. Actually, the bug has been in there ever since Maps gained the ability to show your contacts’ addresses from GNOME Calendar/Evolution, but it was brought into daylight by the new version of GJS (our JavaScript engine, based on SpiderMonkey). The problem is that in the dispose vfunc of the ContactStore object (this is in our glue C code) we had forgotten to NULL out some pointer members when freeing the objects (with g_list_free and g_free); dispose can be called multiple times, and we probably got away with it before because GJS leaked these objects in earlier versions. We got this bug report from Ubuntu by the way; in 17.04 the new version of GJS is already used. Thanks to Emmanuele Bassi for spotting this use-after-free bug, which is now fixed in the new version (and in master of course).

Other than this, things have been a bit calm lately. But I have some goodies as candidates for 3.26 functionality.

The first one I thought I should show involves using the Wikipedia tag data from OpenStreetMap and Wikimedia’s thumbnailing API to obtain thumbnails for the map bubbles shown for search results:


Another thing I have sometimes missed is keyboard shortcuts to switch between street and aerial view, so here are a couple of new shortcuts for that (I chose Alt+1 and Alt+2 to match the ones used in Nautilus to switch between icon and list mode):


And finally, another idea we had before was being able to edit localized name variants in OpenStreetMap (this helps improve searchability for users in cases where the name of a place might differ a bit in different languages).

So, I modified the edit dialog a bit, so that instead of the delete button for the name field, there is now a “more stuff” button:


Clicking on it shows a page for editing variants of the name:

Here we have provisioning for giving an alternative name (such as a locally-known, unofficial name for a place), older/historic names of places, and the name in the user’s language (I also added a static English name field, since the English name variant is often used de facto in OpenStreetMap as a Romanized version in cases where the native name of a place is written in a non-Latin script). This feature might need some designer feedback.

Another feature that might be cool, and that I have been thinking about a bit, is showing upcoming departures for public transit stops (and maybe nearby stops when using your current position). There is not yet any concrete implementation of anything here, and this would also need some designer love.

And also, when speaking of transit, we’re still looking for options for hosting an OpenTripPlanner server instance (you still have to run your own and use either the service file override or the debug environment variable), so if you happen to have some ideas here, they are always welcome!

May 09, 2017

Intel AMT on wireless networks

More details about Intel's AMT vulnerability have been released - it's about the worst case scenario, in that it's a total authentication bypass that appears to exist independent of whether the AMT is being used in Small Business or Enterprise modes (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn't super bad, since Shodan indicated that there were only a few thousand machines on the public internet and accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn't a likely initial vector.

I've since done some more playing and come to the conclusion that it's rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option - if you simply provision AMT it won't be accessible over wireless by default, you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:
  1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it's been explicitly told about. Note that in default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it - there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you've installed the AMT support drivers) is not to do this.
  2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel's wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.
Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you're running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn't received a firmware update, they'll be able to do so without needing any valid credentials.

If you're a corporate IT department, and if you have AMT enabled over wifi, turn it off. Now.

[1] Assuming that the network doesn't block client to client traffic, of course


How I Learned to (Mostly) Love Private Internet Access

TL;DR: I’ve renewed my subscription to Private Internet Access, and intend to continue using the service indefinitely.

This is the third and final blog post in my series on Private Internet Access. Part One lists the different problems I encountered when trying to use Private Internet Access, and Part Two discusses how I solved most of them. When Part Two was published, my remaining unsolved problems were (a) extreme difficulty checking mail in Evolution, (b) my first attempt to connect always failed, and (c) I was blocked from freenode. A day after publishing the second post, I updated it to discuss how to get the first connection attempt to work (save your password system-wide so it’s accessible by the login screen… seems obvious in retrospect).

So what did I do about email and freenode?

Email

I’m really happy with my solution for email. The problem was that I experienced a very high number of timeout errors sending and receiving messages when using Private Internet Access, far more than when not using it. A PR representative from Private Internet Access told me I needed to ask them to whitelist our mail server for SMTP, but I knew that wasn’t the problem because it worked OK sometimes, and I was having trouble with IMAP too. Everything email-related was just so much slower when using Private Internet Access.

My solution was to uninstall Evolution and install Geary instead. I now wish I had done this a long time ago. Geary has many shortcomings and significant room for improvement, but I’ve never been more pleased with a mail client. With Geary, reading my mail is no longer painful. Whereas Evolution takes several seconds to load every individual message, and often times out and fails, even when not using Private Internet Access, Geary takes a few seconds to load an entire conversation, which speeds things up tremendously. Conversation view is killer, a real must-have for a mail client. More importantly, timeouts and error messages are extremely rare with Geary, even when using Private Internet Access. Probably the difference is that Geary just waits a lot longer before timing out. I did experience one day shortly after switching to Geary where I was unable to send any mail from my Igalia account, which at the time I attributed to Private Internet Access. However, I’ve had no trouble since then, so I think this was  just an intermittent problem.  Geary also has a much slicker user interface than Evolution. I’m not comfortable saying that Geary is going to be the future of mail for GNOME, since there is no question that Evolution is a far more capable client right now, but I’m very pleased with Geary and am looking forward to future development.

freenode

I’m really unhappy about my solution for freenode. If you use an  IRC client that has good support for NickServ or SASL authentication, then apparently there is nothing you need to do to access freenode besides configure that. However, neither Empathy nor Polari qualify here, and those are the only IRC clients that are interesting to me personally. With a little experimentation (and some help from Florian), I found both clients could be configured to authenticate with NickServ automatically. However, there’s no way to avoid being pestered with a private message in GNOME Shell from NickServ every time I connect, with the accompanying chat box to type my response. The Telepathy integration in GNOME Shell needs some serious work.

So I went a couple weeks where I rarely ever logged into freenode, using only the KiwiIRC web client when I needed to join for something specific, like for a meeting or to contact a specific person. Now, KiwiIRC is actually pretty nice and functional, but a web client doesn’t really meet my needs for daily use. In the end, I settled on connecting to freenode via Igalia’s Matrix server. Yes, I’m using Riot and, yes, that’s another web client, but I have to use it anyway, so it’s no difference to me.  Now, Matrix seems to be a really nice chat protocol, and Riot is at least decent as a Matrix client, but it is an extremely terrible IRC client. For one, there’s no way to tell who is online outside your own Matrix server. Seriously. (Why so many people are using it to access GIMPNet, I have no clue.) So I still log in to KiwiIRC whenever I need to check if someone is online on freenode, while continuing to connect to GIMPNet from Empathy, because — and I never thought I’d say this — Empathy at least works properly. This is a very poor solution, but it’s a worthwhile tradeoff for me to be able to use Private Internet Access. It also allows me to avoid the disastrous non-bug where Matrix silently drops any private messages that a non-authenticated IRC user sends to a Matrix bridge user on freenode. (Silently!) I’m told this is an intentional anti-spam feature, but I think it’s totally unacceptable. It just sounds to me like maybe Matrix should not be in the business of bridging to IRC at all, if it can’t figure out how to handle PMs. I’m not sure I’ve ever experienced any PM spam. But anyway, this is supposed to be a blog post about Private Internet Access, not a rant about Matrix’s IRC bridge, so that’s enough complaining about that.

Conspiracy Theory

So besides the fact that IRC is terrible, I’m pretty satisfied with Private Internet Access. You have to trust it, though. It’s not often that relatively small companies decide to spend tens of thousands of dollars sponsoring free software projects like GNOME. (Private Internet Access does that!) For all we know, it could be run by the NSA, seeking to gobble up the web browsing history of people who think they have something to hide, and donating to free software because it knows that free software users will recommend the service to more people. That’s a totally-unsubstantiated claim that I just made up and for which I have zero evidence to support, but I don’t know, and you don’t know, and that’s the point. You have to trust it. Or at least, you have to trust it relative to the level of trust you have in your ISP. But you probably shouldn’t trust your ISP, at least not if it’s a national company, so that makes Private Internet Access an easy choice for me.

Update: Read the first comment below. What on Earth is going on?

May 08, 2017

Rust Memory Management

In the light of my latest fascination with the Rust programming language, I've started to give small presentations about Rust at my office, since I'm not the only one at our company who is interested in Rust. My first presentation in Feb was a very general introduction to the language, but at that time I had not yet really used the language for anything real, so I was a complete novice myself and didn't have a very good idea of how memory management really works. While working on my gps-share project in my limited spare time, I came across quite a few issues related to memory management, but I overcame all of them with help from kind folks at the #rust-beginners IRC channel and the small but awesome Rust-GNOME community.

Having learnt some essentials of memory management, I thought I'd share my knowledge and experience with folks at the office. The talk was not well attended due to conflicts with other meetings at the office, but the few folks who attended were very interested and asked some interesting and difficult questions (i.e. the perfect audience). One of the questions was if I could put this up as a blog post, so here I am. :)

Basics


Let's start with some basics: In Rust,

  1. Stack allocation is preferred over heap allocation, and the stack is where everything is allocated by default.
  2. There are strict ownership semantics involved, so each value can have one and only one owner at a particular time.
  3. When you pass a value to a function, you move the ownership of that value to the function argument and similarly, when you return a value from a function, you pass the ownership of the return value to the caller.

Now these rules make Rust very secure, but at the same time, if you had no way to allocate on the heap or to share data between different parts of your code and/or threads, you couldn't get very far with Rust. So we're provided with mechanisms to (kinda) work around these very strict rules, without compromising the safety these rules provide. Let's start with some simple code that would work fine in many other languages:

fn add_first_element(v1: Vec<i32>, v2: Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(v1, v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This gives us an error from rustc:

error[E0382]: use of moved value: `v1`
--> sample1.rs:13:30
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v1` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

error[E0382]: use of moved value: `v2`
--> sample1.rs:13:37
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v2` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

What's happening is that we passed 'v1' and 'v2' to add_first_element() and hence passed their ownership to add_first_element() as well, so we can't use them afterwards. If Vec were a Copy type (like all primitive types), we wouldn't get this error, because Rust would copy the values for add_first_element() and pass those copies to it. In this particular case the solution is easy:

Borrowing


fn add_first_element(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(&v1, &v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This one compiles and runs as expected. What we did was to convert the arguments into reference types. References are Rust's way of borrowing ownership. So while add_first_element() is running, it borrows 'v1' and 'v2', but ownership stays with the caller and the values are available again after the call returns. Hence this code works.

While borrowing is very nice and very helpful, in the end it's temporary. The following code won't build:

struct Heli {
    reg: String
}

impl Heli {
    fn new(reg: String) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = "G-HONI".to_string();
    let heli = Heli::new(reg);

    println!("Registration {}", reg);
    heli.hover();
}

rustc says:

error[E0382]: use of moved value: `reg`
--> sample3.rs:20:33
|
18 | let heli = Heli::new(reg);
| --- value moved here
19 |
20 | println!("Registration {}", reg);
| ^^^ value used here after move
|
= note: move occurs because `reg` has type `std::string::String`, which does not implement the `Copy`

If String had the Copy trait implemented for it, this code would have compiled. But if efficiency is a concern at all for you (it is for Rust), you wouldn't want most values to be copied around all the time. We can't use a reference here, as Heli::new() above needs to keep the passed 'reg'. Also note that the issue here is not that 'reg' was passed to Heli::new() and used afterwards by Heli::hover(), but the fact that we tried to use 'reg' after we had given its ownership to the Heli instance through Heli::new().

I realize that the above code doesn't make use of borrowing, but if we were to make use of that, we'd have to declare lifetimes for the 'reg' field and the code still wouldn't work, because we want to keep the 'reg' in our Heli struct. There is a better solution here:

Rc


use std::rc::Rc;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    println!("Registration {}", reg);
    heli.hover();
}

This code builds and runs successfully. Rc stands for "Reference Counted", so putting data into this generic container adds reference counting to the data in question. Note that while you have to explicitly call the clone() method of Rc to increment its refcount, you don't need to do anything to decrease the refcount. Each time an Rc reference goes out of scope, the count is decremented automatically, and when it reaches 0, the container Rc and its contained data are freed.
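
Here is a small sketch of my own (using only std) that makes the counting visible via Rc::strong_count:

use std::rc::Rc;

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    assert_eq!(Rc::strong_count(&reg), 1);
    {
        let copy = reg.clone();              // refcount goes up to 2
        assert_eq!(Rc::strong_count(&reg), 2);
        println!("{} is shared", copy);
    }                                        // `copy` dropped here
    assert_eq!(Rc::strong_count(&reg), 1);   // back to a single owner
}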

Cool, Rc is super easy to use so we can just use it in all situations where we need shared ownership? Not quite! You can't use Rc to share data between threads. So this code won't compile:

use std::rc::Rc;
use std::thread;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

It results in:

error[E0277]: the trait bound `std::rc::Rc<std::string::String>: std::marker::Send` is not satisfied in `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
--> sample5.rs:22:13
|
22 | let t = thread::spawn(move || {
| ^^^^^^^^^^^^^ within `[closure@sample5.rs:22:27: 24:6 heli:Heli]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::string::String>`
|
= note: `std::rc::Rc<std::string::String>` cannot be sent between threads safely
= note: required because it appears within the type `Heli`
= note: required because it appears within the type `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
= note: required by `std::thread::spawn`

The issue here is that to be able to share data between more than one thread, the data must be of a type that implements the Send marker trait. However, not only would implementing Send for all types be impractical, there is also a performance penalty involved: making a type like Rc safe to send between threads requires atomic reference counting, which is exactly why Rc doesn't implement Send and a separate type exists for the thread-safe case.
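
A tiny way to see the marker trait in action (a sketch of my own): the compiler accepts the Arc line (Arc is introduced just below), but rejects the Rc line with the same "cannot be sent between threads safely" error as above if you uncomment it.

use std::sync::Arc;

// Compiles only for values of types that can safely be handed to another thread.
fn assert_send<T: Send>(_value: T) {}

fn main() {
    assert_send(Arc::new("G-HONI".to_string()));
    // Uncommenting the next line fails with
    // "`Rc<String>` cannot be sent between threads safely":
    // assert_send(std::rc::Rc::new("G-HONI".to_string()));
}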

Introducing Arc


Arc stands for Atomic Reference Counting and it's the thread-safe sibling of Rc.

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

This one works, and the only difference is that we used Arc instead of Rc. Cool, so now we have a very efficient but thread-unsafe way to share data between different parts of the code, and a thread-safe mechanism as well. We're done then? Not quite! This code won't work:

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<String>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        self.status.clear();
        self.status.push_str("hovering");
        println!("{} is {}", self.reg, self.status);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new("".to_string());
    let mut heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("main: {} is {}", reg, status);

    t.join().unwrap();
}

This gives us two errors:

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:16:9
|
16 | self.status.clear();
| ^^^^^^^^^^^ cannot borrow as mutable

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:17:9
|
17 | self.status.push_str("hovering");
| ^^^^^^^^^^^ cannot borrow as mutable

The issue is that Arc is unable to handle mutation of data from different threads and hence doesn't give you a mutable reference to the contained data.

Mutex


For sharing mutable data between threads, you need another type in combination with Arc: Mutex. Let's make the above code work:

use std::sync::Arc;
use std::sync::Mutex;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<Mutex<String>>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<Mutex<String>>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        let mut status = self.status.lock().unwrap();
        status.clear();
        status.push_str("hovering");
        println!("thread: {} is {}", self.reg, status.as_str());
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new(Mutex::new("".to_string()));
    let heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });

    println!("main: {} is {}", reg, status.lock().unwrap().as_str());

    t.join().unwrap();
}

This code will work. Notice how you don't have to explicitly unlock the mutex after using it. Rust is all about scopes: when the guard returned by lock() goes out of scope, the mutex is automatically unlocked.
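
If you want to release the lock earlier than the end of the enclosing scope, you can lean on the same scoping rules, either with an inner block or by dropping the guard explicitly. A small sketch of my own:

use std::sync::Mutex;

fn main() {
    let status = Mutex::new(String::new());

    {
        let mut guard = status.lock().unwrap();
        guard.push_str("hovering");
    } // guard dropped here, mutex unlocked

    let mut guard = status.lock().unwrap();
    guard.push_str(" low");
    drop(guard); // explicit unlock, same effect as the block above

    println!("{}", *status.lock().unwrap()); // prints "hovering low"
}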

Other container types


Mutexes are rather expensive, and sometimes you have shared data between threads but not all threads are mutating it (all the time); that's where RwLock becomes useful. I won't go into details here, but it's almost identical to Mutex, except that threads can take read-only locks, and since it's possible to safely share non-mutable state between threads, it's a lot more efficient than threads locking each other out every time they access the data.
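
A minimal RwLock sketch of my own, analogous to the Mutex example above: readers take read() locks that can be held concurrently, while write() takes an exclusive lock.

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let status = Arc::new(RwLock::new("hovering".to_string()));

    let reader = {
        let status = status.clone();
        thread::spawn(move || {
            // Any number of read locks can coexist.
            let s = status.read().unwrap();
            println!("reader sees: {}", *s);
        })
    };

    {
        // The write lock is exclusive, like a Mutex.
        let mut s = status.write().unwrap();
        s.push_str(" low");
    }

    reader.join().unwrap();
    println!("main sees: {}", *status.read().unwrap());
}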

Another container type I didn't mention above is Box. The basic use of Box is as a very generic and simple way of allocating data on the heap. It's typically used to turn an unsized type into a sized type. The module documentation has a simple example of that.
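
To make the "unsized to sized" point concrete, here is a tiny sketch of my own using a trait object:

trait Fly {
    fn fly(&self);
}

struct Heli;

impl Fly for Heli {
    fn fly(&self) {
        println!("the helicopter is hovering");
    }
}

fn main() {
    // `dyn Fly` on its own is unsized, so it can't be stored in a local variable
    // or a struct field directly; Box<dyn Fly> is an ordinary sized value that
    // owns the heap allocation behind it.
    let aircraft: Box<dyn Fly> = Box::new(Heli);
    aircraft.fly();
}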

What about lifetimes


One of my colleagues who had had some experience with Rust was surprised that I didn't cover lifetimes in my talk. Firstly, I think the topic deserves a separate talk of its own. Secondly, if you make clever use of the container types available to you and described above, most often you don't have to deal with lifetimes. Thirdly, lifetimes in Rust are something that I still struggle with each time I have to deal with them, so I feel a bit unqualified to teach others about how they work.

The end


I hope you find some of the information above useful. If you are looking for other resources on learning Rust, the Rust book is currently your best bet. I am still a newbie at Rust so if you see some mistakes in this post, please do let me know in the comments section.

Happy safe hacking!

May 05, 2017

Recipes growing team

With the big push towards 1.0 now over, development of GNOME Recipes has moved to a more relaxed pace. But that doesn’t mean that nothing is happening! In fact, our team is growing: we will have two interns joining us this cycle, Ekta and Paxana.

While we are waiting for Ekta and Paxana to start working on the big projects for this cycle (sharing and unit conversion),  a number of smaller improvements have landed and will hopefully appear in a development release soon.

More recipes

We were somewhat successful in getting recipe contributions from the GNOME community.

Thanks to everybody who has contributed – keep it coming !

One consequence of this success is that we have too much data to ship with the application – the tarball for 1.0 was bigger than 100MB. To avoid this problem growing even further, the current development release downloads all recipe and image data at runtime, when needed. I’m very interested in feedback about how well that works.

More cuisines

Another consequence is that we now have so many cuisines represented that they don’t fit on one page anymore.

To address this, I’ve added an expander to show more cuisines.

Another improvement around cuisines is that we now offer all the cuisines that we know about to the cuisine combo box when you are editing recipes. A small step towards a user interface that adapts to your use of the application.

Inline editing

One of the findings of our testing session with Jakub and Tuomas at devconf was that creating a recipe was too fiddly, in particular the popover-heavy editing of the ingredients list.

To address this, we are moving to an inline editing approach for the ingredients list. To make this easier, I first refactored the ingredients list to be a separate widget which is now shared between the edit page and the details page (in a read-only mode).

Ekta is helping me with this.

Row reordering

Another outcome of our testing session was that we need to let the user reorder the ingredients list, which was not possible back in January. For 1.0, we added buttons to move a row up or down, but that was more of a stop-gap solution.

What we really want is to reorder rows by drag-and-drop, so I spent a bit of time recently to figure out drag-and-drop support for list boxes.

Temperature conversion

Last, but not least, we also added some preliminary support for unit conversion.

For now, we can display temperatures in Celsius or Fahrenheit. Currently, this gets determined by a setting, but as the next step, we are going to pick this up from the locale.

Paxana is working on this, as a warm-up for her internship project.

ZeMarmot work in progress: from animatics to animation

While production on the animation is still going full steam, we thought we would show what exactly this is about. How do you go from static images to animated ones? Well, it all happens in progressive layers, one step after another.

The Storyboard, then the Animatics

We have already talked about these at length so we won’t do it again. Feel free to check out our previous blog posts on the topic. These are the first 2 layers: comics-like static images (storyboard), and static images displayed in video (animatics).

Key Framing

In the digital world, “keyframes” is used with different meanings. In video formats, it is usually used to distinguish a standalone image in the stream from the partial images which cannot be displayed by themselves. In 3D or vector animation software nowadays, it usually refers to the extreme points of a smooth transition which is computed by an algorithm (interpolation). This is more or less the definition given by Wikipedia: “A key frame in animation and filmmaking is a drawing that defines the starting and ending points of any smooth transition.”

This definition is a little too “mechanical” and tied to the modern way of animating with vectors or 3D (actually it is not entirely true even in 3D and vector animation, but this is what one might think when discovering interpolation magic). Key frames are actually simply “important images”, as determined by the animator in a purely judgemental way. Keyframing is part of the art of the animator, more than a science. It is true that they are often the starting/ending points of movements, but this is not a necessity. Also sometimes called “key poses”, these are what the animator feels make the movement good or not, in one’s guts as an artist.

Pose to Pose vs Straight Ahead animation

When animating, there are 2 main techniques. The first method is to decompose the movement into key poses (keyframes) as a first step. Then later, when it looks good, you fill in the intermediate frames (inbetweens). This is the pose to pose method, demonstrated a bit in the above video.

In a big studio, keyframes would usually be drawn by the main animators, and the inbetweens would be left to the assistants (less experienced animators). This allows the work to be shared and done in parallel. In ZeMarmot‘s case unfortunately, Aryeom does everything, since we don’t have the funds to hire more artists as of yet.

The other method is called “Straight Ahead” and consists of doing all frames one after another without prior decomposition. Timing is much harder to plan with such a technique and you may end up wasting more drawings. On the other hand, some animators prefer the freedom it gives, and by making movements less perfect, you can also avoid them being too mechanical (in other words, perfect movements are not always what you are looking for when you want to represent living beings in their whole perfect imperfection).

Observing Aryeom, she uses both methods, depending on the cuts, as is the case for many animators.

Conclusion

Hopefully you appreciate this insight into the work behind animating life, and this small video where we display the same pieces of a scene at different steps of the work in progress, first one after another, then side by side.

You will notice that we mostly show pieces of the same scene, since we really want to try and avoid any spoilers as much as possible. 🙂

Have fun!

ZeMarmot team

Reminder: if you want to support our animation film, made with Free Software, for which we also contribute back a lot of code, and released under Creative Commons by-sa 4.0 international, you can support it in USD on Patreon or in EUR on Tipeee.
