GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

September 24, 2017

Blender Daily Doodles

If you follow me on Instagram or YouTube, you’ve probably noticed all my spare time has been consumed by flying racing drones recently. Winter is approaching, so I’d rather spare my fingers from freezing and focus on my other passion, 3D doodling.

Modifier stack explorations

This blog post is the equivalent of a new year’s resolution. I’ll probably be overwhelmed by duties and will drop out from this, but at least being public about it creates some pressure to keep trying. Feel free to help out with the motivation :)

Animation Nodes is amazing

GNOME 3.26 Release Party in Nuremberg

Last Friday, GNOME 3.26 was celebrated at the SUSE headquarters in Nuremberg with pizzas and a cake. Thanks to GNOME and openSUSE for sponsoring the event! 😃

Richard, Ayoub and Howard chatting and having pizza!

A cake with no footprint on it, hence way less cool than the one from São Paulo.

There were KDE users too! Though Ana Maria came mostly for the free cake. 😏

Oliver explaining how GNOME is tested for openSUSE with openQA.

September 22, 2017

Visual revamp of GNOME To Do

Greetings, GNOME friends!

I’m a fan of productivity. It is not a coincidence that I’m the maintainer of Calendar and To Do. And even though I’m not a power user, I’m a heavy user of productivity applications.

For some time now, I’ve found the overall experience of GNOME To Do clumsy and far from ideal. Recently, I received a thank-you email from a fellow user, and I asked them what they thought could be improved.

It was not a surprise when they said To Do’s interface is clumsy too.

That motivated me to experiment and bother our designers about ways to improve GNOME To Do. With the great help of Tobias Bernard, a super awesome contributor, we figured out a way to improve the current situation.

Opaque Task Rows

One of the problems of GNOME To Do was the translucent task rows. Priorities would be semi-transparent colors applied on top of transparent rows.

Of course this mess could lead to things like this:

Can you tell which tasks are high, medium and low priority?

After some investigation, a lot of experimentation and feedback from multiple design team members, we came up with this:

All opaque rows, with priorities at the borders.

I personally think this is a small change, but a huge improvement over the previous state. When you have to stare at task lists for hours, the minor annoyances are what cause the biggest frustrations.

Inline Editing

Another big aspect of To Do that needed work was the task editor panel. This was initially made based on some old mockups, but it proved not to be the ideal experience.

The biggest problem was that there was no connection between the editor and the task. Of course there is an arrow pointing to the task row, but consider that:

  • The task title is edited in the task row
  • All other fields are edited in the side panel
  • The arrow might not be obvious to spot
  • The real representation of the task was the row, not the panel

So Tobias suggested inline editing of tasks to me. I went ahead and implemented it, and the result actually looked very good!

Editing the task where the task is represented.

The necessary width was reduced, and the window can now be shrunk to small sizes. And it works nicely on dark themes too:

New rows on the dark theme variant

This work has already landed on master, and will be part of GNOME To Do 3.28. And, of course, here is our traditional sequence of images:

 

Any comments? Thoughts? Please let me know in the comments! And don’t ever forget, you can always get involved – you just need to get in touch and join us in #gnome-todo on irc.gnome.org.

Enjoy!

September 21, 2017

2017-09-21 Thursday.

  • Walked to the conference, abortive customer call; gave my talk:
    Collabora Online and ownCloud update - hybrid PDF
  • Caught up with Markus over lunch, talked with various customers and users, poked at a bug with Thomas.

GNOME 3.26 release party in Strasbourg

GNOME 3.26 was released yesterday.

As with the 20th birthday party we had a month ago, and a year and a half after the previous release party we held here, we gathered with a few local LUG members to have some drinks and celebrate at Brasserie le Scala. I summed up the newest additions to the environment, advocated for Flatpak, and we discussed many more topics around pints of local craft beer.

libinput and the HUION PenTablet devices

HUION PenTablet devices are graphics tablets aimed at artists. These tablets tend to aim for the lower end of the market, and driver support is often somewhere between meh and disappointing. The DIGImend project used to take care of them, but with that out of the picture, the bugs bubble up to userspace more often.

The most common bug at the moment is a lack of proximity events. On pen devices like graphics tablets, we expect a BTN_TOOL_PEN event whenever the pen goes in or out of the detectable range of the tablet ('proximity'). On most devices, proximity does not imply touching the surface (that's BTN_TOUCH or a pressure-based threshold); on anything that's not built into a screen, proximity without touching the surface is required to position the cursor correctly. libinput relies on proximity events to provide the correct tool state, which in turn is relied upon by compositors and clients.

The broken HUION devices only send BTN_TOOL_PEN once, when the pen first goes into proximity, and then never again until the device is disconnected. To make things more fun, HUION re-uses USB IDs, so we cannot even reliably detect the broken devices and do the usual approach to hardware-quirking. So far, libinput support for HUION devices has thus been spotty. The good news is that libinput git master (and thus libinput 1.9) will have a fix for this. The one thing we can rely on is that tablets keep sending events at the device's scanout frequency while the pen is in range. So in libinput we now add a timeout to these tablets; when it expires, we assume a proximity-out has happened. libinput fakes a proximity-out event and waits for the next event from the tablet - at which point we fake a proximity-in before processing the events. This is enabled on all HUION devices (re-using USB IDs, remember?) but not on any other device.
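Sketched in C, the idea looks something like this (an illustration of the approach only, not libinput's actual code; the struct and function names here are all made up):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-tablet state for the proximity-timeout quirk. */
struct tablet {
    bool in_proximity;
    uint64_t timeout; /* absolute time at which we fake a proximity out */
};

/* Illustrative value only; the real timeout would be derived from how
 * often the tablet sends events while the pen is in range. */
#define PROX_TIMEOUT_US 50000

static void queue_fake_proximity_in(struct tablet *t)
{
    /* emit a fake BTN_TOOL_PEN 1 to the rest of the stack */
}

static void queue_fake_proximity_out(struct tablet *t)
{
    /* emit a fake BTN_TOOL_PEN 0 to the rest of the stack */
}

/* Called for every event coming from the tablet. */
static void tablet_handle_event(struct tablet *t, uint64_t now)
{
    if (!t->in_proximity) {
        /* The device went quiet earlier and we faked a proximity out;
         * fake a proximity in before processing this event. */
        queue_fake_proximity_in(t);
        t->in_proximity = true;
    }

    /* Push the deadline forward: while the pen is in range, the
     * tablet keeps sending events at its scanout frequency. */
    t->timeout = now + PROX_TIMEOUT_US;
}

/* Called when the timer fires without any intervening event. */
static void tablet_timeout_expired(struct tablet *t)
{
    queue_fake_proximity_out(t);
    t->in_proximity = false;
}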

One down, many more broken devices to go. Yay.

September 20, 2017

A new gutter for Builder

The GtkSourceView library has this handy concept of a GtkSourceGutterRenderer. They are similar in concept to a GtkCellRenderer but for the gutter to the left or right of the text editor.

Like GtkCellRenderers, you pack them into a container and they are placed one after another with some amount of optional spacing in between. This is convenient because you can start quickly by mixing and matching what you need from existing components. Those include text (such as line numbers), pixbuf rendering, or even code folding regions.
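To give an idea of what that composition looks like in code, here is a minimal sketch against the GtkSourceView 3.x API, as I understand it (adding a pixbuf renderer next to whatever renderers already exist):

#include <gtksourceview/gtksource.h>

/* Attach a pixbuf renderer to the left gutter of a GtkSourceView.
 * It will sit alongside the other renderers (line numbers, marks,
 * etc.) that are already packed into the gutter. */
static void
add_icon_renderer (GtkSourceView *view)
{
  GtkSourceGutter *gutter;
  GtkSourceGutterRenderer *renderer;

  gutter = gtk_source_view_get_gutter (view, GTK_TEXT_WINDOW_LEFT);

  renderer = gtk_source_gutter_renderer_pixbuf_new ();
  gtk_source_gutter_renderer_set_size (renderer, 16);
  gtk_source_gutter_renderer_set_padding (renderer, 2, 0);

  /* Renderers are laid out left-to-right by position; each one
   * added this way grows the total width of the gutter. */
  gtk_source_gutter_insert (gutter, renderer, 0);
}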

However, there is a cost to this sort of composition. One is function call overhead, but that isn’t particularly interesting to me because there are ways to amortize that away (like we did with the pixel cache). The real problem is one of physical space. Each time a renderer is added, the width of the gutter is increased.

Builder 3.26.0 added a new column for breakpoints, and so we increased our width by another 18 pixels or so. Enough to be cumbersome. It looked like the following, which has four renderers displayed.

  • Breakpoints renderer
  • Diagnostics renderer
  • Line numbers
  • Line changes (git)

Once you reach some level of complexity, you need to bite the bullet and implement a single renderer that has all the features you want in one place. It allows you to overlap content for density and use the background itself as a component. We just did that for Builder and here is what it looks like.

There are a couple of other nice performance wins from implementing the gutter as a single renderer. We can take a number of “shortcuts” in the render path that a generic renderer cannot without sacrificing flexibility. Since the gutter is not pixel cached, this has improved the performance of kinetic scrolling on various HiDPI displays. There is always more performance work to do, but I’m rather happy with the result so far.

You’ll find this in the upcoming 3.26.1 release of Builder, and it is already available in Builder’s Nightly flatpak.

2017-09-20 Wednesday.

  • Up unfeasibly early, coach to Stansted for an early flight to Nuremberg for the ownCloud conference. Poked at slides. Arrived, good to catch up with lots of people, out for dinner with Holger, Thomas, Michael & some CERN guys in the evening.

Bluetooth on Fedora: joypads and (more) security

It's been a while since I posted about Fedora-specific Bluetooth enhancements, and even longer since I posted about PlayStation controller support.

Let's start with the nice feature.

Dual-Shock 3 and 4 support

We've had support for Dual-Shock 3 (aka Sixaxis, aka PlayStation 3 controllers) for a long while, but I've added a long-standing patchset to the Fedora packages that changes the way devices are set up.

The old way was: plug in your joypad via USB, disconnect it, and press the "P" button on the pad. At this point, and since GNOME 3.12, you would have needed the Bluetooth Settings panel open for a question to pop up about whether the joypad can connect.

This is broken in a number of ways. If you were trying to just charge the joypad, then it would forget its original "console" and you would need to plug it in again. If you didn't have the Bluetooth panel open when trying to use it wirelessly, then it just wouldn't have worked.

Setup is now simpler. Open the Bluetooth panel, plug in your device, and answer the question. You just want to charge it? Dismiss the query, or simply don't open the Bluetooth panel; it'll work dandily and won't overwrite the joypad's settings.


And finally, we also made sure that it works with PlayStation 4 controllers.



Note that the PlayStation 4 controller has a button combination that allows it to be visible and pairable, except that if the device trying to connect with it doesn't behave in a particular way (probably the same way the 25€ RRP USB adapter does), it just won't work. And it didn't work for me on a number of different devices.

Cable pairing for the win!

And the boring stuff

Hey, do you know what happened last week? There was a security problem in a package that I glance at sideways sometimes! Yes. Again.

A good way to minimise the damage caused by problems like this one is to lock the program down. In much the same way that you'd want to restrict thumbnailers, or even end-user applications, we can forbid certain functionality from being available when launched via systemd.

We've finally done this in recent fprintd and iio-sensor-proxy upstream releases, as well as for bluez in Fedora Rawhide. If testing goes well, we will integrate this in Fedora 27.
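To give a flavour of what this kind of lockdown looks like, here is a generic example of systemd hardening directives in a service unit (illustrative only — not the exact set used by fprintd, iio-sensor-proxy or bluez):

[Service]
# Make /usr, /boot and /etc read-only for the service
ProtectSystem=full
# Hide users' home directories from the service
ProtectHome=true
# Give the service its own private /tmp
PrivateTmp=true
# Prevent the service and its children from gaining new privileges
NoNewPrivileges=true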

Ubuntu GNOME Shell in Artful: Day 13

Now that GNOME 3.26 is released, available in Ubuntu artful, and the final GNOME Shell UI is confirmed, it’s time to adapt our default user experience to it. Let’s discuss how we worked with Dash to Dock upstream on the transparency feature. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 13: Adaptive transparency for Ubuntu Dock

GNOME Shell’s excellent new 3.26 release thus ships with dynamic panel transparency by default. If no window is next to the top panel, the bar itself is translucent. If any window is next to it, the panel becomes opaque. This feature is highlighted in the GNOME 3.26 release notes. As we already discussed in a previous blog post, it means that the Ubuntu Dock default opacity level doesn’t fit very well with the transparent top panel on an empty desktop.

Previous default Ubuntu Dock transparency

Even though there were some discussions within GNOME about keeping or reverting this dynamic transparency feature, we reached out to the Dash to Dock folks during the 3.25.9x period to be prepared. Some excellent discussions then started on the pull request, which was already rolling full speed ahead.

The first idea was to have dynamic transparency: one status for the top panel, and another one for the Dock itself. However, this gave a somewhat weird user experience after playing with it a little bit:

It feels like there is too much flickering, with both parts of the UI behaving independently. The idea I raised upstream was thus to consider all Shell UI (which is, in the Ubuntu session, the top panel and Ubuntu Dock) as a single entity, with their opacity status linked as one UI element. François agreed, having had the same idea in mind, and implemented it. The result is way more natural:

These behaviors are implemented as options in the Dash to Dock settings panel, and we just set this last one as the default in Ubuntu Dock.

We made sure that this option is working well with the various dock settings we expose in the Settings application:

In particular, you can see that intelli-hide is working as expected: the dock opacity changes while the Dock is vanishing, and when forcing it to show up again, it’s at the maximum opacity that we set.

The default with no application next to the panel or dock now looks quite good:

Default empty Ubuntu artful desktop

The best part is the following: as we get closer to release, and with a little bit of work still pending upstream to merge the options and settings UI into Dash to Dock itself (which doesn’t impact Ubuntu Dock), Michele has prepared a cleaned-up branch that we can cherry-pick from directly into our ubuntu-dock branch, and that they will keep compatible with master for us! Now that the Feature Freeze and UI Freeze exceptions have been approved, the Ubuntu Dock package is currently building in the artful repository alongside other fixes and some shortcut improvements.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

It’s really a pleasure to work with the Dash to Dock upstream; I’m using this blog opportunity to thank them again for everything they do and for how easy they make cooperation for our use case.

September 19, 2017

Polari joined the Gitlab pilot project

I am happy to announce that yesterday Polari joined the Gitlab pilot project. This means that if you encounter any issue that has not been reported yet, it should now be reported in GNOME’s gitlab instance.

Happy hacking!

Launching Pipewire!

In quite a few blog posts I’ve been referencing Pipewire, our new Linux infrastructure piece to handle multimedia under Linux better. Well, we are finally ready to formally launch Pipewire as a project, and have created a Pipewire website and logo.

To give you all some background, Pipewire is the latest creation of GStreamer co-creator Wim Taymans. The original reason it was created was that we realized that as desktop applications moved towards primarily being shipped as containerized Flatpaks, we would need something for video similar to what PulseAudio was doing for audio. As part of his job here at Red Hat, Wim had already been contributing to PulseAudio for a while, including implementing a new security model for PulseAudio to ensure we could securely have containerized applications output sound through PulseAudio. So he set out to write Pipewire, although initially the name he used was PulseVideo. As he was working on figuring out the core design, he came to the conclusion that designing Pipewire to just be able to do video would be a mistake, as a major challenge he was familiar with from working on GStreamer was how to ensure perfect audio and video synchronisation. If both audio and video could be routed through the same media daemon, then ensuring audio and video worked well together would be a lot simpler, and frameworks such as GStreamer would need to do a lot less heavy lifting to make it work. So just before we started sharing the code publicly we renamed the project to Pinos, named after Pinos de Alhaurín, a small town close to where Wim lives in southern Spain. In retrospect Pinos was probably not the world’s best name :)

Anyway, as work progressed Wim decided to also take a look at JACK, as supporting the pro-audio use case was an area PulseAudio had never tried to address. We felt that if we could ensure Pipewire supported the pro-audio use case in addition to consumer-level audio and video, it would improve our multimedia infrastructure significantly and make pro-audio a first class citizen on the Linux desktop. Of course, as the scope grew, the development time got longer too.

Another major use case for Pipewire was that we knew that with the migration to Wayland we would need a new mechanism to handle screen capture, as the way it was done under X was very insecure. So Jonas Ådahl started working on creating an API we could support in the compositor and use Pipewire to output. This is meant to cover everything from single-frame capture like screenshots, to local desktop recording and remoting protocols. It is important to note here that what we have done is not just implement support for a specific protocol like RDP or VNC; we ensured there is an advanced infrastructure in place to support any protocol on top of it. For instance, we will be working with the Spice team here at Red Hat to ensure SPICE can take advantage of Pipewire and the new API. We will also ensure Chrome and Firefox support this so that you can share your Wayland desktop through systems such as Blue Jeans.

Where we are now
So after multiple years of development we are now landing Pipewire in Fedora Workstation 27. This initial version is video only as that is the most urgent thing we need supported for Flatpaks and Wayland. So audio is completely unaffected by this for now and rolling that out will require quite a bit of work as we do not want to risk breaking audio on your system as a result of this change. We know that for many the original rollout of PulseAudio was painful and we do not want a repeat of that history.

So I strongly recommend grabbing the Fedora Workstation 27 beta to test Pipewire, and check out the new website at Pipewire.org and the initial documentation at the Pipewire wiki. Especially interesting are probably the pages that will eventually outline our plans for handling the PulseAudio and JACK use cases.

If you are interested in Pipewire please join us on IRC in #pipewire on freenode. Also, if things go as planned, Wim will be on Linux Unplugged tonight talking to Chris Fisher and the Unplugged crew about Pipewire, so tune in!

Reflection on trip to Kiel

On Sunday, I flew home from my trip to Kiel, Germany. I was there for the Kieler Open Source und LinuxTage, September 15 and 16. It was a great conference! I wanted to share a few details while they are still fresh in my mind:

I gave a plenary keynote presentation about FreeDOS! I'll admit I was a little concerned that people wouldn't find "DOS" an interesting topic in 2017, but everyone was really engaged. I got a lot of questions—so many that we had to wrap up before I could answer them all.

FreeDOS has been around for a long time. We started FreeDOS in 1994, when I was still an undergraduate physics student. I loved DOS at the time, and I was upset that Microsoft planned to eliminate DOS when they released the next version of Windows. If you remember, the then-current version was Windows 3.1, and it wasn't great. And Windows's history up to this point wasn't promising: Windows 1 looked pretty much like Windows 2, and Windows 2 looked like Windows 3. I decided that if Windows 4 would look anything like Windows 3.1, I wanted nothing to do with it. I preferred DOS to clicking around the clumsy Windows interface. So I decided to create my own version of DOS, compatible with MS-DOS so I could continue to run all my DOS programs.

We recently published a free ebook about the history of FreeDOS. You can find it on our website, at 23 Years of FreeDOS.

My second talk that afternoon was about usability testing in open source software. The crowd was smaller, but they seemed very engaged during the presentation, so that's good.

I talked about how I got started in usability testing in open source software, and focused most of my presentation on the usability testing we've done as part of the Outreachy internships. I highlighted the GNOME usability testing from my interns throughout my participation in Outreachy: Sanskriti, Gina, Renata, Ciarrai, and Diana.

Interesting note: Ciarrai's paper prototype test on the then-proposed Settings redesign will be published this week on OpenSource.com, so watch for that.

The conference recorded both presentations, and they'll be uploaded to YouTube in the next few days. I'll link to them when they are up.

September 18, 2017

On Noticing That Your Project Is Draining Your Soul

I was talking with a fellow consultant about what to do if you have a gig getting you down. Especially when you realize that the client isn't being helpful, and there's a bunch of learning curves that are exhausting you, and you still have several weeks to go.

In my master's in tech management coursework, I learned the lens that thriving is a function of a person times their environment. I think those of us who are used to trying harder, overcoming obstacles, etc. can be -- kind of out of self-protective instinct -- bad at noticing "this environment is so crappy it makes it systematically hard for me to achieve and thrive". Especially with short-term projects. At first, things like "I feel tired" or "ugh, new thing, I don't want to learn this and be bad at it (at first)" and "I'm worried that person doesn't like me" or "they missed the email/meeting/call and now it's harder to execute the plan" are identical to problems that we are reasonably sure we can overcome. Maybe we notice patterns about what's not working but think: I can take initiative to solve this, myself, or with my few allies.

Several papercraft pieces I made out of gold-colored wrapping paper, some alike and some different

The data points accumulate and we chat with other people and, in the process, learn more data points and shape our data points into narratives and thus discover: this is a bad environment, structurally. But by the time we really figure out the effect a short-term project is having on us, it's supposedly the home stretch.

I'm looking back at gigs that I found draining, where, eventually, I had this realization, although I have never quite framed it this way before now. On some level I realized that I could not succeed by my own standard in these projects/workplaces, because there was so much arrayed against me (e.g., turf war, a generally low level of skill in modern engineering practices, lack of mission coherence, low urgency among stakeholders) that I could not do the things that it is kind of a basic expression of my professional self/competence to do.*

So I had to change what it was I aimed to achieve. For example, I've had a gig where I was running my head into the wall constantly trying to bring better practices to a project. I finally talked with an old hand at the organization and learned the institutional reasons this was practically impossible, why I would not be able to overcome the tectonic forces at play and get the deeper conflicts to resolve any faster. So we changed what I was trying to do. Running a daily standup meeting, by itself, is a thing I can do to bring value. I changed my expectations, and made mental notes about the pain points and the patterns, because I could not fix them right away, but I can use those experiences to give better advice to other people later.

An editor recently told me that, in growing as an editor, he'd needed to cultivate his sense of boredom. He needed to listen to that voice inside him that said this is boring me -- and isn't that funny? Parents and teachers tell us not to complain about being bored -- "only boring people are bored", or -- attributed to Sydney Wood -- An educated man is one who can entertain a new idea, another person, or himself. But pain is a signal, boredom is a signal, aversion and exhaustion are signals. Thriving is a function of a person times their environment.

Also, the other day I read "Living Fiction, Storybook Lives" (which has spoilers for Nicola Griffith's excellent novel Slow River).

How come I spent many years living a rather squalid existence... yet managed to find my way out, to the quite staid and respectable life I have now, when others in the same situation never escaped? In the course of writing the book, I found that the answer to my question was that the question itself was not valid: people are never in the same situation.

It takes substantial introspection and comparison to figure out: what kind of situation am I in, both externally and internally? Is it one where I will be able to move the needle? It gets easier over time, I think, and easier if I take vacations so I can have a fresh perspective when I come back, share my stories with others and listen to their stories, and practice mindfulness meditation so I am better at noticing things (including my own reactions). Maybe "wisdom" is what feels like the ability to X-ray a messy blobby thing and see the structures inside, see the joints that can bend and the bones that can't. In some ways, my own motivation and mindfulness are like that for me -- I need to recognize the full truth of the situation I'm in, internally and externally, to see what needs changing, to see how I might act.

The thing that gets me down most, on exhausting projects, is the meta-fear that nothing will improve, that I am stuck. When I realize that, I try to attend to that feeling of stuckness. Sometimes the answer is in the problem.


* As Alexandra Erin discusses, regarding her political commentary via Twitter threads: "I do the work I do on here because I feel called to it. For the non-religious, I mean: I have a knack for it and I find meaning in it."

Summer Hiatus (Unscheduled)

Hey, my blog's back up. Quick summary: I'm one of the bloggers at the new Geek Feminism blog, I got a work visa for the United Kingdom in case Leonard & I move there late this year, the Nokia N900 smartphone launched complete with a bunch of software Collabora's written, I'm denting, and basically I'm fine. Thanks Open Computing Facility staffers for your work and achievements.

September 17, 2017

GUADEC 2017: timeline

After the statistics perhaps you are interested in reading a timeline of GUADEC 2017! In particular you can compare it to the burn down chart from the GUADEC HowTo and see how that interacts with reality.

Of course lots of details are excised from this overview but it gives a general sense of the timings. In some follow-up posts I’ll go into more detail about what I think went well and what didn’t. We also welcome your feedback on the event (if you can still remember it 🙂).

Summer 2014: At some point during GUADEC 2014 I start going on about doing a Manchester edition.

August 2015: Alberto and Allan both float the idea of doing a Manchester bid with me; it seems like there’s just about enough of a team to go for it. I was already planning to be away in summer 2016 at this point so we decided to target 2017.

Alberto has a friend working at MIDAS who gives us a good start and we end up meeting with the Marketing Manchester conference bureau, the University of Manchester and Manchester Metropolitan University.

The meeting with University of Manchester was discouraging (to be honest, they seemed to be geared up only for corporate conferences rather than volunteer-driven events) but Manchester Metropolitan were much more promising.

Winter 2015: We lost touch with MMU for a few months (presumably as University started back up), but we eventually got a proper contact in the conferences department and started moving forwards with the bid.

Spring 2016: Our bid is produced, with Marketing Manchester doing most of the content and layout (as you might be able to tell). Normally I would worry to see only one GUADEC bid on the table but, having been thinking about our bid for almost a year already, I was also glad that it looked like we’d be the main option.

Summer 2016: GUADEC 2016 in Karlsruhe; Manchester is selected as the location for 2017. Much rejoicing (although I am on a 9000 mile road trip at the time).

August 2016: Talks begin with the venue about drawing up contracts for the venue and accommodation. The venue was reasonably painless to sort out but we spent lots of time figuring out accommodation; the University townhouses required final numbers and payment 6 months in advance of the event, so we spent a lot of time looking into other options (but ended up deciding that the townhouses would be best even though we would inevitably lose a bit of money on them).

September 2016: We begin holding monthly-ish meetings with myself, Alberto, Allan and Javier present. Work begins on the sponsorship brochure (which is complicated by needing to coordinate with GNOME.Asia and potentially LAS), and talks continue with the venue.

December 2016: Contracts finally signed for venue and accommodation (4 months later!), conference dates finalized. We apply for a UK bank account as an “unincorporated association”. Discussion begins about the website, we decide to hold off on announcing the dates until we have some kind of website in place.

January 2017: Basic website finished, dates announced. Lots of work on getting the registration system ready. We begin meeting each week on a Monday evening. Initial logo made by Jakub and Allan.

February 2017: Trip to FOSDEM, where we put up a few GUADEC posters. Summer still seems a long way off. Codethink sponsorship confirmed. We start thinking about keynote speakers. Javier and Lene look into social event venues, including somewhere for the 20th birthday party (with hearts already set on MOSI). The search for a new Executive Director for GNOME finally comes to a close with Neil McGovern being hired, and he soon starts joining the GUADEC calls and helping out (in particular with the search for sponsors, which up til now has been nearly all Alberto’s work).

March 2017: After 4 months of bureaucracy, our bank account is finally approved. After much hacking and design work, we can finally open registration and the call for papers. We have to finalize room numbers at the University already, although most rooms are still unbooked. Investigation into getting a GNOME Beer brewed (which ended up going nowhere, sadly). Requests for visa invites begin to arrive.

April 2017: Lots of planning for social events, the talk days and the unconference days. PIA sponsorship confirmed. Posters being designed. Call for papers closes, voting begins and Kat starts putting together the talks schedule.

May 2017: Birthday planning with help from the engagement team (in particular Nuritzi). The University temporarily decides that we’ll have to pay staff costs of £500 per day to have the canteen open; we do a bunch of research into alternatives but then go back to the previous agreement of having the canteen open with just a minimum spend. Planning of video recording and design. Schedule and social events planning.

June and July 2017: Continual planning and discussion of everything. More sponsors confirmed. Allan does prodigious amounts of graphic design and organizes printing. Travel sponsorship finally confirmed and lots of visa invitation requests start to arrive. Accommodation bookings continue to come in, along with an increasing amount of queries, changes and cancellations that become quite time-consuming to keep track of and respond to. Evening events being booked and finalized, including more planning of the birthday party with Nuritzi. Discussions of how to make sure the conference is inclusive to newcomers. Water bottles, cake and T-shirts ordered. Registrations keep coming in until we actually hit and go over 200 registrations. We contact volunteers and come up with a timetable.

Finally, the day before GUADEC we collect the last of the printing, bring everything to the venue and hole up in a room on the 2nd floor, ready to pre-print names on badges and stuff the lanyard pouches with gift bags. We discover two major issues: firstly, the ink on the badges gets completely smudged when we run them through the printer to print a name on them; and secondly, the emergency telephone number that we’ve printed on the badges has actually been recycled, as the SIM card was inactive for a while, and now goes through to some poor unsuspecting 3rd party.

We lay out all the badges to try and dry the ink out but 3 hours later the smudging is still happening. We realise that the names will just have to be drawn on with marker pens. As for the emergency telephone… if you look closely at a GUADEC 2017 badge you’ll notice that there’s a sticky label with the correct number covering up the old number on the badge. Each one of these was printed onto stickyback paper and lovingly chopped out and stuck on by hand. You’re welcome! (Nobody actually called the emergency phone during the event).

Javier pointed out that we should be at the registration event at least an hour early (it started at 18:00). I said this was nonsense because most people wouldn’t get there til later anyway. How wrong I was!!! I’m used to organizing music events where people arrive about an hour after you tell them to, but we got to Kro Bar about 17:45 and it was already full to bursting with eager GNOME contributors, many of whom of course hadn’t seen each other for months. This was not the ideal environment to try and set up a registration desk for the first time, and I mostly just stood around looking at boxes feeling confused and occasionally moving things around. Thankfully Kat and Benjamin soon arrived and made registration a reality, leaving me free to drink a beer and remain confused.

And the rest is history!


Documentation needs usability, too

If you're like most developers, writing documentation is hard. More so if you are writing for end-users. How do you approach writing your documentation?

Remember that documentation needs good usability, too. If documentation is too difficult to read—if it's filled with grammatical mistakes, or the vocabulary is just too dense, or even if it's just too long—then few people will bother to read it. Your documentation needs to reach your audience where they are.

Finding the right tone and "level" of writing can be difficult. When I was in my Master's program, I referred to three different styles of writing: "High academic," "Medium academic," and "Low academic."

High academic is typical for many peer-reviewed journals. This writing is often very dense and uses large words that demonstrate the author's command of the field. High academic writing can seem very imposing.

Medium academic is more typical of undergraduate writing. It is less formal than high academic, yet more formal than what you find in the popular press.

Low academic tends to include most professional and trade publications. Low academic authors may sprinkle technical terms here and there, but generally write in a way that's approachable to their audience. Low academic writing uses contractions, although sparingly. Certain other formal writing conventions continue, however. For example, numbers should be written out unless they are measurements; "fifty" instead of "50," and "two-thirds" instead of "2/3." But do use "250 MB" and "1.18 GHz."

In my Master's program, I learned to adjust my writing style according to my instructors' preferences. One professor might have a very formal attitude towards academic writing, so I would use High academic. Another professor might approach the subject more loosely, so I would write in Medium academic. When I translated some of my papers into articles for magazines or trade journals, I wrote in Low academic.

And when I write my own documentation, I usually aim for Low academic. It's a good balance of professional writing that's still easy to read.

To make writing your own documentation easier, you might also consult the Google Developer Documentation Style Guide. Google just released their guide for anyone to use. The guide has guidelines for style and tone, documenting future features, accessible content, and writing for a global audience. The guide also includes details about language, grammar, punctuation, and formatting.

GNOME 3.26 is here

…and I did a video! (click the picture below to watch it)


Activity on the GNOME 3.26 Release Video

You might notice by comparing with the 3.24 release video that I’ve been considerably less active on this cycle’s video. The biggest factor playing into this is that I have moved to Brisbane, Australia, where I will be staying for the next few months (it’s lovely btw!), with less time to contribute. Secondly, the time span between GUADEC and release has been considerably shorter, which has put some pressure on this cycle’s release material. An unfortunate consequence of this is that translators have had very little time to translate the video.


The GNOME 3.26 Release video in Blender’s VSE.

To make the video efficiently I have skipped much content in the animation step. The manuscript has been tailored to only concern the screencasts, which has meant that I could focus my time on editing everything together in the Video Sequence Editor. This limits what I can do creatively, but I also learned that for some aspects of the video, simple is better. I was initially working with Simon on music, but he unfortunately fell ill prior to the release, so this time we have used a nice soundtrack from the YouTube Audio Library.

My plan for next time is to start earlier and try to get a collaboration going with developers about screencasting as soon as new features land. Having fellow contributors helping me screencast really saves me a lot of energy and time – which I in turn can put into making a better release video. Acting earlier should hopefully give me a better opportunity to write the manuscript and send it off to Karen for voice-over production, so we can have timing in place as early as possible. This should give me and Simon better room to closely collaborate on audio and visuals and have the video translated in as many languages as possible before release.

I’d like to thank everyone who helped me with this video, you know who you are! :)

3.26 Release Party in São Paulo

We had a Release Party in São Paulo, Brazil. It happened on the release day, and it was absolutely great:


And we had a super awesome cake too!


What else could we ask for? 🙂

September 16, 2017

Fun with fonts

I had the opportunity to spend some time in Montreal last week to meet with some lovely font designers and typophiles around the ATypI conference.

At the conference, variable fonts celebrated their first birthday. This is a new feature in OpenType 1.8 – but really, it is a very old feature, previously known under names like multiple master fonts.

The idea is simple: A single font file can provide not just the shapes for the glyphs of a single font family, but also axes along which these shapes can be varied to generate multiple variations of the underlying design. An infinite number, really. Additionally, fonts may pick out certain variations and give them a name.

A lot has to happen to realize this simple idea. If you want to get a glimpse at what is going on behind the scenes, you can look at the OpenType spec.

A while ago, Behdad and I agreed that we want to have font variations available in the Linux text rendering stack. So we used the opportunity of meeting in Montreal to work on it. It is a little involved, since there are several layers of libraries that all need to know about these features before we can show anything: freetype, harfbuzz, cairo, fontconfig, pango, gtk.

freetype and harfbuzz are more or less ready with APIs like FT_Get_MM_Var or hb_font_set_variations that let us access and control the font variations. So we concentrated on the remaining pieces.
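As an example of what the harfbuzz side looks like, here is a minimal sketch that sets the weight axis on a font (the 'wght' tag is a registered axis, but which axes actually exist, and their valid ranges, comes from the font itself):

#include <hb.h>

/* Set the 'wght' (weight) variation axis on a font. */
static void
set_weight (hb_font_t *font, float weight)
{
  hb_variation_t variation;

  variation.tag = HB_TAG ('w', 'g', 'h', 't');
  variation.value = weight;

  hb_font_set_variations (font, &variation, 1);
}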

As the conference comes to a close today, it is time to present how far we got.

This video is showing a font with several axes in the Font Features example in gtk-demo. As you can see, the font changes in real time as the axes get modified in the UI. It is worth pointing out that the minimum, maximum and default values for the axes, as well as their names, are all provided by the font.

This video is showing the named variations (called Instances here) that are provided by the font. Selecting one of them makes the font change in real time and also updates the axis sliders below.

Eventually, it would be nice if this was available in the font chooser, so users can take advantage of it without having to wait for specific support in applications.

This video shows a quick prototype of how that could look. With all these new font features coming in, now may be a good time to have a hackfest around improving the font chooser.

One frustrating aspect of working on advanced font features is that it is just hard to know if the fonts you are using on your system have any of these fancy features, beyond just being a bag of glyphs. Therefore, I also spent a bit of time on making this information available in the font viewer.

And that’s it!

Our patches for cairo, fontconfig, pango, gtk and gnome-font-viewer are currently under review on various mailing lists, bugs and branches, but they should make their way into releases in due course, so that you can have more fun with fonts too!

GUADEC 2017 by numbers

I’m finally getting around to doing a bit of a post-mortem for the 2017 edition of GUADEC that we held in Manchester this year. Let’s start with some statistics!

GUADEC 2017 had…

  • 264 registrations (up from 186 last year)
  • 209 attendees (up from 160 last year)
  • 72 people staying at the University (30 of whom had sponsorship awarded by the travel committee)
  • 7 people who were sadly unable to attend because their visa application was refused at the last minute

We put four optional questions on the registration form asking for your country of residence, your age, your gender identity and how you first heard about GUADEC. The full set of responses (anonymous, of course) is available here.

I don’t plan to do much data mining of this, but here are some interesting stats:

  • 61 attendees said they are resident in the UK, roughly 32%.
  • The most common age of attendees was 35 (the full age range was between 11 years and 65 years)
  • 14 attendees said they heard about the conference through working at Codethink

We asked for an optional, “pay as you feel” donation towards the costs of the conference at registration time and we suggested payments of £15/€15 for students, £40/€40 for hobbyists and £150/€150 for professionals.

  • 47 attendees (22%) chose to donate nothing
  • 29 attendees (13%) donated £/€1-15
  • 75 attendees (36%) donated £/€16-40
  • 51 attendees (24%) donated more than £/€40
  • 7 attendees somehow chose “NULL” (I think these were on-site registrations, which followed a different process)

Note that we told Codethink staff that they shouldn’t feel required to donate from their company-provided conference budget as Codethink was already sponsoring at Platinum level, which should account for 15 or more of the people who chose to donate nothing with their registration.

The financial side of things is tricky for me to summarize as the sponsor money and registration donations mostly went straight to the Foundation’s bank account, which I don’t have access to. The fluctuation of GBP against the US dollar makes my own budget spreadsheet even less reliable, but I estimate that we raised around $10,000 USD for the GNOME Foundation from GUADEC 2017. This is of course only possible due to the generosity of our sponsors, and through the great work that Alberto and Neil did in this area.

My van did 94 miles around Manchester during the week of GUADEC. My house is only 4 miles from the centre so this is surprisingly high!

 


…and today is Software Freedom Day!

For its fourteenth edition the Digital Freedom Foundation is happy to celebrate Software Freedom Day! At the time of this writing we have 112 teams listed on the wiki and 80+ events registered. Over the years we’ve noticed that this “double registration process” (creating a wiki page and then filling in the registration form) is a bit difficult for some of our participants and we wish to change that. In the coming months we plan to move to a single registration process which will in turn generate a wiki page. We also want to display the event date, as some of us cannot celebrate exactly on this international day due to local celebrations or other reasons.

Nevertheless, please do look at the map and the country listing and find an event in your area. If there is none, maybe it’s a good opportunity to start an event with your community! And then join us and celebrate!

Happy SFD to all!

September 14, 2017

Cheese's pipeline

I have been reading the source code of Cheese. I wanted to get an idea of how it works, so this is what I understand. Below the diagram you will see an explanation of it; if you know about GStreamer you may want to skip the explanation.


Cheese reads some data from the selected webcam device using the bin camera_source. In parallel, Cheese uses an autoaudiosrc to capture data from the sound card. The data from the webcam is sent to a tee, whose job is to duplicate the output.

Firstly, one of the outputs of this tee is passed to an element called element_bin_preview (this also uses a tee with nine outputs… but that’s not shown in the diagram). element_bin_preview has nine outputs; the data that flows out of each one is passed to a filter and finally to a cluttersink. As its name suggests, element_bin_preview is used to preview the effects in the 3x3 “grid” shown when you click Cheese’s “Effects” button.

Secondly, the data that flows from the other output of the tee is sent to an element called current_selected_filter, which is the filter the user has currently selected by clicking the grid of effects in Cheese. When recording video, the output flowing from one of the tee’s outputs is sent, in parallel with the autoaudiosrc, to an encodebin which (of course) encodes and muxes the streams and passes them to a filesink, so you end up with the result (a video file) on your disk. When using Cheese’s “burst mode”, the second output of the tee is passed to a filesink, which then captures a bunch of frames (by default 4 in Cheese), saving the output to disk. The filesink usually receives an index in its filename, so when you take pictures in “burst mode” you will usually see base file names with a number from 1 to 4 as a suffix. Finally, the third output of the tee is used to show the camera output, with the currently selected filter applied, in the Cheese main window using a cluttersink element.
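If you want to play with the general shape of such a pipeline yourself, here is a simplified sketch using gst_parse_launch (an approximation for illustration only — Cheese builds its pipeline programmatically with its own bins and a cluttersink, not these stock elements):

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  GstElement *pipeline;
  GMainLoop *loop;

  gst_init (&argc, &argv);

  /* Webcam source feeding a tee: one branch shows a live preview,
   * the other encodes frames to numbered JPEG files on disk. */
  pipeline = gst_parse_launch (
      "v4l2src ! videoconvert ! tee name=t "
      "t. ! queue ! autovideosink "
      "t. ! queue ! jpegenc ! multifilesink location=frame-%d.jpg",
      &error);

  if (error != NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  g_main_loop_unref (loop);
  return 0;
}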

Builder 3.26 has landed

We’ve updated our Wiki page to give people discovering our application more insight into what Builder can do. You can check it out at wiki.gnome.org/Apps/Builder.

Furthermore, you’ll see prominent links to download either our Stable (now 3.26) or Nightly (3.27) builds via Flatpak.

We have also continued to fill in gaps in Builder’s documentation. If Builder is missing a plugin to do something you need, it’s high time you started writing it. ;-)

We want plugins upstream for the same reason the Linux kernel does. It helps us ship better software and avoid breaking API you use.

PSA: All newcomers apps build again

For some time a few of our newcomers apps were not building due to an issue with Builder and Flatpak.

Finally the issues are fixed, in Flatpak 0.9.10 and in Builder stable (which you will have from Flatpak if you followed the Newcomers tutorial). The only thing needed is to update Builder and make sure Flatpak is at version 0.9.10.

If your distribution already has the latest Flatpak (e.g. Fedora 26 or newer) and GNOME Software 3.24 (e.g. Fedora 26 or newer), you simply need to go to Software, click the refresh icon, wait for the updates to download and then click “Update all”.

If you want to do it in the command line:

flatpak update          # updates system-wide Flatpak installations
flatpak update --user   # updates per-user Flatpak installations

And to update Flatpak in Fedora

sudo dnf update -y

Apologies for having broken builds of the Newcomers apps; we definitely need a way to ensure Builder integration with apps is tested regularly in the future!

Feel free to report if something is not working as expected in our newcomers channel.

Remember you can start contributing to GNOME apps and technologies in a few minutes following our newcomers guide, give it a look!


Downloading RHEL 7 ISOs for free

A year and a half ago, frighteningly close to 1st April, Red Hat announced the availability of a gratis, self-supported, developer-only subscription for Red Hat Enterprise Linux and a series of other products. Simply put, if you went to developers.redhat.com, created an account and clicked a few buttons, you could download a RHEL ISO without paying anything to anybody. For the past few months, I have been investigating whether we can leverage this to do something exciting in Fedora Workstation. Particularly for those who might be building applications on Fedora that would eventually be deployed on RHEL.

While trying to figure out how the developers.redhat.com website and its associated infrastructure works, I discovered that its source code is actually available on GitHub. Sadly, my ineptitude with server-side applications and things like JBoss, Ruby, etc. meant that it wasn’t enough for me to write code that could interface with it. Luckily, the developers were kind enough to answer my questions, and now I know enough to write a C program that can download a free RHEL ISO.

The code is here: download-rhel.c.

I’ll leave it here in the hope that some unknown Internet traveller might find it useful. As for Fedora Workstation, I’ll let you know if we manage to come up with something concrete. 😉


Ubuntu GNOME Shell in Artful: Day 12

We’ll focus today on our advanced user base. We, of course, try to keep our default user experience as comprehensible as possible for the wider public, but we want as well to think about our more technical users by fine tuning the experience… and all of this, obviously, while changing our default session to use GNOME Shell. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 12: Alt-tab behavior for advanced users.

Some early feedback that we got (probably from people used to Unity or other desktop environments) is that application switching via Alt-Tab isn’t something completely natural to them. However, we still think that this model fits better in general; this was a common shared experience between both GNOME Shell and Unity. People who disagree can still install one of the many GNOME Shell extensions for this.

When digging a little bit more, we see that the typical class of users complaining about that model is power users, who have multiple windows of the same application (typically terminals) and want to switch quickly between the last 2 windows of either:

  • the current application
  • the focused window of the last 2 applications

The first case is covered by [Alt] + [Key above tab] (and that won’t change even for ex-Unity users)1. However the second case isn’t.

That could lead to a frustrating experience if you have a window (typically a browser) sitting in the background for reading documentation, with a terminal on top. If you want to quickly switch back and forth to your terminal (having multiple windows open), you end up with:

Note that I started from one terminal window and a partially covered browser window, and ended up, after 2 quick alt-tabs, with two terminal windows covering the browser application.

We wanted to experiment again with quick alt-tab. Quick alt-tab is alt-tabbing before the switcher appears (the switcher doesn’t appear right away, to avoid flickering). In that case, we can switch between the last focused windows of the last 2 applications. In the previous example, that would put us back in the initial state. However, if you wait long enough for the switcher to be displayed, you are in “application mode”, where you switch between all applications, and the default (if you don’t go into the thumbnail preview) is to raise all windows of that application. This forced us to increase slightly the delay before the switcher appears.

That was the default in our previous user experience, and we didn’t receive bug reports about that behavior, meaning that it seems to fit both power users and more traditional users (who mostly have one window per application, and so are not impacted, or who don’t use quick alt-tab but the application switcher itself, which doesn’t change behavior). We proposed a patch and opened a discussion upstream to see how we can converge on this idea, which might evolve and be refined in a later release to be restricted to terminals only, which is where the discussion seems to be heading. For now, as usual, this change is limited to the ubuntu session and doesn’t impact the vanilla GNOME one.

Here is the result:

Another slight inconsistency was [Alt] + [key above tab] in the switcher itself. Some GNOME Shell releases ago, going into the window preview mode in the switcher let you select a particular window instance, but still raised all the other windows of the selected application; the selected window was just the top-most one. Later, this behavior was changed to only raise the selected window.

While using [Alt] + [Tab] followed by [Alt] + [key above tab] to directly select the second window of the current app made sense in the first approach (as, after all, selecting the first window had the same effect as selecting the whole app), now that only the selected window is raised, it makes sense to select the first window of the current application on the initial key press. Furthermore, that’s already the behavior of pressing the [down] arrow key in the alt-tab switcher. For Ubuntu Artful, this behavior is likewise only available in the ubuntu session; however, it seems that upstream is considering the patch and you might see it in the next GNOME release.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

We hope to find a way to satisfy both advanced and casual users, striking the right balance between the 2 use cases. Focusing on a wider audience doesn’t necessarily mean we can’t make the logical flow compatible with other types of users.


  1. Note that I didn’t write [Alt] + [~] because any sane keyboard layout would have a ² key above tab. :)

September 12, 2017

Attended Flock 2017

Two weeks ago, I had the pleasure to attend Flock 2017, the annual Fedora contributor conference. It moves between North America and Europe, and after Krakow, Poland last year it took place in Hyannis, Massachusetts.

The conference started with the traditional keynote by Matthew Miller on the state of the Fedora Project. Matthew does a lot of data mining to create interesting statistics about how the project is doing. The keynote is an opportunity to share it with the public.

The Fedora user base is still growing as you can see on the chart of IP connections to Fedora update servers. Fedora 26 exceeded F25 just before Flock:


Here are also the geologic eras of Fedora, as Matthew calls them. As you can see, there is still a decent number of very old, unsupported Fedora installations which are still alive:


It’s a pity that Matthew didn’t include the slide with ISO download shares of Fedora editions and spins. But last time he did, Fedora Workstation amounted to ~80% of all ISO downloads.

But by far the most popular part of the project is EPEL. Just look at its number of IP connections compared to all Fedora editions:

[Screenshot: EPEL IP connections compared to all Fedora editions]

Which brings me to another interesting talk I attended: EPEL State of the Union by Fedora Project veteran Stephen Smoogen. As a Fedora packager I also maintain a couple of packages for EPEL, so it was interesting to hear how this successful sub-project is doing.

There were not many desktop-related talks this year. No “Status of Fedora Workstation” any more; the conference was very focused on modularization and infrastructure. One of the few desktop talks was “Set up your own Atomic Workstation” by Owen Taylor, who is experimenting with distributing and running Fedora Workstation as an atomic OS, and Patrick Uiterwijk, who has been running it on his machine for a year or so (he gave a similar talk last year). Wanna try it yourself? Check out https://pagure.io/workstation-ostree-config

Although I didn’t attend the talk about secondary architectures by Dan Horák, we ended up talking and I was very happy to learn that the secondary arch team is doing automated builds of Firefox Nightly to catch problems early. That’s great news for us, because every major release of Firefox consumes a lot of our time on secondary architectures. I asked Dan if they could do the same with WebKitGTK+, because it’s a very similar case, and it looks like they will!

Several months ago David Labský created a device called Fedorator as his bachelor thesis, supervised by Fedora contributor and Fedora badge champion Miro Hrončok. The device lets you create a bootable USB stick with a Fedora edition of your choice. It’s Raspberry Pi-based and has a touchscreen. The design is open source and you can assemble it yourself. Two months ago I got the idea to get David to Flock, buy components, and assemble a dozen fedorators that Fedora ambassadors could take home to use at local events. The result was a session at Flock where participants indeed assembled a dozen fedorators. I only provided the idea and connected David with the right people; it wouldn’t have been possible without the help of Brian Exelbierd, Paul Frields and others who arranged a budget, bought components, etc.

[Photo: the Fedorator assembly session at Flock]

I also did have a session, but unfortunately it was a complete failure 😦 I coordinate the Fedora Workstation User’s Guide project, whose goal is to produce a printed guidebook for new users. We’ve had a Czech version for the last two years and we just finished the English one. I wanted to work on content changes for the next release and help people start versions translated into their languages. Unfortunately my session was scheduled at 6pm on the last day, when everyone was ready for dinner or even leaving the conference. It also overlapped with the docs session, which was attended by the people I knew were interested.

In the end, not a single person showed up at my session, which is a new personal record. I’ve done dozens of talks and sessions at conferences, but zero audience was a new experience.

Anyway, if you’d like to produce a handbook in your language to use at booths and to spread the word about Fedora, check the project on Pagure. As I said, the 2017 release is out and will only receive bug fixes; the content is final and thus safe to translate.

Although my session was not really a success, I’m still glad I could attend the conference. I had several hallway conversations about the project and countless other interesting conversations, learned new things, and caught up with Fedora friends.


Red Hat Graphics team looking for another engineer

So our graphics team is looking for a new Senior Software Engineer to help with our AMD GPU support, including GPU compute. This is a great opportunity to join a versatile and top-notch development team that plays a crucial role in making sure Linux has up-to-date, working graphics support and that is deeply involved in most major new developments in Linux graphics.

Also, a piece of advice when you read the job advertisement: remember that it is very rare for anyone to tick all the boxes in the requirement list, so don’t hesitate to apply just because you don’t fit the description and requirements perfectly. For example, even if you are more junior in terms of years, you could still be a great candidate if you participated in GPU-related Google Summer of Code projects or contributed as a community member. And for this position we are open to candidates from around the globe interested in working remotely, although, as always, being willing or interested in joining one of our development offices in Boston (USA), Brisbane (Australia) or Brno (Czech Republic) is of course a plus.

So please check out the job advertisement for Senior Software Engineer and see if it could be your chance to join the world’s premier open source company.

September 11, 2017

tl;dr: You need an application icon of at least 64×64 in size

At the moment the appstream-builder in Fedora requires a 48x48px application icon to be included in the AppStream metadata. I’m sure it’s no surprise that 48×48 padded to 64×64 and then interpolated up to 128×128 (for HiDPI screens) looks pretty bad. For Fedora 28 and higher I’m going to raise the minimum icon size to 64×64 which I hope people realize is actually a really low bar.

For Fedora 29 I think 128×128 would be a good minimum. From my point of view the best applications in the software center already ship large icons, and the applications with tiny icons are usually of poor quality, buggy, or just unmaintained upstream. I think it’s fine for a software center to do the equivalent of “you must be this high to ride” and if we didn’t keep asking more of upstreams we’d still be in a world with no translations, no release information and no screenshots.

Also note, applications don’t have to do this; it’s not like they’re going to fall out of Fedora entirely, as they’re still installable on the CLI using DNF, although I agree this will impact the number of people installing and using a specific application. Comments welcome.

A tale of three build systems

TL;DR

While we’re still 30 seconds behind the hand-written make build, I would totally use either of the two (autotools or Meson) over hand-written Makefiles.

As you might have noticed, meson is the new kid on the block. Step by step, I am currently converting some projects to it, starting with Shotwell. Since Shotwell only “recently” became an autotools project, you may ask why. Shotwell had a hand-written makefile system, which made some tasks that would have been incredibly easy with autotools, such as Mallard documentation handling, more complicated than they should be. Since autotools provides all the nice features that you want for your GNOME environment, it made sense to leverage that.

Number games

Here are some numbers from the various transition phases. All taken on my i5 X201 laptop.

Conditions:

  • We build from a fresh git checkout (or git clean -dxf)
  • All builds are generated on the same machine with make -j $(($(nproc) + 2))
  • In the case of Meson, the ccache cache was emptied beforehand

Shotwell 0.22

Configure:
real	0m0,011s
user	0m0,004s
sys	0m0,004s

Compile:
real	5m15,892s
user	17m30,392s
sys	1m10,984s

Shotwell 0.23.0

Shotwell’s makefile had a subtle rule issue that caused the plugins to be compiled several times over with valac. It was also calling pkg-config on each compiler call. After fixing that, we get these numbers for the compile step:

real	2m1,760s
user	6m31,788s
sys	0m26,788s

Shotwell master

Autotools

autogen.sh (including configure run):

real	0m32,315s
user	0m26,900s
sys	0m0,984s

Compiling:

real	2m25,164s
user	7m47,380s
sys	0m29,028s

Meson

mkdir build && meson build

real	0m1,529s
user	0m1,056s
sys	0m0,304s

ninja -C build

real	2m9,369s
user	7m17,276s
sys	0m33,660s

 

Back to school

I’ve been teaching at the University of Strasbourg since spring 2011, first at the Institute of Technology where I studied, later at the Faculty of Computer science. Today was my first day for the fall 2017 semester. This year I’ll be teaching Algorithms and data structures (C++) to third year math students, Development techniques (a bit meta, about VCS, compilation, debuggers…) to second year CS students and Introduction to Web programming (HTML, CSS, JavaScript) to first year CS students.

People often get confused when I tell them that I teach at the University, but then I tell them it’s not my job. I already have a full time job, teaching is something I do on the side.

Why do you do it then if it’s not your job? Oh, there must be good money in it!

Well, sure, I’m paid to do it. Compared to the amount of work and time it takes, it’s really not that much, though. In fact, if you’re only thinking of doing it because of the money, I’d recommend you find something else. Why do I do it then? Because I care. Because I think it’s important that the people I’ll be working with tomorrow get a quality education, and because I think I’m good at it (and from the feedback I’ve received so far, it seems I’m right).

Whenever I think about teaching, I always go back to the great Taylor Mali who tells us What teachers make.

If you’re not familiar with his work, I recommend you fix that as soon as possible. Many of his poems are available on Youtube.

Now here’s to a great year for all of you going back to school!

Containers 101

The term "containers" became popular recently, thanks to Docker. However, the idea of containers has been around for a long time, through things like Solaris Zones, Linux Containers, etc. (even though the underlying implementations are different). In this post, I try to give a small overview of the containers ecosystem (as it stands in 2017), from my perspective.

This post is written in response to a question by hacker extraordinaire Varun on what one should know about Containers as of today. Though the document is mostly generic, some parts are India-specific, which I have highlighted clearly. Please mention in the comments if there is anything else that should have been covered, if I have made any mistakes, or if you have any opinions.

So, what exactly are Containers?

Containers are a unit of packaging and deployment that guarantees repeatability and isolation. Let us see what each part of that sentence means.

Containers are a packaging tool like RPMs or EARs, in the sense that they offer you a way to bundle up your binaries (or sources, in the case of interpreted languages). But instead of merely archiving your sources, Containers also provide a way to deploy your archive, repeatably.

Anyone who has done packaging knows how much pain dependency hell can cause. For example, an application A needs a library L at version 0.1, whereas another application B needs the same library L at version 0.3. Just to screw up the life of packagers, versions 0.1 and 0.3 may conflict with each other and may not co-exist on a system, even in different installation paths. Containerising puts each of these applications, A and B, into its own bundle, with its own library dependencies. However, the real power of containerising is that each application gets an isolated view, as if it were running in a private environment, so L 0.1 and L 0.3 never share any runtime data.


One may be reminded of Virtual Machines (VMs) while reading the above text. VMs also solve the isolation problem, but they are very heavy. The fundamental difference between a VM and a Container is that a VM virtualizes/abstracts the hardware/operating system and gives you a machine abstraction, while a Container virtualizes/abstracts an application of your choice. Containers are thus very lightweight and far more approachable.

The Ecosystem

Docker is the most used container technology today. There are other container runtimes, such as rkt, too, and there is an Open Containers Initiative to create standards for container runtimes. All these container runtimes make use of Linux kernel features, especially cgroups, to provide process isolation. Microsoft has been making a lot of effort for quite some time now to support containers natively in the Windows kernel, as part of their Azure cloud offering.

Container orchestration is a way of deploying different containers onto a bunch of machines. While Docker is arguably the champion of container runtimes, Kubernetes is unarguably the King/Queen of container orchestration. Google had been using containers in production since long before they became fashionable; in fact, the first patch adding cgroups support to the Linux kernel was submitted to LKML by Google as far back as 2006. Google had (and still has) a large-scale cluster management system named Borg which deployed containers (not Docker containers) across the humongous Google cloud farm. Kubernetes is an open source evolution of Borg, supporting Docker containers natively. Docker Swarm is an attempt by Docker (the company behind the Docker project) to achieve container orchestration across machines, but in my limited experience it is simply no competition for Kubernetes in terms of quality, documentation or feature coverage.

In addition to these, there are some poorly implemented, company-specific tools that try to emulate Kubernetes, but these are mostly technical debt, and it is wise (imho) for companies to ditch such efforts and move to open projects backed by companies like Google, Red Hat and Microsoft. A distinguished engineer once told me, “There is no compression algorithm for experience”, and there is no need for us to repeat the mistakes these companies made decades ago. If you are a startup focusing on solving a user problem, you should focus on your business problem; container orchestration software should be the last thing you need to implement.

Kubernetes, though initially a Google project, has now attracted a lot of contributors from a variety of companies, such as Red Hat, Microsoft, etc. Red Hat has built OpenShift, a platform on top of Kubernetes that provides a lot of useful features such as pipelines, blue-green deployments, etc. They even offer a hosted version. Tectonic (also on top of Kubernetes) by CoreOS is another big player in this ecosystem, at least in terms of developer mindshare.

SUSE has recently come up with the Kubic project for containers (though I have not played with it myself).

Microsoft has hired some high-profile names in the container ecosystem (including people like Brendan Burns and Jess Frazelle) to work on the Kubernetes + Azure cloud. Azure is definitely way ahead of Google in India when it comes to the cloud business. Their pricing page is localised for India, while Google does not even support the Indian currency yet and charges in USD (leading to jokes like the oil/dollar conspiracy among the Indian startup ecosystem ;) ). AWS and Azure definitely have a bigger developer mindshare in India than Google Cloud Platform (as of 2017).

The founding team of Kubernetes (Xooglers) has started a company named Heptio. While I have no doubts about their engineering prowess, I suspect relying on such companies may be risky for startups in India (lack of same-timezone support, etc.). If you are in the west, these options (and others, such as Rancher) may be interesting.

Kubernetes Basics

In Kubernetes, the unit of deployment is a Pod. A Pod is merely a collection of Docker containers that will always be deployed together. For example, if your application is an API server that checks a Redis cache before hitting the database for each request, you create a Pod with two containers, an API server container and a Redis container, and you deploy them together.

Kubernetes refers to an umbrella of projects that run on a cloud to manage that cloud. It has various components, such as an API server to interact with the Kubernetes system, an agent named kubelet that runs on each machine in the cloud, a fluentd-type daemon to accumulate logs from the various containers and provide a single point of access, a web dashboard, a CLI tool named kubectl to perform various operations, etc. In addition to these Kubernetes-specific components, there are other services, such as the distributed key-value store etcd (originally from CoreOS), that you need to set up a basic Kubernetes cluster. However, if you are a small company, it’ll be wise to use GKE, Azure hosting or OpenShift hosting instead of deploying your own Kubernetes system managed by your own admins. It is not worth the hassle.

If you want to play with Kubernetes on your development laptop (unless you can afford to treat production as your test box), there is a tool named minikube to help you with that. If you are an application developer considering dockerizing and deploying your application, then minikube is definitely the best place to start.

There are quite a few Kubernetes meetups happening all around the world, and visiting some of them may be enlightening. The webinar series by Janakiram was good, but it is a little too long for my taste and I lost interest halfway. The persistent ones among you may find it very useful.

Docker Compose

One of the tools from the Docker project that I love a lot is the handy Docker Compose. It is a tool to work with multiple containers; in a sense it is somewhat like Kubernetes Pods, but without having to install/manage the heavyweight Kubernetes ecosystem. I use Docker Compose extensively in CI, where it is the perfect fit for end-to-end testing of a web stack if your sources are in a monolithic repository. In your CI system, you can bring up all your components (say, an API server, a database, a front-end Node server) and perform end-to-end testing (say, via Selenium). In fact, I cannot fathom how I did CI before docker-compose (just like I cannot fathom how I used CVS before git, etc.)

AWS

No blog post on cloud technologies would be complete without mentioning the 800-pound gorilla, Amazon Web Services. Amazon supports containers natively. You can deploy either single-container or multi-container images via Amazon Beanstalk. It is very similar to Google App Engine (if you have used it). Beanstalk is a PaaS offering: it takes a container image and scales it automagically depending on various factors (such as CPU usage, HTTP usage, etc.). I’ve run Beanstalk and am very satisfied with it (perhaps not as much as with App Engine, though). It is very reliable, performant and scales well (tested for a few hundred users in my limited experience).

For larger workloads and those who want more control, Amazon offers the Elastic Container Service. You can create a bunch of EC2 instances and a bunch of containers, and ask ECS to run those containers on those VMs in the way you prefer. This, however, locks you into the AWS platform (unlike k8s).

Neither Beanstalk nor ECS costs anything extra beyond the price of the VMs, which you already pay.

I, however, wish that Amazon would start supporting Kubernetes natively. There are other ways to make use of Kubernetes on AWS. The most enterprisey is probably Tectonic by CoreOS, but we also have projects like kube-aws and kops.

Conclusion:

If you have actually read until this point, thanks a lot :-) I could have written in a little more detail about the nuts and bolts of container technology, but I believe that this post, as is, will be good material for a 101-type introduction. Also, there are people with far more working knowledge than me who are better equipped to write about the details, so I have left it as an exercise for the reader to find such talks, blogs and books :)

Celebrate your SFD event this Saturday! Register now!

web-banner-chat-register

The Digital Freedom Foundation is very happy to announce the fourteenth edition of Software Freedom Day, which will be celebrated this coming Saturday, September 16, 2017. If you haven’t registered your team yet, it’s time to do it now!

So as usual registering an SFD event is as simple as 1-2-3:
1. Create a wiki account if you don’t already have one
2. Create a wiki page in your area under this page
3. Go to the registration page to register and let us know exactly who and where you are.

At the end of all this you’ll appear on the SFD 2017 Map here with all the other teams and be easy to spot.

We want to thank our usual sponsors Google, Linode, Canonical, the FSF and FSFE. We would also like to thank Freiheit technologies, a software company based in Hamburg, Germany, for continuing to support us. Full details of our sponsors are available at this page.

Back to SFD registration: we have an exhaustive StartGuide here for newcomers, and for others who need help, the SFD-Discuss mailing list is probably the best place to get prompt support.

So get ready to celebrate and happy preparations to all!


September 10, 2017

Coordinating a FWD event

FWD stands for Fedora Women’s Day, and our main goal is to attract more women to the FLOSS world through Fedora and GNOME.

One of my duties in my country is teaching with Linux, and I have realised that only a few women are interested in IT, and of those I found, even fewer are interested in using Linux for IT. In these pictures of my groups at the university, you can see how few women we have:

Even though I have interacted with other research groups at the university, I only found two or three more.

Thanks to invitations to speak at other universities, I was able to get in touch with other enthusiastic and smart women IT students. And, after many months of effort, we are going to present some of the work we have done with Fedora and GNOME, and prove to others that we can work in IT using these Linux technologies:

Thanks to the Fedora Diversity Team for trusting me with the organization of the event in Peru, and special thanks to Chhavi for her patience in coordinating the design 🙂

The event is scheduled for September 30th at 08:00 am at PUCP LabV207, including a coffee break sponsored by Fedora. I am glad that, so far, we have 32 people going and 113 interested in the event 🙂

Thanks as well to all the people of the local community who are helping me spread the word about the event. Thanks Giohanny, Solanch, Lizbeth, Azul, Sheyla, Chhavi and Geny for this! 😀



September 09, 2017

GtkSourceView fundraising!

I’m launching a fundraising for GtkSourceView!

If you don’t know what GtkSourceView is, it’s a widely used library for text editors and IDEs (or text editing in general). For example on Debian, more than 50 applications rely on GtkSourceView, including gedit and GNOME Builder.

What fewer people know is that GtkSourceView has been developed almost entirely by volunteers, unpaid, except for a few Google Summer of Code projects. With the fundraising, that will hopefully change, bringing the library to the next level!

Go to the fundraising on Liberapay for more information.

Thanks!

WebDriver support in WebKitGTK+ 2.18

WebDriver is an automation API to control a web browser. It allows you to create automated tests for web applications independently of the browser and platform. WebKitGTK+ 2.18, which will be released next week, includes an initial implementation of the WebDriver specification.

WebDriver in WebKitGTK+

There’s a new process (WebKitWebDriver) that works as the server, processing clients’ requests to spawn and control the web browser. The WebKitGTK+ driver is not tied to any specific browser; it can be used with any WebKitGTK+-based browser, but it uses MiniBrowser as the default. The driver uses the same remote controlling protocol as the remote inspector to communicate with and control the web browser instance. The implementation is not complete yet, but it’s enough for what many users need.

The clients

The web application tests are the clients of the WebDriver server. The Selenium project provides APIs for different languages (Java, Python, Ruby, etc.) to write the tests. Python is the only language supported by WebKitGTK+ for now. It’s not yet upstream, but we hope it will be integrated soon. In the meantime you can use our fork on GitHub. Let’s see an example to understand how it works and what we can do.

from selenium import webdriver

# Create a WebKitGTK driver instance. It spawns WebKitWebDriver 
# process automatically that will launch MiniBrowser.
wkgtk = webdriver.WebKitGTK()

# Let's load the WebKitGTK+ website.
wkgtk.get("https://www.webkitgtk.org")

# Find the GNOME link.
gnome = wkgtk.find_element_by_partial_link_text("GNOME")

# Click on the link. 
gnome.click()

# Find the search form. 
search = wkgtk.find_element_by_id("searchform")

# Find the first input element in the search form.
text_field = search.find_element_by_tag_name("input")

# Type epiphany in the search field and submit.
text_field.send_keys("epiphany")
text_field.submit()

# Let's count the links in the contents div to check we got results.
contents = wkgtk.find_element_by_class_name("content")
links = contents.find_elements_by_tag_name("a")
assert len(links) > 0

# Quit the driver. The session is closed so MiniBrowser 
# will be closed and then WebKitWebDriver process finishes.
wkgtk.quit()

Note that this is just an example to show how to write a test and what kinds of things you can do. There are better ways to achieve the same results, and it depends on the current source of public websites, so it might not work in the future.

Web browsers / applications

As I said before, the WebKitWebDriver process supports any WebKitGTK+-based browser, but that doesn’t mean all browsers can automatically be controlled by automation (that would be scary). WebKitGTK+ 2.18 also provides new API for applications to support automation.

  • First of all, the application has to explicitly enable automation using webkit_web_context_set_automation_allowed(). It’s important to know that the WebKitGTK+ API doesn’t allow enabling automation in several WebKitWebContexts at the same time. The driver will spawn the application when a new session is requested, so the application should enable automation at startup. It’s recommended that applications add a new command line option to enable automation, and only enable it when provided.
  • After launching the application the driver will request the browser to create a new automation session. The signal “automation-started” will be emitted in the context to notify the application that a new session has been created. If automation is not allowed in the context, the session won’t be created and the signal won’t be emitted either.
  • A WebKitAutomationSession object is passed as parameter to the “automation-started” signal. This can be used to provide information about the application (name and version) to the driver that will match them with what the client requires accepting or rejecting the session request.
  • The WebKitAutomationSession will emit the signal “create-web-view” every time the driver needs to create a new web view. The application can then create a new window or tab containing the new web view, which should be returned by the signal. This signal will always be emitted, even if the browser already has an initial web view open; in that case it’s recommended to return the existing empty web view.
  • Web views are also automation-aware: similar to ephemeral web views, web views that allow automation should be created with the constructor property “is-controlled-by-automation” enabled (see the sketch below).
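
To make those steps concrete, here is a minimal sketch of the browser side. This is not MiniBrowser’s or Epiphany’s actual code; the --automation command line flag and the elided window packing are assumptions:

#include <webkit2/webkit2.h>

static WebKitWebView *
on_create_web_view (WebKitAutomationSession *session,
                    gpointer                 user_data)
{
    /* Web views handed over to automation must be constructed with
     * the "is-controlled-by-automation" property enabled. */
    WebKitWebView *view = g_object_new (WEBKIT_TYPE_WEB_VIEW,
                                        "is-controlled-by-automation", TRUE,
                                        NULL);

    /* ... pack the view into a new window, add visual feedback that
     * the window is automated, and show it ... */

    return view;
}

static void
on_automation_started (WebKitWebContext        *context,
                       WebKitAutomationSession *session,
                       gpointer                 user_data)
{
    g_signal_connect (session, "create-web-view",
                      G_CALLBACK (on_create_web_view), NULL);
}

/* At startup, and only when the hypothetical --automation flag
 * was given on the command line: */
WebKitWebContext *context = webkit_web_context_get_default ();
webkit_web_context_set_automation_allowed (context, TRUE);
g_signal_connect (context, "automation-started",
                  G_CALLBACK (on_automation_started), NULL);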

This is the new API that applications need to implement to support WebDriver. It’s designed to be as safe as possible, but there are many things that can’t be controlled by WebKitGTK+, so we have several recommendations for applications that want to support automation:

  • Add a way to enable automation in your application at startup, like a command line option, that is disabled by default. Never allow automation in a normal application instance.
  • Enabling automation is not the only thing the application should do, so add an automation mode to your application.
  • Add visual feedback when in automation mode, like changing the theme, the window title or whatever that makes clear that a window or instance of the application is controllable by automation.
  • Add a message to explain that the window is being controlled by automation and the user is not expected to use it.
  • Use ephemeral web views in automation mode.
  • Use a temporary user profile in automation mode; do not allow automation to change the history, bookmarks, etc. of an existing user.
  • Do not load any homepage in automation mode, just keep an empty web view (about:blank) that can be used when a new web view is requested by automation.

The WebKitGTK client driver

Applications need to implement the new automation API to support WebDriver, but the WebKitWebDriver process doesn’t know how to launch the browsers. That information should be provided by the client using the WebKitGTKOptions object. The driver constructor can receive an instance of a WebKitGTKOptions object, with the browser information and other options. Let’s see how it works with an example to launch epiphany:

from selenium import webdriver
from selenium.webdriver import WebKitGTKOptions

options = WebKitGTKOptions()
options.browser_executable_path = "/usr/bin/epiphany"
options.add_browser_argument("--automation-mode")
epiphany = webdriver.WebKitGTK(browser_options=options)

Again, this is just an example; Epiphany doesn’t even support WebDriver yet. Browsers or applications could create their own drivers on top of the WebKitGTK+ one to make it more convenient to use.

from selenium import webdriver
epiphany = webdriver.Epiphany()

Plans

During the next release cycle, we plan to do the following tasks:

  • Complete the implementation: add support for all commands in the spec and complete the ones that are partially supported now.
  • Add support for running the WPT WebDriver tests in the WebKit bots.
  • Add a WebKitGTK driver implementation for other languages in Selenium.
  • Add support for automation in Epiphany.
  • Add WebDriver support to WPE/dyz.

GUADEC 2017

It’s been a full month since GUADEC, and I’m only now getting this blog post off my TODO list. :-)

GUADEC 2017 was held in Manchester, UK. I was really excited about it, because the UK is a country I had been looking forward to visiting.

Conference

We had a great selection of talks at GUADEC this year, and I enjoyed the ones I attended a lot. Here are a few worth mentioning:

  • The GNOME Way by Allan Day – We’re principled.
  • The History of GNOME by Jonathan Blandford – In GNOME’s 20th year, it’s good to look back at what we’ve done in the past 20 years and know our history.
  • Please use GNOME Web by Michael Catanzaro – Actually, I missed this talk that day, but I viewed it online when I arrived home. I was a Chrome user; I enjoyed Google account sync and the plugins a lot. I tried GNOME Web a few months ago, missed those Chrome features, and switched back to Chrome. But this time, I was hugely surprised by the overall experience that GNOME Web provides. As the wiki mentions, it provides a simple, clean, beautiful view of the web. And I can’t agree more. It does its job well.
  • Resurrecting dinosaurs, what can possibly go wrong – It’s good to see such a talk and it makes me think.
  • Newcomer Genesis Evolution by Carlos Soriano and Bastian Ilsø – In 2014, it was the GNOME Love project that helped me get started with GNOME. It just worked for me. It’s rather important for a community to attract more newcomers and get them involved, so I’m glad to see Carlos and Bastian working hard on this to make newcomers’ lives much easier.
  • Payments and donations in GNOME Software by Richard Hughes – It seems like a small feature of GNOME Software, but Richard mentioned many of the concerns he ran into while working on it. It was a good discussion overall.
  • Lightning Talks – Lightning talks are always a lot of fun.

Unconference

I like the Unconference Days as usual. They provide a chance for hackers to gather and get something done, and they can be very productive. I discussed some Logs issues with David and investigated an issue in PackageKit’s zypp backend. I also joined the hike to the Peak District National Park. The views are absolutely gorgeous and I enjoyed it a lot.

Social Events

As usual, we had lots of social events this year, which is always good. I enjoyed the 20th birthday party the most. It’s great that we had a chance to party together, celebrating what we’ve accomplished as a whole.

Here are some pictures, enjoy :-)

In the end, I’d like to thank the GNOME Foundation for sponsoring my accommodation during GUADEC and my employer SUSE for sponsoring my time and this trip. And many thanks to the GUADEC team and everyone else who made it happen.

 

How glib-rs works, part 3: Boxed types

(First part of the series, with index to all the articles)

Now let's get on and see how glib-rs handles boxed types.

Boxed types?

Let's say you are given a sealed cardboard box with something, but you can't know what's inside. You can just pass it on to someone else, or burn it. And since computers are magic duplication machines, you may want to copy the box and its contents... and maybe some day you will get around to opening it.

That's a boxed type. You get a pointer to something, who knows what's inside. You can just pass it on to someone else, burn it — I mean, free it — or since computers are magic, copy the pointer and whatever it points to.

That's exactly the API for boxed types.

typedef gpointer (*GBoxedCopyFunc) (gpointer boxed);
typedef void (*GBoxedFreeFunc) (gpointer boxed);

GType g_boxed_type_register_static (const gchar   *name,
                                    GBoxedCopyFunc boxed_copy,
                                    GBoxedFreeFunc boxed_free);

Simple copying, simple freeing

Imagine you have a color...

typedef struct {
    guchar r;
    guchar g;
    guchar b;
} Color;

If you had a pointer to a Color, how would you copy it? Easy:

Color *copy_color (Color *a)
{
    Color *b = g_new (Color, 1);
    *b = *a;
    return b;
}

That is, allocate a new Color, and essentially memcpy() the contents.

And to free it? A simple g_free() works — there are no internal things that need to be freed individually.
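
With the copy and free functions in hand, registration is a single call to the API above. A minimal sketch (the GType caching idiom here is my addition, not code from the post):

GType
color_get_type (void)
{
    static GType type = 0;

    /* Register Color once: copy with copy_color(); since there are
     * no internal pointers, plain g_free() is enough for freeing. */
    if (type == 0)
        type = g_boxed_type_register_static ("Color",
                                             (GBoxedCopyFunc) copy_color,
                                             (GBoxedFreeFunc) g_free);

    return type;
}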

Complex copying, complex freeing

And if we had a color with a name?

typedef struct {
    guchar r;
    guchar g;
    guchar b;
    char *name;
} ColorWithName;

We can't just do *b = *a here, as we actually need to copy the string name. Okay:

ColorWithName *copy_color_with_name (ColorWithName *a)
{
    ColorWithName *b = g_new (ColorWithName, 1);
    b->r = a->r;
    b->g = a->g;
    b->b = a->b;
    b->name = g_strdup (a->name);
    return b;
}

The corresponding free_color_with_name() would g_free(b->name) and then g_free(b), of course.
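
Spelled out, that is simply:

void
free_color_with_name (ColorWithName *b)
{
    /* Free the internal string first, then the struct itself. */
    g_free (b->name);
    g_free (b);
}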

Glib-rs and boxed types

Let's look at this by parts. First, a BoxedMemoryManager trait to define the basic API to manage the memory of boxed types. This is what defines the copy and free functions, like above.

pub trait BoxedMemoryManager<T>: 'static {
    unsafe fn copy(ptr: *const T) -> *mut T;
    unsafe fn free(ptr: *mut T);
}

Second, the actual representation of a Boxed type:

pub struct Boxed<T: 'static, MM: BoxedMemoryManager<T>> {
    inner: AnyBox<T>,
    _dummy: PhantomData<MM>,
}

This struct is generic over T, the actual type that we will be wrapping, and MM, something which must implement the BoxedMemoryManager trait.

Inside, it stores inner, an AnyBox, which we will see shortly. The _dummy: PhantomData<MM> is a Rust-ism to indicate that although this struct doesn't actually store a memory manager, it acts as if it does — it does not concern us here.

The actual representation of boxed data

Let's look at that AnyBox that is stored inside a Boxed:

enum AnyBox<T> {
    Native(Box<T>),
    ForeignOwned(*mut T),
    ForeignBorrowed(*mut T),
}

We have three cases:

  • Native(Box<T>) - this boxed value T comes from Rust itself, so we know everything about it!

  • ForeignOwned(*mut T) - this boxed value T came from the outside, but we own it now. We will have to free it when we are done with it.

  • ForeignBorrowed(*mut T) - this boxed value T came from the outside, but we are just borrowing it temporarily: we don't want to free it when we are done with it.

For example, if we look at the implementation of the Drop trait for the Boxed struct, we will indeed see that it calls the BoxedMemoryManager::free() only if we have a ForeignOwned value:

impl<T: 'static, MM: BoxedMemoryManager<T>> Drop for Boxed<T, MM> {
    fn drop(&mut self) {
        unsafe {
            if let AnyBox::ForeignOwned(ptr) = self.inner {
                MM::free(ptr);
            }
        }
    }
}

If we had a Native(Box<T>) value, it means it came from Rust itself, and Rust knows how to Drop its own Box<T> (i.e. a chunk of memory allocated in the heap).

But for external resources, we must tell Rust how to manage them. Again: in the case where the Rust side owns the reference to the external boxed data, we have a ForeignOwned and Drop it by free()ing it; in the case where the Rust side is just borrowing the data temporarily, we have a ForeignBorrowed and don't touch it when we are done.

Copying

When do we have to copy a boxed value? For example, when we transfer from Rust to Glib with full transfer of ownership, i.e. the to_glib_full() pattern that we saw before. This is how that trait method is implemented for Boxed:

impl<'a, T: 'static, MM: BoxedMemoryManager<T>> ToGlibPtr<'a, *const T> for Boxed<T, MM> {
    fn to_glib_full(&self) -> *const T {
        use self::AnyBox::*;
        let ptr = match self.inner {
            Native(ref b) => &**b as *const T,
            ForeignOwned(p) | ForeignBorrowed(p) => p as *const T,
        };
        unsafe { MM::copy(ptr) }
    }
}

See the MM::copy(ptr) in the last line? That's where the copy happens. The lines above just get the appropriate pointer to the data from the AnyBox and cast it.

There is extra boilerplate in boxed.rs which you can look at; it's mostly a bunch of trait implementations to copy the boxed data at the appropriate times (e.g. the FromGlibPtrNone trait), also an implementation of the Deref trait to get to the contents of a Boxed / AnyBox easily, etc. The trait implementations are there just to make it as convenient as possible to handle Boxed types.

Who implements BoxedMemoryManager?

Up to now, we have seen things like the implementation of Drop for Boxed, which uses BoxedMemoryManager::free(), and the implementation of ToGlibPtr which uses ::copy().

But those are just the trait's "abstract" methods, so to speak. What actually implements them?

Glib-rs has a general-purpose macro to wrap Glib types. It can wrap boxed types, shared pointer types, and GObjects. For now we will just look at boxed types.

Glib-rs comes with a macro, glib_wrapper!(), that can be used in different ways. You can use it to automatically write the boilerplate for a boxed type like this:

glib_wrapper! {
    pub struct Color(Boxed<ffi::Color>);

    match fn {
        copy => |ptr| ffi::color_copy(mut_override(ptr)),
        free => |ptr| ffi::color_free(ptr),
        get_type => || ffi::color_get_type(),
    }
}

This expands to an internal glib_boxed_wrapper!() macro that does a few things. We will only look at particularly interesting bits.

First, the macro creates a newtype around a tuple with 1) the actual data type you want to box, and 2) a memory manager. In the example above, the newtype would be called Color, and it would wrap an ffi::Color (say, a C struct).

        pub struct $name(Boxed<$ffi_name, MemoryManager>);

Aha! And that MemoryManager? The macro defines it as a zero-sized type:

        pub struct MemoryManager;

Then it implements the BoxedMemoryManager trait for that MemoryManager struct:

        impl BoxedMemoryManager<$ffi_name> for MemoryManager {
            #[inline]
            unsafe fn copy($copy_arg: *const $ffi_name) -> *mut $ffi_name {
                $copy_expr
            }

            #[inline]
            unsafe fn free($free_arg: *mut $ffi_name) {
                $free_expr
            }
        }

There! This is where the copy/free methods are implemented, based on the bits of code with which you invoked the macro. In the call to glib_wrapper!() we had this:

        copy => |ptr| ffi::color_copy(mut_override(ptr)),
        free => |ptr| ffi::color_free(ptr),

In the impl above, $copy_expr will expand to ffi::color_copy(mut_override(ptr)) and $free_expr will expand to ffi::color_free(ptr), which defines our implementation of a memory manager for our Color boxed type.

Zero-sized what?

Within the macro's definition, let's look again at the definitions of our boxed type and the memory manager object that actually implements the BoxedMemoryManager trait. Here is what the macro would expand to with our Color example:

        pub struct Color(Boxed<ffi::Color, MemoryManager>);

        pub struct MemoryManager;

        impl BoxedMemoryManager<ffi::Color> for MemoryManager {
            unsafe fn copy(...) -> *mut ffi::Color { ... }
            unsafe fn free(...) { ... }
        }

Here, MemoryManager is a zero-sized type. This means it doesn't take up any space in the Color tuple! When a Color is allocated in the heap, it is really as if it contained an ffi::Color (the C struct we are wrapping) and nothing else.

All the knowledge about how to copy/free ffi::Color lives only in the compiler thanks to the trait implementation. When the compiler expands all the macros and monomorphizes all the generic functions, the calls to ffi::color_copy() and ffi::color_free() will be inlined at the appropriate spots. There is no need to have auxiliary structures taking up space in the heap, just to store function pointers to the copy/free functions, or anything like that.

Next up

You may have seen that our example call to glib_wrapper!() also passed in a ffi::color_get_type() function. We haven't talked about how glib-rs wraps Glib's GType, GValue, and all of that. We are getting closer and closer to being able to wrap GObject.

Stay tuned!

September 07, 2017

Initial posts about librsvg's C to Rust conversion

The initial articles about librsvg's conversion to Rust are in my old blog, so they may be a bit hard to find from this new blog. Here is a list of those posts, just so they are easier to find:

Within this new blog, you can look for articles with the librsvg tag.
