24 hours a day, 7 days a week, 365 days per year...

July 05, 2015

2015-07-05 Sunday.

  • Off to a baptism at re:new and a shared lunch which was interesting, back in the afternoon slept on the sofa; pizza dinner, dancing competition on the trampoline, read stories; bed.

Midterm minute of ponderation...

...or the meaning of who you surround yourself with.

The past week has been full of adjustments to my PRs on GitHub, endless discussions with my mentor about what would be the best solution, continuous PR reviews... well, it was a lesson and I'm happy to have undertaken it. Search and the editor still need much refurbishment, though.

For those of you expecting further technical description of what I managed to do over the past week in GTG, stop reading immediately. You will be disappointed at the end and I don't want to do that to you (not on purpose of course). STOP HERE!

Anyway, I warned you.

Now for the remaining (very few brave) ones. What is it that I'm about to compose here? I've been thinking. For a long time. What is this going to be? Hmm, maybe some ambitious (and unsuccessful) product development post aimed at best work practices? Or a sincere confession of my current moods and feelings? 

I'd say it's going to be a bit of both, judge it as you wish.

So in my thoughts, I came across the idea of how the people around me influence me, primarily in the context of my work, that is, in the context of GSoC. I considered my family, girlfriend, friends... all in all, honestly, not so many people in my village, so it was not that difficult.

My statement is this: The environment I work in is essential if I want to be a good and efficient programmer. It is all about having conditions where I can be calm, without stress, and free from a gazillion pieces of (pretty much meaningless) information and obligations.

And after this weekend, being offered some time for cogitation, I realised that I've been lacking these gifts of calmness and convenience in my workplace.

At this point, I'm glad to thank my mentor Izidor for introducing me to Joel Spolsky's blog (or should I say the programmer's bible?). In one of his articles, he mentions exactly what I've been describing above: the working conditions (point 8). And that reinforces my statement even more.

So what is up for me for the next week or even for the rest of the GSoC?
I believe that a significant change in how I organise my working place (a.k.a. my tiny room which is boiling during these summer days!) would give new energy to my activity.
Motivated by passing the midterm evaluation, I am about to try some of the proven work practices I've been brushing up on thanks to the immense amount of sources Izidor's been offering me (currently it's the rigorous Code Complete book :) ).

I'm not going to reveal to anyone yet what my plan is. 
However, the most general and primary goal is best expressed by an English adage:

Early to bed, early to rise, makes a man healthy, wealthy and wise.

That's where I want to start, since I consider my lifestyle quite unsupportive of what I want to do well and what I want to learn well. We'll see what happens, but in the meantime, if you got to this point in your reading, I'd just advise you not to stick with the status quo, but to think outside the box constantly and search for ways to do better at what you do.

I'm trying again, and I hope to become a little bit more organised and effective :)
This summer is an opportunity, so why not seize it!

irc link handling

Back in 1996, the World Wide Web Consortium made a draft specification for how to link to an IRC chatroom. The draft does not seem to have developed further, but it is nevertheless possible to stumble upon these links around the internet now and then. Many other IRC clients support handling these links too, and this week I have enabled Polari to open them.

The links typically take on the following shape:

irc://server.name/channelname

If I wanted to link to the #polari chatroom on the GNOME IRC server, the link could look like so:

irc://irc.gnome.org/polari

Polari can open these links regardless of whether it is running or not. Furthermore, if you open a link to a server you haven’t used before, then Polari will ask you to specify a username to use for that connection.
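For illustration, the parsing step can be sketched in a few lines of Python with the standard library's URL parser. This is a hypothetical sketch of the kind of work involved, not Polari's actual implementation (Polari is written in JavaScript/GJS); the default ports 6667 and 6697 are the conventional plain and TLS IRC ports.

```python
from urllib.parse import urlparse

def parse_irc_url(url):
    """Split an irc:// or ircs:// URL into server, port and room name."""
    parts = urlparse(url)
    if parts.scheme not in ("irc", "ircs"):
        raise ValueError("not an IRC URL: %s" % url)
    server = parts.hostname
    # Fall back to the conventional IRC ports when none is given.
    port = parts.port or (6697 if parts.scheme == "ircs" else 6667)
    # The room is the path component; a missing '#' prefix is implied.
    room = parts.path.lstrip("/")
    if room and not room.startswith("#"):
        room = "#" + room
    return server, port, room

print(parse_irc_url("irc://irc.gnome.org/polari"))
# → ('irc.gnome.org', 6667, '#polari')
```

A real client would additionally have to deal with the corner cases mentioned below, such as nickname targets and passphrases.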

(Screenshot: the new connection dialog prompting for a username)

The bug in question is Bug 728593. I am keeping a solution to it in the wip/bastianilso/irc-url-parsing branch. However, there are a few things I am considering before calling the work in this area done:

  • It would be a good idea to show a notification if Polari is unable to parse the irc link.
  • It might be worth also supporting corner cases of the irc link specification, such as URLs targeting nicknames, URLs containing a passphrase, and so on.
  • It could be cool to have a “Copy link address” menu item per room so you can copy/paste a room’s irc link to other people. The menu item could be placed in the room menu and/or in the right-click menu for each channel.
  • It could be cool if we parsed mentions of rooms (e.g. #polari) in chats as IRC links you could click or copy/paste.

I’ll be off for a small trip for the next four days but will return over the weekend for more Polari bug fixing. (:

Peruvian products: Quinoa, Causa Limeña, Pisco

I decided to write this post because I think it is fair to let the world know that quinoa, Causa Limeña and Pisco are Peruvian products.

We have made efforts to promote our products around the world through the valuable work of one of the authorities of Peruvian food, Gastón Acurio.

Moreover, we have presented our products at events (to mention the latest, Peru Meet & Greet 2015) through books, receipts and campaigns, and we have received recognition from the World Food Programme.

Besides that, during the recent Expo Milan 2015 world food event, these Peruvian products were being marketed by another country as if they were its own.

Therefore, I am just clarifying this situation from my point of view… In some way, I can share some varieties of products that are grown from seeds here and show what we can provide.

In some manner, this pushed me towards the idea of organising the first edition of GNOME America in Peru. This idea was in my mind years ago, but I did not dare to pursue it until I had the opportunity to talk with Max Chun-Hung Huang during GNOME Asia. He encouraged me to do it! It seems huge, but not impossible, so one of the challenges of a lifetime might start now.

Could Cuzco, one of the wonders of the world, be a good place to do it?


Filed under: Events, GNOME, Laikes & Dislaikes :: Entertainment Tagged: Causa Limeña, Julita Inca, Julita Inca Chiroque, Perú, Peruvian Pisco, peruvian products, Pisco, Quinoa, Quinua

Something in last month--Back home

Two weeks ago, I received an email from Gnome-Travel about applying for sponsorship for GUADEC 2015. I talked with my mentor about the things to do to prepare for GUADEC 2015, and I realized that I have many things to deal with. The most important one is to get a passport.

In China, we have to apply for a passport where our household registration is located, not where we currently live. My hometown is Shijiazhuang, China, but right now I live in Changchun, China, pursuing my bachelor's degree. So I had to go back to my hometown to apply for the passport.

It took me almost a whole day to get from my university back home. It's really a long time! I left Changchun at noon on the 20th of June and got back home the next morning. Arriving at the train station in Changchun, what surprised me a lot was the ad on the train. I was amazed because it had never happened before.

The next day, very early in the morning, just before getting off the train, I opened Google Now to see some updates. Once again, I was surprised by the cover on top of the application. It was Father's Day! Yep, it was Father's Day, and I was going back home. I felt so happy that I could get back home to be with my dad on Father's Day.

Finally, I was back in my hometown! And I saw the scene below. What a day!

It was the Dragon Boat Festival in China when I got back home, so I couldn't apply for the passport right away. I had to wait until the 23rd of June, the day after the holiday. Having finished the application for the passport, I had to get back to school the next day.

It was an amazing trip back home!

The passport will be available on the 6th of July, and I have to move on to finish the other things required to attend GUADEC 2015. Good luck to myself.

July 04, 2015

2015-07-04 Saturday.

  • Up lateish, J. returned babes to their abodes left & right. Out to Holkham beach en-masse; lots of wonderful sun, sea, sand, swimming, digging, relaxing, reading - sun-burning and so on. Fish & Chips in the car on the way home, tired.

Did You Actually Read the Lower Court's Decision?

I'm seeing plenty of people, including some non-profit organizations along with the usual punditocracy, opining on the USA Supreme Court's denial of a writ of certiorari in the Oracle v. Google copyright infringement case. And, it's not that I expect everyone in the world to read my blog, but I'm amazed that people who should know better haven't bothered to even read the lower Court's decision, which is de-facto upheld by the Supreme Court's denial to hear the appeal.

I wrote at great length about why the decision isn't actually a decision about whether APIs are copyrightable, and that the decision actually gives us some good clarity with regard to the issue of combined work distribution (i.e., when you distribute your own works with the copyrighted material of others combined into a single program). The basic summary of the blog post I linked to above is simply: The lower Court seemed genuinely confused about whether Google copy-and-pasted code, as the original trial seems to have inappropriately conflated API reimplementation with code cut-and-paste.

No one else has addressed this nuance of the lower Court's decision in the year since the decision came down, and I suspect that's because in our TL;DR 24-hour-news cycle, it's much easier for the pundits and organizations tangentially involved with this issue to get a bunch of press by giving out confusing information.

So, I'm mainly making this blog post to encourage people to go back and read the decision and my blog post about it. I'd be delighted to debate people if they think I misread the decision, but I won't debate you unless you assure me you read the lower Court's decision in its entirety. I think that leaves virtually no one who will. :-/

July 03, 2015

Network management workflow (or, GSoC report #4)

Things are moving faster than I could have expected. Since the last report, I have been able to improve the new Other Locations view so much!

I could spend a few boring paragraphs trying to explain how things work now… but no. I love videos. People love videos. Technology enabled us to make videos, so let’s watch a video!

You know what’s best? It’s really getting into shape for merging. This work also fixed a very annoying bug in Nautilus (though the change itself is happening in Gtk+). Next step is Nautilus!

backwards rendering compatibility

Over the last year I didn’t keep up with the latest GTK versions. With less time to code, installing new software was not my priority, so I did most of my programming on a machine running Ubuntu LTS.

Last week I had some spare time and I installed the latest Fedora. I installed the same Bluefish version (the latest, 2.2.7) but to my surprise some bits didn’t look as good anymore. Bluefish has a sidebar with several plugins, each in a GtkNotebook tab. This normally looks like this:


But on the latest and greatest Fedora it looks like this: instead of showing all the icons, only two are visible, unless you make the sidebar extremely wide:


This makes me wonder if GTK has something like “backwards rendering compatibility testing”: see if applications that were not developed for the latest GTK release still look good and function properly. I have no idea if this could be done in an automatic way, but there are probably brighter people around that do have an idea how to do this.

By the way, I haven’t found a way yet to change my code such that all icons become visible again. If anybody is willing to drop me a hint on how to get all the icons back in the original space, I will be happy :)

On Linux32 chrooted environments

I have a chrooted environment on my 64-bit Fedora 22 machine that I use every now and then to work on a Debian-like 32-bit system, where I might want to do all sorts of things, such as building software for the target system or creating Debian packages. More specifically, today I was trying to build WebKitGTK+ 2.8.3 in there and something weird was happening:

The following CMake snippet, a check on CMAKE_HOST_SYSTEM_PROCESSOR along these lines, was not properly recognizing my 32bit chroot:

if (CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "x86_64")
    ...
endif ()

After some investigation, I found out that CMAKE_HOST_SYSTEM_PROCESSOR relies on the output of uname to determine the type of the CPU, and this is what I was getting if I ran it myself:

(debian32-chroot)mario:~ $ uname -a
Linux moucho 4.0.6-300.fc22.x86_64 #1 SMP Tue Jun 23 13:58:53 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux

Let’s avoid nasty comments about the stupid name of my machine (I’m sure everyone else uses clever names instead), and see what was there: x86_64.
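As an aside, the value CMake picks up can also be inspected from any scripting language; here is a small illustrative Python sketch (unrelated to WebKit's actual build logic) showing the same check a build system might make:

```python
import platform

# platform.machine() reports the same string as `uname -m`, which is
# where CMAKE_HOST_SYSTEM_PROCESSOR gets its value on Linux.
machine = platform.machine()  # e.g. 'x86_64' on the host, 'i686' under a linux32 personality
is_32bit_x86 = machine in ("i386", "i486", "i586", "i686")
print(machine, is_32bit_x86)
```

Running this inside the misbehaving chroot would have printed x86_64 before the personality fix described below, and i686 afterwards.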

That looked wrong to me, so I googled a bit to see what others did about this and, besides finding all sorts of crazy hacks around, I found that in my case the solution was pretty simple just because I am using schroot, a great tool that makes life easier when working with chrooted environments.

Because of that, all I had to do was specify personality=linux32 in the configuration file for my chrooted environment, and that’s it. Just by doing that and re-entering the “jail”, the output was much saner:

(debian32-chroot)mario:~ $ uname -a
Linux moucho 4.0.6-300.fc22.x86_64 #1 SMP Tue Jun 23 13:58:53 UTC 2015
i686 i686 i686 GNU/Linux

And of course, WebKitGTK+ would now recognize and use the right CPU type in the snippet above, and I could “relax” again while watching WebKit build.

Now, for extra reference, this is the content of my schroot configuration file:

$ cat /etc/schroot/chroot.d/00debian32-chroot
description=Debian-like chroot (32 bit)
personality=linux32
...

That is all, hope somebody else will find this useful. It certainly saved my day!

Boxes' thumbnails overhaul

I recently spent quite some time reworking the overall look of Boxes' machine thumbnails. Here is the result.

Boxes' new thumbnails

Stopped boxes

Up until now, Boxes' stopped machines were represented by a black box. It was nice as it represented the idea of a shut-down screen, but it was pretty hard to differentiate a stopped machine from a running one displaying a black screen. This was stated in bug #730258, where Jimmac suggested following this design, in which thumbnails are drawn as gray frames with a medium-sized emblem in their center, using the system-shutdown-symbolic icon to suggest the stopped state.

Boxes' old thumbnail for stopped machines Boxes' new thumbnail for stopped machines
Boxes' thumbnail for stopped machines: old (left) and new (right)

Updating the other thumbnails

Machines under construction used to simply display their thumbnail with a spinner on top. This doesn't change but stopped machines being constructed now display their spinner in a frame, to be consistent with the new thumbnail for stopped machines.

Boxes' old thumbnail for machines being imported Boxes' new thumbnail for machines being imported
Boxes' thumbnail for machines being imported: old (left) and new (right)

The default thumbnail for machines was a big computer-symbolic icon. It has been changed to the new gray frame style, keeping the computer-symbolic icon as the thumbnail's emblem.

Boxes' old default machine thumbnail Boxes' new default machine thumbnail
Boxes' default machine thumbnail: old (left) and new (right)

Thumbnails are now consistent and elegant, and the machine's status is easier to understand.

Working on this feature helped me discover bug #751494: GDMainIconView draws pictures without their last column of pixels.


The way a machine is shown as a favorite has also been revamped. A big heart-shaped icon (the emblem-favorite-symbolic icon) was added to the bottom right corner of the thumbnail, and this was causing multiple problems (see bug #745757):

  • the standard icon to show something is favorited is a star (the starred-symbolic icon),
  • and more importantly, its position was conflicting with the one of the selection checkbox!
Boxes' old emblem for favorite machines Boxes' new emblem for favorite machines
Boxes' emblem for favorite machines: old (left) and new (right)

A machine is now shown as favorited by adding a tiny star to the bottom left corner of its thumbnail.

Unfortunately, problems still exist, as the white star becomes invisible on a white thumbnail (see bug #751478). I tried to solve this problem by making the star cast a shadow, which worked well but required me to implement a blurring function in Boxes' code, adding 100 lines of Vala to an already complex codebase for one tiny feature that has nothing to do with the application's domain; hence this solution hasn't been retained.

The emblem for favorite machines is invisible on a white thumbnail Adding a shadow under the emblem solves this problem
The 'favorites' emblem with and without a shadow

Zeeshan suggested trying to solve this by using the image's energy, as the code to do such a thing already exists. This solution still has to be explored.

Pfmatch, a packet filtering language embedded in Lua

Greets, hackers! I just finished implementing a little embedded language in Lua and wanted to share it with you. First, a bit about the language, then some notes on how it works with Lua to reach the high performance targets of Snabb Switch.

the pfmatch language

Pfmatch is a language designed for filtering, classifying, and dispatching network packets in Lua. Pfmatch is built on the well-known pflang packet filtering language, using the fast pflua compiler for LuaJIT.

Here's an example of a simple pfmatch program that just divides up packets depending on whether they are TCP, UDP, or something else:

match {
   tcp => handle_tcp
   udp => handle_udp
   otherwise => handle_other
}

Unlike pflang filters written for such tools as tcpdump, a pfmatch program can dispatch packets to multiple handlers, potentially destructuring them along the way. In contrast, a pflang filter can only say "yes" or "no" on a packet.

Here's a more complicated example that passes all non-IP traffic, drops all IP traffic that is not going to or coming from certain IP addresses, and calls a handler on the rest of the traffic.

match {
   not ip => forward
   ip src => incoming_ip
   ip dst => outgoing_ip
   otherwise => drop
}

In the example above, the handlers after the arrows (forward, incoming_ip, outgoing_ip, and drop) are Lua functions. The part before the arrow (not ip and so on) is a pflang expression. If the pflang expression matches, its handler will be called with two arguments: the packet data and the length. For example, if the not ip pflang expression is true on the packet, the forward handler will be called.

It's also possible for the handler of an expression to be a sub-match:

match {
   not ip => forward
   ip src => {
      tcp => incoming_tcp(&ip[0], &tcp[0])
      udp => incoming_udp(&ip[0], &udp[0])
      otherwise => incoming_ip(&ip[0])
   }
   ip dst => {
      tcp => outgoing_tcp(&ip[0], &tcp[0])
      udp => outgoing_udp(&ip[0], &udp[0])
      otherwise => outgoing_ip(&ip[0])
   }
   otherwise => drop
}

As you can see, the handlers can also have additional arguments, beyond the implicit packet data and length. In the above example, if not ip doesn't match but ip src does, and then tcp matches as well, the incoming_tcp function will be called with four arguments: the packet data as a uint8_t* pointer, its length in bytes, the offset of byte 0 of the IP header, and the offset of byte 0 of the TCP header. An argument to a handler can be any arithmetic expression of pflang; in this case &ip[0] is actually an extension. More on that later. For language lawyers, check the syntax and semantics over in our source repo.

Thanks especially to my colleague Katerina Barone-Adesi for long backs and forths about the language design; they really made it better. Fistbump!

pfmatch and lua

The challenge of designing pfmatch is to gain expressiveness, compared to writing filters by hand, while not endangering the performance targets of Pflua and Snabb Switch. These days Snabb is on target to give ASIC-driven network appliances a run for their money, so anything we come up with cannot sacrifice speed.

In practice what this means is compile, don't interpret. Using the pflua compiler allows us to generalize the good performance that we have gotten on pflang expressions to a multiple-dispatch scenario. It's a pretty straightforward strategy. Naturally though, the interface with Lua is more complex now, so to understand the performance we should understand the interaction with Lua.

How does one make two languages interoperate, anyway? With pflang it's pretty clear: you compile pflang to a Lua function, and call the Lua function to match on packets. It returns true or false. It's a thin interface. Indeed with pflang and pflua you could just match the clauses in order:

not_ip = pf.compile('not ip')
incoming = pf.compile('ip src')
outgoing = pf.compile('ip dst')

function handle(packet, len)
   if not_ip(packet, len) then return forward(packet, len)
   elseif incoming(packet, len) then return incoming_ip(packet, len)
   elseif outgoing(packet, len) then return outgoing_ip(packet, len)
   else return drop(packet, len) end
end

But not only is this tedious, you don't get easy access to the packet itself, and you're missing out on opportunities for optimization. For example, if the packet fails the not_ip check, we don't need to check if it's an IP packet in the incoming check. Compiling a pfmatch program takes advantage of pflua's optimizer to produce good code for the match expression as a whole.

If this were Scheme I would make the right-hand side of an arrow be an expression and implement pfmatch as a macro; see Racket's match documentation for an example. In Lua or other languages that's harder to do; you would have to parse Lua, and it's not clear which parts of the production as a whole are the host language (Lua) and which are the embedded language (pfmatch).

Instead, I think embedding host language snippets by function name is a fine solution. It seems fairly clear that incoming_ip, for example, is some kind of function. It's easy to parse identifiers in an embedded language, both for humans and for programs, so that takes away a lot of implementation headache and cognitive overhead.

We are left with a few problems: how to map names to functions, what to do about the return value of match expressions, and how to tie it all together in the host language. Again, if this were Scheme then I'd use macros to embed expressions into the pfmatch term, and their names would be scoped into whatever environment the match term was defined. In Lua, the best way to implement a name/value mapping is with a table. So we have:

local handlers = {
   forward = function(data, len) --[[ ... ]] end,
   drop = function(data, len) --[[ ... ]] end,
   incoming_ip = function(data, len) --[[ ... ]] end,
   outgoing_ip = function(data, len) --[[ ... ]] end
}

Then we will pass the handlers table to the matcher function, and the matcher function will call the handlers by name. LuaJIT will mostly take care of the overhead of the table dispatch. We compile the filter like this:

local match = require('pf.match')

local dispatcher = match.compile([[match {
   not ip => forward
   ip src => incoming_ip
   ip dst => outgoing_ip
   otherwise => drop
}]])

To use it, you just invoke the dispatcher with the handlers, data, and length, and the return value is whatever the handler returns. Here let's assume it's a boolean.

function loop(self)
   local i, o = self.input.input, self.output.output
   while not link.empty(i) do
      local pkt = link.receive(i)
      if dispatcher(handlers, pkt.data, pkt.length) then
         link.transmit(o, pkt)
      end
   end
end

Finally, we're ready for an example of a compiled matcher function. Here's what pflua does with the match expression above:

local cast = require("ffi").cast
return function(self,P,length)
   if length < 14 then return self.forward(P, length) end
   if cast("uint16_t*", P+12)[0] ~= 8 then return self.forward(P, length) end
   if length < 34 then return self.drop(P, length) end
   if P[23] ~= 6 then return self.drop(P, length) end
   if cast("uint32_t*", P+26)[0] == 67305985 then return self.incoming_ip(P, length) end
   if cast("uint32_t*", P+30)[0] == 134678021 then return self.outgoing_ip(P, length) end
   return self.drop(P, length)
end

The result is a pretty good dispatcher. There are always things to improve, but it's likely that the function above is better than what you would write by hand, and it will continue to get better as pflua improves.

Getting back to what I mentioned earlier, when we write filtering code by hand, we inevitably end up writing interpreters for some kind of filtering language. Network functions are essentially linguistic in nature: static appliances are no good because network topologies change, and people want solutions that reflect their problems. Usually this means embedding an interpreter for some embedded language, for example BPF bytecode or iptables rules. Using pflua and pfmatch expressions, we can instead compile a filter suited directly for the problem at hand -- and while we're at it, we can forget about worrying about pesky offsets, constants, and bit-shifts.


I'm optimistic about pfmatch or something like it being a success, but there are some challenges too.

One challenge is that pflang is pretty weird. For example, attempting to access ip[100] will abort a filter immediately on a packet that is less than 100 bytes long, not including L2 encapsulation. It's wonky semantics, and in the context of pfmatch, aborting the entire pfmatch program would obviously be the wrong thing. That would abort too much. Instead it should probably just fail the pflang test in which that packet access appears. To this end, in pfmatch we turn those aborts into local expression match failures. However, this leads to an inconsistency with pflang. For example in (ip[100000] == 0 or (1==1)), instead of ip[100000] causing the whole pflang match to fail, it just causes the local test to fail. This leaves us with 1==1, which passes. We abort too little.

This inconsistency is probably a bug. We want people to be able to test clauses with vanilla pflang expressions, and have the result match the pfmatch behavior. Due to limitations in some of pflua's intermediate languages, it's likely to persist for a while. It is the only inconsistency that I know of, though.

Pflang is also underpowered in many ways. It has terrible IPv6 support; for example, tcp[0] only matches IPv4 packets, and at least as implemented in libpcap, most payload access on IPv6 packets does the wrong thing regarding chained extension headers. There is no facility in the language for binding names to intermediate results, there is no linguistic facility for talking about fragmentation, no ability to address IP source and destination addresses in arithmetic expressions by name, and so on. We can solve these in pflua with extensions to the language, but that introduces incompatibilities with pflang.

You might wonder why to stick with pflang, after all of this. If this is you, Juho Snellman wrote a great article on this topic, just for you: What's wrong with pcap filters.

Pflua's optimizer has mostly helped us, but there have been places where it could be more helpful. When compiling just one expression, you can often end up figuring out which branches are dead-ends, which helps the rest of the optimization to proceed. With more than one successful branch, we had to make a few improvements to the optimizer to actually get decent results. We also had to relax one restriction on the optimizer: usually we only permit transformations that make the code smaller. This way we know we're going in the right direction and will eventually terminate. However because of reasons™ we did decide to allow tail calls to be duplicated, so instead of having just one place in the match function that tail-calls a handler, you can end up with multiple calls. I suspect using a tracing compiler will largely make this moot, as control-flow splits effectively lead to trace duplication anyway, and making sure control-flow joins later doesn't effectively counter that. Still, I suspect that the resulting trace shape will rejoin only at the loop head, instead of in some intermediate point, which is probably OK.


With all of these concerns, is pfmatch still a win? Yes, probably! We're going to start using it when building Snabb apps, and will see how it goes. We'll probably end up adding a few more pflang extensions before we're done. If it's something you're into, snabb-devel is the place to try it out, and see you on the bug tracker. Happy packet hacking!

FUDCon Pune 2015

This year's FUDCon, held in Pune last week, was my first ever FUDCon, and my first steps into the awesome Fedora community. This was also the first conference where I delivered a full-fledged talk, about ‘Automating UI testing’, presenting some of the work I did automating the UI tests for gnome-photos and showing how others can automate their own UI tests.

I also talked about ‘Integrating LibreOffice with your applications’ in a barcamp session, sharing and discussing ideas with a few people, presenting what I am up to in this LibreOffice project and how they can take advantage of it, either by directly using the new, evolving LibreOfficeKit API or by using the new Gtk3 widget in their applications. I talked about how I am achieving this using tiled rendering, and how I (with Michael and Miklos) am planning to enhance it in the future by incorporating OpenGL support, efficient tile management, and multi-threading.

Besides that, it was a wonderful opportunity for me to meet new people contributing to the Fedora project and share ideas with them. I now have a better idea of how I can contribute more to Fedora, and feel motivated enough to continue my contributions. I have made quite a few friends who, I think, would be happy to help me if I plan to get started with any of the Fedora teams, and I do plan to involve myself in a few more interesting teams in the future, sparing time out of my regular work.

Last but not least, I would like to thank all the organizers for making this event possible. They have been working hard for months, and have had many sleepless nights just to make sure everything remains on track. I would also like to thank them for sponsoring my stay and travel, without which I would not have been able to attend the event.

July 01, 2015

West Coast Summit

This is the last day of the GNOME West Coast Summit; for the past three days we’ve been working on and discussing topics such as:

  • Application Sandboxing / xdg-apps
  • Application development / Builder
  • GTK and multi-touch
  • Appstores, Appstream
  • Mutter

In attendance are GNOME, Elementary OS, and Endless. In addition, various individual contributors have also joined us. Other attendees will blog about the technical work; I want to focus on community and outreach.

One of the unique things about this hackfest compared to the others is that it is the first time desktops using the same GNOME software stack are meeting and holding discussions with each other. Elementary and GNOME have very similar goals, which was quite apparent at last year’s summit. This year, attendees have been almost equal parts GNOME and Elementary. Unlike the Desktop Summit, everybody is still using the common core, so the conversations were much more rooted in how to enable features, fix bugs, and trade technology between desktops. By the way, we might get quarter tiling in Mutter next GNOME release!

I discussed some strategic directions that I would like to see GNOME take, specifically using Builder to establish GNOME as the development environment for the IoT and Maker market segments. In fact, the cultures of the Maker movement and Free Software can be quite harmonious.

Today is the last day of the Summit.  It has been a privilege and pleasure to continue to organize and support the West Coast Summit.

A big heartfelt thanks to Endless for providing us with such a great venue for our hackfest, for making us feel so welcome, and for providing us with a goody bag.

I would also like to thank Tiffany Yau, Christian Hergert, and Cosimo Cecchi for providing the evening festivities for each day.

Of course, thanks to all the attendees for making the trip out to San Francisco, and see you next year!

Photos: future plans


This is the third in my series of blog posts about the latest generation of GNOME application designs. In this post, I’m going to talk about Photos. Out of the applications I’ve covered, this is the one that has the most new design work.

One of the unique things about Photos is that it has been built from the ground up with cloud integration in mind. This means that you can use it to view photos from Facebook, Google or Flickr, as well as the images on your local machine. This is incredibly important for the future of the app, and is something we’d like to build on.

Until recently, we’ve focused on getting Photos into shape as a storage-agnostic Photo viewer and organiser. Now that it has matured in these areas, we’ve begun the process of filling out its feature set.


I’m starting with editing because work in this area is already happening. Photos uses GEGL, the GIMP’s next generation image processing framework, which means that there’s a lot of power under the hood. Debarshi has been (understandably) keen to make use of this for image editing.

For the design of Photos, we’re following one of the design principles drawn up by Jon for GNOME Shell: “design a self-teaching interface for beginners, and an efficient interface for advanced users, but optimize for intermediates”. This can be seen in the designs for photo editing: they are simple and straightforward if you are new to photo editing, but there’s also enough in there to satisfy those who know a few tricks.

We’re also following the other principles of GNOME 3 design: reducing the amount of work the user has to do, preventing mistakes where possible (and allowing them to be easily reversed when they do happen), prioritising the display of content, and using transitions effectively.

The designs organise the different editing tools according to a logical workflow: crop, fix colours (brightness, contrast, saturation, etc.), check sharpness, and finally apply effects.

Editing: Crop

Editing: Filters

We also want editing to feel smooth and seamless, and are focusing on getting the transitions right. To that end, Jakub has been working on motion mockups.

There’s already a branch with Debarshi’s editing work on it, for those who want to give it a try. The usual caveat applies: this is work in progress.


As I mentioned, Photos already has cloud support. This solves the problem of getting photos into the Photos app for many people – if you are shooting with an Android phone, images can be synced to the cloud and will magically appear in GNOME.

However, if you are one of those old-fashioned types who shoots with an actual camera, you have to manually copy images over from the device, and this is labour intensive and error prone. We want a much more convenient and seamless experience for getting images from these devices onto your computer.

The initial designs for importing photos from a device are deliberately simple: generally speaking, all it should take is a single click to import new shots.

One important aspect of this design is that we want it to be error-proof. There’s nothing worse than realising that you forgot to copy over images and then blanked the SD card they were stored on – we want to prevent this from happening by shifting responsibility for maintaining the photo collection to the app.


Sharing is critical for a photos app, since sharing is often the primary reason we take a picture in the first place. It can take place through various means, including showing slideshows, but social media is obviously critical.

There are plans to equip GNOME with a system-wide sharing framework, which will allow posting to social media, and we’d like Photos to take advantage of that. However, sharing is so important for Photos that we don’t want to wait around for system-wide sharing to become available.


There are also wireframes and a bug report.

General improvements

Aside from big new features like editing, import and sharing, we have other changes planned. The first is an improved photos timeline view:

Photos Timeline

The main changes here are the addition of date headings, switching to square thumbnails, and hiding image titles. This is all intended to give you a less cluttered view of your photos, as well as clearer markers for navigation.

Another change that we have planned is a details sidebar. This is intended to provide a kind of organising mode, in which you can go through a series of photos and give them titles and descriptions, or assign them to different albums.

Details Sidebar

How to help

There’s already some cool work happening in Photos, and I’m pretty excited about the plans we have. If you want to help or get involved, there’s plenty to be done, but everything is clearly organised.

Bugs have been filed for all the new features and changes that I’ve mentioned in this post, and each one links to the relevant designs and mockups – you can find them listed on the Photos roadmap page.

Also, Debarshi (the Photos maintainer) is happy to review patches, and Jakub and I are available for design discussion and review.

Open Source Hong Kong 2015

Recently, I was in Hong Kong for Open Source Hong Kong 2015, an event that grew out of the GNOME.Asia Summit 2012 we held in Hong Kong. The organisers apparently enjoyed their experience organising GNOME.Asia Summit in 2012 and have continued to organise Free Software events. When I talked to the organisers, they said that more than 1000 people had registered for the gratis event. Not all 1000 showed up; half that number is a more realistic estimate.

Olivier Klein from Amazon Web Services opened the conference with his keynote on Big Data and Open Source. He began with a quote from RMS about the “Free” in Free Software referring to freedom, not price. He followed with the question of how Big Data fits into the spirit of Free Software, and answered shortly afterwards by saying that technologies like Hadoop allow you to mess around with large data sets on commodity hardware, rather than requiring you to build a heavy data center first. The talk then, although he said it would not, turned into a subtle sales pitch for AWS. So we learned about AWS’ global infrastructure, like how well located the AWS servers are, how the AWS architecture helps you perform your tasks, how everything in AWS is an API, etc. I wasn’t all too impressed, but then he demoed how he uses various Amazon services to analyse Twitter for certain keywords. Of course, analysing Twitter is not that impressive, but being able to do it within a few seconds, with relatively few lines of code, impressed me. I was also impressed by his demoing skills. Of course, one part of his demo failed, but he reacted very professionally, e.g. by quickly opening a WiFi hotspot on his phone to use as an alternative uplink. He also quickly grasped what was going on on his remote Amazon machine by glancing over netstat and ps output.

The next talk I attended, given by Andi Li, was on trans-compiling. He was talking about Haxe and how it compiles to various other languages. Think Clojure, Scala, and Groovy, which all compile to Java bytecode, but on steroids: Haxe compiles to source code in other languages. So Haxe is in a sense like Emscripten or Vala, but a much more generic source-to-source compiler. He talked about the advantages and disadvantages of Haxe, but he lost me when he said that more abstraction is better. The examples he gave were quite impressive. I still don’t think trans-compiling is particularly useful outside the realm of academic experiments, but I’m still intrigued by the fact that you can use Haxe’s own language features to conveniently write programs in languages that don’t provide those features. That seems to be the origin of the tool: Flash. So if you already have a proper language with a proper stdlib, you don’t need Haxe…

From the six parallel tracks, I chose to attend the one on BDD in Mediawiki by Baochuan Lu. He started out by providing the motivation for his work. He loves Free/Libre and Open Source software, because it provides a life-long learning environment as well as a very supportive community. He is also a teacher and makes his students contribute to Free Software projects in order to get real-life experience with software development. As a professor, he said, one of his fears when starting these projects was being considered the expert™, although he doesn’t know much about Free Software development. This fear, he said, is shared by many professors, which is why they would not consider entering the public realm of contributing to Free Software projects. But he reached out to the (Mediawiki) community and got amazing responses and an awful lot of help.
He continued by introducing Mediawiki, which, he said, is the platform that powers many Wikimedia Foundation projects such as Wikipedia, Wikibooks, Wikiversity, and others. One of the strategies for testing Mediawiki is to use Selenium and Cucumber for automated tests. He introduced the basic concepts of Behaviour Driven Development (BDD), such as keeping test cases short and concise and being iterative in the test design phase. Afterwards, he showed us what his tests look like and how they run.

The after-lunch talk, titled Data Transformation in Camel Style, was given by Red Hat’s Roger Hui and concerned Apache Camel, an “Enterprise Integration” software. I had never heard of it, and I am not much smarter now. From what I understood, Camel lets you program message workflows: depending on the content of a message, you can make it go certain ways, i.e. to a file or to an ActiveMQ queue. The second important part is data transformation. For example, if you want to change the data format from XML to JSON, you can use their tooling, with a nice clicky-pointy GUI, to drag your messages around and route them through various translators.

From the next talk, by Thomas Kuiper, I learned a lot about Gandi, the domain registrar. But they do much more than that, and you can do it all through a command line interface! So they are very tech savvy themselves, and they enjoy having tech-savvy customers, too. They really seem to be a cool company with an appropriate attitude.

The next day began with Jon’s Kernel Report. If you’re reading LWN, then you haven’t missed anything. He said that the kernel grows and grows. The upcoming 4.2 kernel, probably going to be released on August 23rd, might very well be the busiest we’ve seen, with the most changesets so far. The trend seems to be unstoppable. The length of the development cycle is getting shorter and shorter, currently sitting at around 63 days. The only thing that can delay a kernel release is Linus’ vacation… The rate of volunteer contribution is dropping, from 20% as seen for 2.6.26 to about 12% in 3.10, and that trend is also continuing. Another analysis he did was to look at the patches and their timezones. He found that a third of the code comes from the Americas, Europe contributes another third, and so does Australasia. As for Linux itself, he explained new system calls and other features of the kernel that have been added over the last year. While many things go well and probably will continue to do so, he worries about the real-time Linux project. Real time, he said, means the system reacting to an external event within a bounded time. No company is currently supporting the real-time Linux work, he said. According to him, being a real-time general-purpose kernel makes Linux very attractive, and we should leverage that potential. Security is another area of concern. 2014 was the year of high-profile security incidents, like the various Bash and OpenSSL bugs. He expects that 2015 will be no less interesting, also because the kernel carries lots of old and unmaintained code. Three million lines of code haven’t been touched in at least ten years. Shellshock, he said, was in code more than 20 years old. Also, we have a long list of motivated attackers, while not having enough people working on making the kernel more secure, although “our users are relying on us to keep them safe in a world full of threats”.

The next presentation was given by Microsoft, on .NET going Open Source. The speaker presented the .NET stack which Microsoft open sourced at the end of last year, as well as Visual Studio. Their vision, she said, is that Visual Studio is a general purpose IDE for every app and every developer; accordingly, it has good Python and Android support. A “free cross platform code editor” named Visual Studio Code now exists, which is a bit more than an editor: it understands some languages and can help you while debugging. I tried to get more information on the Patent Grant, but she couldn’t help me much.

There was also a talk on Luwrain by Michael Pozhidaev, GPLv3 software for blind people. It is not a screen reader, but rather a framework for writing software for blind people. They provide an API that guarantees your program will be accessible without the application programmer needing any knowledge of accessibility technology. They haven’t had a stable release yet, but one is expected for the end of 2015. The demo unveiled a text-oriented desktop which reads out text on the screen. Several applications already exist, including a file editor and a Twitter client. The user can scroll through the text by word or character, which reminded me of the ChorusText device I saw at GNOME.Asia Summit earlier this year.

I had the keynote slot, which allowed me to throw out my ideas for the future of the Free Software movement. I presented on GNOME and how I see security and privacy becoming a distinguishing feature of Free Software. We had an interesting discussion afterwards about how to enable users to make security decisions without prompts. I concluded that people do care about creating usable secure software, which I found very refreshing.

Both the conference and Hong Kong were great. The local team did their job pretty well and I am proud that the GNOME.Asia Summit in Hong Kong inspired them to continue doing Free Software events. I hope I can be back soon :-)

Name dropping in the last hours of ZeMarmot crowdfunding

We had an awesome funding experience, but it is not finished. First, because we still have a few hours left, so if you were planning to contribute and simply waiting for the last second, now is the time! Also because this is only the start of the ZeMarmot adventure. With what we have funded, we are going to release the beginning of the movie in a few months, which we hope you will enjoy, and then you can decide to continue supporting us, financially or otherwise.

As of now we have 327 awesome funders from 36 countries, from smaller amounts to bigger ones (1000 €). Amongst our Silver Sponsors, 2 organizations officially support our project: apertus° (the first OpenHardware cinema camera makers) and Laboratorio Bambara (a research group on audiovisual art).
Our first ever Silver sponsor was Mike Linksvayer, former executive director of Creative Commons. We can also count Terry Hancock, Free Software Magazine columnist and director of the Open animation series “Lunatics“, among our funders, along with other contributors from well known Free Software or Free Knowledge projects: long time GIMP developer Simon Budig; Mozilla employee Xionox; Creative Commons employee Matt Lee, himself on a movie adventure too; GCompris maintainer Bruno Coudoin… And I’m sure I missed a lot of people.
Also backing us officially are several teachers from various universities, even a bookstore (À Livr’Ouvert), fellow artists, some using Free Software (like Tepee), and people from the cinema industry (an executive producer, for instance).

Of course the GIMP project has been supporting our project all along…
As well as Libre Graphics World, reference for Free Arts-related news…
BlenderNation, linuxfr (French-speaking Free Software news), Framasoft, GIMPUsers, the VLC project, and so many others.
We were also featured on wider-audience news websites such as Numera and Reflets, and even on television on TV5World, as well as twice on French FM radio.

Tristan Nitot (former president of Mozilla Europe, now Cozy Cloud Chief Product Officer), the Free Software Foundation, and Creative Commons shared our project on various social networks and blogs.
Ton Roosendaal, Blender Foundation chairman, called our initiative “a Libre movie project with the right spirit”.
Now I’m just name-dropping. That’s because we were impressed by all this support. Yet let me be clear: you are all just as important to us! Every one of you. You show us that Libre Art, independent films and Free Software are cool and have a chance. Because seas are made of drops.
We love you all.
Marmot Love

June 30, 2015

GSoC: report #3

During the last couple of weeks, the following points were achieved:

  • The list of recently connected servers is now correctly saved.
  • Initial work on keyboard support.
  • Some real research on how Nautilus will handle the new mockups.

Fortunately, my graduation is now completely finished. I was also accepted into the Master’s course in Information Systems here at the University of São Paulo (yay!). From now on, I’ll be fully committed to the Summer of Code project, and you will see many more updates :)

This week, I’ll:

  • Submit GtkPlacesView widget for review
  • Start serious hacking on Nautilus

F-Spot icon view decorations —and gtk-sharp compiled from development version

F-Spot icon view with tags, dates and rating stars.

With my last pull request, which you can see here, I'm trying to recover a few aesthetic features that were missing in F-Spot/gtk3, mainly the decorations of icon view mode thumbnails: date, tag icons and rating stars.

But the most interesting task I accomplished last week was investigating the reason for the tons of weird GLib errors I was seeing when navigating through photos, just like this one:

Domain: 'GLib' Level: Critical
Message: Source ID 1915 was not found when attempting to remove it
Trace follows:
at GLib.Log.PrintTraceLogFunction(System.String domain, LogLevelFlags level, System.String message)
at GLib.Log.NativeCallback(IntPtr log_domain_native, LogLevelFlags flags, IntPtr message_native, IntPtr user_data)
at GLib.Source.g_source_remove(UInt32 )
at GLib.Source.Remove(UInt32 tag)
at GLib.Idle+IdleProxy.Dispose(Boolean disposing)
at GLib.Idle+IdleProxy.Finalize()

After some debugging I found the source of the problem: one error message was being printed for each call to g_idle_add. So I decided to write two little programs to see whether the error was in F-Spot or somewhere between gtk-sharp and GLib itself, and I ruled out F-Spot because my C# test program had exactly the same problem.

After searching the Web and finding no more than complaints about the problem, with no solutions or explanations, I was able to understand it simply by changing the return value of the callback function I was using in my tests. It looks like the gtk-sharp GLib bindings in the stable version have a bug that makes GLib complain about an attempt to free a nonexistent source if the callback passed to g_idle_add has returned false: returning false makes GLib free that source itself, so it shouldn't be freed again.
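To make the failure mode concrete, here is a small, self-contained simulation of the idle-source lifecycle in plain Python (this is not the real GLib or gtk-sharp API; the names idle_add and source_remove just mirror the C functions): when a callback returns false, the main loop frees the source itself, so a second explicit removal, like the one IdleProxy.Dispose performs, targets an ID that no longer exists.

```python
# Toy model of GLib's idle-source lifecycle; NOT the real GLib API.
sources = {}    # source ID -> callback, like GLib's internal source table
next_id = 1
warnings = []   # stands in for GLib's "Critical" log messages

def idle_add(callback):
    """Register a callback to run on idle; returns a source ID (cf. g_idle_add)."""
    global next_id
    tag = next_id
    next_id += 1
    sources[tag] = callback
    return tag

def source_remove(tag):
    """Remove a source by ID (cf. g_source_remove); warn if it is already gone."""
    if tag not in sources:
        warnings.append("Source ID %d was not found when attempting to remove it" % tag)
        return False
    del sources[tag]
    return True

def dispatch_once():
    """Run pending callbacks; a False return means "remove me", so the loop frees the source."""
    for tag, cb in list(sources.items()):
        if not cb():
            del sources[tag]

tag = idle_add(lambda: False)  # the callback returns False: run once, then freed
dispatch_once()
removed = source_remove(tag)   # the buggy second removal: triggers the warning
print(warnings)
```

The git version of gtk-sharp presumably avoids that second removal, which would match the warnings disappearing after the upgrade described below.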

So I compiled and installed the git version of gtk-sharp, and then I started seeing weird crashes in my test programs and also in F-Spot:

Unhandled Exception:
System.TypeInitializationException: An exception was thrown by the type initializer for FSpot.Utils.XdgThumbnailSpec ---> System.DllNotFoundException: libglib-2.0-0.dll
at (wrapper managed-to-native) GLib.Marshaller:g_malloc (uintptr)
at GLib.Marshaller.StringToPtrGStrdup (System.String str) [0x00000] in :0
at Hyena.SafeUri.FilenameToUri (System.String localPath) [0x00000] in /home/valentin/Escritorio/f-spot-sanva/external/Hyena/Hyena/Hyena/SafeUri.cs:88
at Hyena.SafeUri..ctor (System.String uri) [0x00047] in /home/valentin/Escritorio/f-spot-sanva/external/Hyena/Hyena/Hyena/SafeUri.cs:59
at FSpot.Utils.XdgThumbnailSpec..cctor () [0x00000] in /home/valentin/Escritorio/f-spot-sanva/src/Core/FSpot.Utils/XdgThumbnailSpec.cs:91
--- End of inner exception stack trace ---
at FSpot.Driver.Main (System.String[] args) [0x0006b] in /home/valentin/Escritorio/f-spot-sanva/src/Clients/MainApp/FSpot/main.cs:180
[ERROR] FATAL UNHANDLED EXCEPTION: System.TypeInitializationException: An exception was thrown by the type initializer for FSpot.Utils.XdgThumbnailSpec ---> System.DllNotFoundException: libglib-2.0-0.dll
at (wrapper managed-to-native) GLib.Marshaller:g_malloc (uintptr)
at GLib.Marshaller.StringToPtrGStrdup (System.String str) [0x00000] in :0
at Hyena.SafeUri.FilenameToUri (System.String localPath) [0x00000] in /home/valentin/Escritorio/f-spot-sanva/external/Hyena/Hyena/Hyena/SafeUri.cs:88
at Hyena.SafeUri..ctor (System.String uri) [0x00047] in /home/valentin/Escritorio/f-spot-sanva/external/Hyena/Hyena/Hyena/SafeUri.cs:59
at FSpot.Utils.XdgThumbnailSpec..cctor () [0x00000] in /home/valentin/Escritorio/f-spot-sanva/src/Core/FSpot.Utils/XdgThumbnailSpec.cs:91
--- End of inner exception stack trace ---
at FSpot.Driver.Main (System.String[] args) [0x0006b] in /home/valentin/Escritorio/f-spot-sanva/src/Clients/MainApp/FSpot/main.cs:180

In short, I found this link, and with the help of

ldconfig -p | grep 'libraryname'

I was able to fix my system by creating a symbolic link for every *.so file with the name Mono was expecting...

And now the GLib critical warnings are gone.


Mallard Documentation Sites With Pintail

When we first designed Mallard, we designed it around creating documents: non-linear collections of pages about a particular subject. Documents are manageable and maintainable, and we’re able to define all of Mallard’s automatic linking within the confines of a document.

If you wanted to publish a set of Mallard documents on the web, you could build each of them individually with a tool like yelp-build, then output some extra navigation pages to help people find the right document. But there was no simple way to create those extra pages. What’s more, you couldn’t link between documents except by using external href links. Mallard’s automatic links are confined to documents.

Enter Pintail. Pintail lets you build entire web sites from Mallard sources. Just lay out your pages in the directory structure you like, and let Pintail build the site for you. Put full Mallard documents in their own directories, then use Mallard to create the extra navigation pages between them. Better still, you can use an extended xref syntax to refer to pages in other directories. Just include the path to the target page with slashes, like so:

<link xref="/about/learn/svg"/>

This isn’t just a simple link. You can use this in topic links and seealso links and anywhere else that Mallard lets you put an xref attribute. Pintail makes Mallard’s automatic linking work across multiple documents.

Pintail is designed to allow other formats to be used, so you could use it to build all your documentation in an environment where not everything is in one format. It already supports Mallard Ducktype as well as XML. But Mallard is the primary format.

One of the really nice features is that Pintail can pull in documents from other git repositories, so you don’t have to keep all your documentation in a single source tree. In fact, the site in your main repository might be little more than glue pages and the pintail.cfg file that specifies where all the actual documentation lives.

Pintail builds the web site right now, as well as a few other sites I maintain. I hope it turns out to be useful for heavy Mallard users like GNOME, Ubuntu, and Endless. And I hope it makes Mallard easier to adopt for others who are considering using it.

No software is ever finished, but here are some of the top things I plan to add soon:

  •  Page merging: Mallard allows pages to be dropped into a document and seamlessly integrated into the navigation. Sometimes you want to publish a document with pages pulled from other places. For example, GNOME generally wants to publish GNOME Help with the optional Getting Started video pages merged in.
  • Translations: Mallard was designed from day one to be translator-friendly, and itstool ships with ITS rules for Mallard. I just need to hook the pieces together.
  • Search: An extensive documentation site needs configurable search. You often want to restrict search within a single document. Also, some documents (or versions of documents) shouldn’t appear in global search results.

What would you like to see a Mallard site tool do?

Parsing Option ROM Firmware

A few weeks ago an issue was opened on fwupd by pippin. He was basically asking for a command to return the hashes of all the firmware installed on his hardware, which I initially didn’t really see the point of. However, after a few hours of research into all the malware that can hide in the VBIOS of graphics cards, the option ROMs of network cards, and keyboard matrix EC processors, I was suitably worried too. I figured fixing the issue was a good idea. Of course, malware could perhaps hide itself (i.e. hiding in an unused padding segment and masking itself out on read), but this at least raises the bar from a security audit point of view, and is somewhat easier than opening the case and attaching a SPI programmer to the chip itself.

Fast forward a few nights. We can now verify ATI, NVIDIA, Intel and ColorHug firmware. I haven’t got any other hardware with a ROM that I can read from userspace, so this is where I need your help. I need willing volunteers to compile fwupd from git master (or rebuild my srpm) and then run:

cd fwupd/src
find /sys/devices -name rom -exec sudo ./fwupdmgr dump-rom {} \;

All being well you should see something like this:

/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom -> f21e1d2c969dedbefcf5acfdab4fa0c5ff111a57 [Version:]

If you see something just like that, you’re not super helpful to me. If you see Error reading from file: Input/output error, then you’re also not so helpful, as the kernel module for your hardware is exporting a rom file but not hooking up the read vfuncs. If you get an error like Failed to detect firmware header [8950] or Firmware version extractor not known, then you’ve just become interesting. If that’s you, can you send me the rom file as an attachment, along with any details you know about the hardware? Thanks!
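As an aside, the hash in the sample output above is 40 hex characters, which looks like SHA-1 (an assumption on my part; fwupd may use a different digest). Checking a dumped rom file yourself would then amount to something like this illustrative Python sketch:

```python
import hashlib

def hash_rom(path, algo="sha1"):
    """Hash a ROM dump in chunks, so large images need not fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical path, shaped like the sysfs paths above):
#   print(hash_rom("/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/rom"))
```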


Fedora Workstation next steps : Introducing Pinos

So this will be the first in a series of blogs talking about some major initiatives we are doing for Fedora Workstation. Today I want to present and talk about a thing we call Pinos.

So what is Pinos? One of the original goals of Pinos was to provide the same level of advanced hardware handling for video that PulseAudio provides for audio. Those of you who have been around for a while might remember how, once upon a time, only one application could use the sound card at a time, until PulseAudio properly fixed that. Well, Pinos will allow you to share your video camera between multiple applications, and also provides an easy to use API to do so.

Video providers and consumers are implemented as separate processes, communicating over D-Bus and exchanging video frames using fd passing.
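The fd-passing part deserves a short illustration. The snippet below is not Pinos code, just a generic Python demonstration of the underlying Unix mechanism (SCM_RIGHTS over a Unix socket): the provider sends only a file descriptor, and the consumer reads the frame through its own copy of that descriptor, so the frame bytes never travel through the socket itself.

```python
import array
import os
import socket
import tempfile

# A file standing in for a buffer that holds one video frame.
frame = tempfile.TemporaryFile()
frame.write(b"frame-data")
frame.flush()

# A connected pair of Unix sockets: think "provider" and "consumer" processes.
provider, consumer = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Provider: send the descriptor as SCM_RIGHTS ancillary data.
fds = array.array("i", [frame.fileno()])
provider.sendmsg([b"new-frame"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

# Consumer: receive the message plus a duplicated descriptor.
msg, ancdata, flags, addr = consumer.recvmsg(64, socket.CMSG_SPACE(fds.itemsize))
received = array.array("i")
for level, mtype, data in ancdata:
    if level == socket.SOL_SOCKET and mtype == socket.SCM_RIGHTS:
        received.frombytes(data[:fds.itemsize])
fd = received[0]

# The consumer reads the frame via the passed descriptor.
os.lseek(fd, 0, os.SEEK_SET)
payload = os.read(fd, 64)
print(payload)
```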

Some features of Pinos

  • Easier switching of cameras in your applications: it will allow applications to switch between multiple cameras, or mix the content from multiple sources, more easily.

  • Multiple types of video inputs: Pinos supports more than cameras; other types of video sources, for instance your desktop, can also be used as input.

  • GStreamer integration: Pinos is built using GStreamer, and also has GStreamer elements supporting it, to make integrating it into GStreamer applications simple and straightforward.

  • Some audio support: Pinos tries to solve some of the same issues for video that PulseAudio solves for audio, namely letting multiple applications share the same camera hardware, and it also includes audio support in order to let you handle both.

What do we want to do with this in Fedora Workstation?

  • One thing we know is of great use and importance to many of our users, including many developers who want to make videos demonstrating their software: better screen capture support. One of the test cases we are using for Pinos is improving the built-in screen casting capabilities of GNOME 3, the goal being to reduce overhead and to allow easy setup of picture-in-picture capturing. So you can easily set things up so that a camera captures your face and voice and mixes that into your screen recording.
  • Video support for desktop sandboxes. We have been working for a while on technology for sandboxing your desktop applications, and while, with a little work, we can use PulseAudio to give sandboxed applications audio access, we needed something similar for video. Pinos provides us with such a solution.

Who is working on this?
Pinos is being designed and written by Wim Taymans, who is the co-creator of the GStreamer multimedia framework and also a regular contributor to the PulseAudio project. Wim also works for Red Hat as a Principal Engineer, being in charge of a lot of our multimedia support in both Red Hat Enterprise Linux and Fedora. It is also worth noting that Pinos draws many of its ideas from an early prototype by William Manley called PulseVideo, and builds upon some of the code that was merged into GStreamer as a result of that effort.

Where can I get the code?
The code is currently hosted in Wim’s private repository on freedesktop.org. You can get it at

How can I get involved or talk to the author?
You can find Wim on Freenode IRC, he uses the name wtay and hangs out in both the #gstreamer and #pulseaudio IRC channels.
Once the project is a bit further along we will get some basic web presence set up and a mailing list created.


If Pinos contains Audio support will it eventually replace PulseAudio too?
Probably not. The use cases and goals of the two systems are somewhat different, and it is not clear that trying to make Pinos accommodate all the PulseAudio use cases would be worth the effort, or even possible without feature loss. So while there is always a temptation to think ‘hey, wouldn’t it be nice to have one system that can handle everything’, we are at this point unconvinced that the gain outweighs the pain.

Will Pinos offer redirecting kernel APIs for video devices, like PulseAudio does for audio, in order to handle legacy applications?
No. That was possible due to the way ALSA works, but V4L2 doesn’t have such capabilities, and thus we cannot take advantage of them.

Why the name Pinos?
The code name for the project was PulseVideo, but to avoid confusion with the PulseAudio project, and to avoid people making too many assumptions based on the name, we decided to follow the tradition of Wayland and Weston and take inspiration from local place names related to the creator. Since Wim lives in Pinos de Alhaurin, close to Malaga in Spain, we decided to call the project Pinos. Pinos is the Spanish word for pines :)

smarter status hiding

In heavily populated IRC channels such as #debian on Freenode, a lot of idle IRC users are joining and leaving every couple of seconds. At the moment, we display a status message for every user in the room which in some cases results in a lot of visual noise.
Another busy day on #debian: 9 out of 17 messages in the chat view are status messages.

This problem is filed as bug 711542 in GNOME’s bug tracker. I have decided to try addressing this issue with some logic.

The main purpose of status messages is to indicate activity concerning a user, such as the user joining a channel, disconnecting or being renamed. The two primary use cases I see for displaying status messages in the chat view are:

  • ..when a user in a channel is actively participating in a conversation and the user’s status suddenly changes. For anyone reading the conversation, it is interesting to know what has happened to the participating users, as a change in their status can affect the conversation itself.
  • ..when a channel is idle and no conversation is happening. In this case it is convenient to display status messages: you might, for example, be waiting for a particular person to join the channel and thus check it occasionally to see if they have arrived. On the other hand, we also want to avoid flooding the chat view with status messages, as that creates unnecessary extra text to scroll through between conversations in the log history.

To address point one, I have implemented the concept of “active” and “inactive” users. An active user is any user in the channel who has sent a message to the room within the last X* minutes. If the user is considered active, we allow any status messages related to that user to be displayed in the room.

Only recently active users emit status messages.

So because user Raazeer wrote “jelly, if I’m not back in ten minutes, I’ve destroyed my boot config and won’t be back today”, Polari will allow Raazeer’s status messages to be displayed for the next X minutes. As can be seen in the screenshot above, a status message indicates that Raazeer disconnected shortly afterwards.

To address point two, Polari will display status messages for any user, provided there hasn’t been any conversation in the channel within the last X minutes. However, if more than 4 status messages are emitted while the channel is idling, Polari collapses the status messages and displays a single-line summary instead. The summary can be expanded to reveal the details at any time.
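The two rules described above can be sketched roughly as follows. This is a hypothetical Python sketch, not the real code (Polari is written in JavaScript); the 5-minute window comes from the footnote below and the 4-message threshold from the post, but `StatusFilter` and its method names are my own illustration:

```python
import time

ACTIVE_WINDOW = 5 * 60    # seconds a user stays "active" after speaking
COLLAPSE_THRESHOLD = 4    # status messages shown before collapsing

class StatusFilter:
    """Decide which IRC status messages to display (illustrative sketch)."""

    def __init__(self):
        self.last_spoke = {}        # nick -> timestamp of their last chat message
        self.last_conversation = 0  # timestamp of the last chat message in the channel
        self.pending = []           # status messages queued while the channel idles

    def on_chat_message(self, nick, now=None):
        now = now or time.time()
        self.last_spoke[nick] = now
        self.last_conversation = now
        self.pending = []

    def on_status_message(self, nick, text, now=None):
        """Return the lines to show for this status message (possibly none)."""
        now = now or time.time()
        # Rule 1: the user spoke recently, so their status is relevant.
        if now - self.last_spoke.get(nick, 0) < ACTIVE_WINDOW:
            return [text]
        # Rule 2: the channel is idle; show statuses but collapse floods.
        if now - self.last_conversation >= ACTIVE_WINDOW:
            self.pending.append(text)
            if len(self.pending) > COLLAPSE_THRESHOLD:
                return ["%d status messages (click to expand)" % len(self.pending)]
            return [text]
        # Otherwise the message is just noise; hide it.
        return []
```

The key design point is that both rules share one timer: a user’s own message both marks them active and marks the channel as non-idle, which resets the collapsed queue.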

Screenshot of a status summary in action. The “…” can be clicked to reveal the individual status messages.

With these changes I hope we’ll have less visual noise from status messages in Polari. Feel free to try my current branch. With problems like these, I think the best way to find out whether they’re solved is to try the fix over a longer period of time.

*Duration to be discussed. It’s currently set to 5 minutes, which is unfortunate for Raazeer, since he said he would return in 10 minutes. Maybe that’s a good indication that I should use a higher number? d:

Westcoast summit 2015

I am in San Francisco this week, for the second Westcoast Summit – this event was born last year, as a counterweight to the traditional GNOME summits that are happening every fall on the other coast.

Sunday was a really awesome day to arrive in San Francisco, my hotel is right around Market St, where this was going on:

Gay pride parade

Maybe this inspired my choice of texture when I wrote this quick demo in the evening:

Pango power

Like last year, we are being hosted by the awesome people at Endless Mobile. And like last year, we are lucky to have the elementary team join us to work together and have a good time.

We started the day with a topic collection session:

Topic collection

Since everyone was interested in sandboxing and xdg-app, we started with Alex giving an overview of the current status and ideas around xdg-app. Endless has done an experimental build of their own runtime, and (almost) got it working in a day. After the general sandboxing discussion had run its course, Alex and I decided to patch evince to use the document portal.

GTK topics

In the afternoon, we did a session on GTK+ topics, which produced a number of quick patches for issues that the elementary team has in their use of GTK+.

Later on, Kenton Varda and Andy Lutomirsky came by for a very useful exchange about our approaches to sandboxing.

kdbus vs cap'n proto

I have put some notes of today's discussions here.

June 29, 2015

Java ATK Wrapper: making use of ATK function pointers

Firstly, it was awesome to see GNOME give the java-atk-wrapper project a mention in a recent GSoC article! :D As for the project itself: each of the ATK interfaces (apart from AtkWindow) has an associated structure of function pointers which can be used to implement the ATK interface methods as wrapper functions. So over the past few weeks I have been creating the wrapper functions I have identified, with the jaw_ prefix, for java-atk-wrapper. Some functions recently added to the wrapper's API are:
where AtkObject is a base class and AtkAction, AtkTable, and AtkTableCell are interfaces. The Java ATK interface methods are written in Java classes in /org/GNOME/Accessibility. The jaw_-prefixed functions, which use the JNI specification to call the java-atk-wrapper.jar class methods, are organised in jni/src. "Class" signal handlers are written in AtkWrapper.c using the JNI specification so that they can be called by org.GNOME.Accessibility.AtkWrapper once the wrapper has been initialised with an accessible Java application.

Testing and debugging java-atk-wrapper is particularly long-winded and hazardous, so I have been exploring ways to improve my working practices and develop a more efficient system for testing patches too. I have also been updating the JavaAtkWrapper wiki page. Aside from all that, I have been doing a little tweaking to improve the java-atk-wrapper build, fixing a few bugs, and I added a GSoC page (which still needs a bit of development!)

I suspect it might be worth adding some automated tests to check the wrapper functions, if there is time for this. Last year I used a gem called rspec to test wrapper functionality with FFTW, but at the moment, with the wrapper, I mainly rely on Accerciser (or perhaps a small pyatspi2 listener) for testing with SwingSet2 and Orca (I have added a few listener scripts to Alejandro's AT-SPI examples repository).

Please direct user support questions concerning the java-atk-wrapper to gnome-accessibility-list and technical questions to gnome-accessibility-devel. All bugs are filed against Java ATK Wrapper in Bugzilla.

June 28, 2015

GStreamer Debugger - introduction

Hi everyone,
This year I've been accepted to Google Summer of Code :) Last year I worked on the Banshee project [1], and this year I joined the GStreamer [2] team.
This summer I'm working on a tool for GStreamer-based applications - the GStreamer Debugger.

At the end of this summer, I'm going to provide an application which allows you to connect to your remote pipeline (obviously, the lo interface can be used as well :)), watch the pipeline graph (and its changes), and spy on selected queries, events, messages from the bus, log messages, and even buffers. The application won't allow the user to modify the pipeline or its state, but who knows - if that turns out to be a useful feature, I may implement it in the future.
GStreamer doesn't provide a way to connect to a pipeline, so I have to implement that on my own.

June is exam month at my university (fortunately, I've already passed all of them!), so I didn't spend as much time on this project as I wanted. Anyway, I accomplished a few milestones.
Here's a list of what has already been done:
  • a gst-trace [3] plugin containing a TCP server. For now, it sends GstEvents, GstMessages, and log messages to clients (todo: send GstBuffers and GstQueries)
  • a client application which displays events and log messages (todo: display GstBuffers, GstQueries, GstMessages). I have a lot of ideas for improving the client application, but I'm not sure whether I'll meet the GSoC deadline, so I suppose most of them will be implemented after Google's program.
  • the protocol - I used the Google Protobuf library [4]. In general, I've defined most of the protocol's structures; I just make minor improvements when I need them.
Below you can find a few screenshots of the client application. The full code can be found on my github account ([5], [6]).


Time flies by!

Hey everyone!

GSoC is passing by so quickly! I don't know why I have this feeling, but I just wanted to create another blog post and, looking at the previous one, I came to the realisation that very few things have changed since then (sadly)!

How come?! One thing is, I've been quite busy and so did not contribute 100% of my time. On the other hand, though, I have come across several tough challenges, as you can see from my most recent PR. I found it quite difficult to re-implement some of the previously well-working features in the GTG task editor.

Most importantly, despite Izidor's great explanation and discussion about buffers, I would mark writing the code responsible for removing a tag from a task as the main challenge. What I'm trying to do is show a complete list of actively used tags in the task editor in a popover, where the user can tick/untick as many as they like and the changes are applied to that particular task. The "tick" part was quite simple, as GTG offered such functionality before, so re-implementing it was easy. However, to "untick" a tag means removing the @tag section from the text; in other words, it means iterating through the entire task text field, searching for all occurrences of @some_tag, and removing the particular one which was unticked.
I'm quite discombobulated by this and it's become the greatest challenge so far.
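The text-matching part of the problem can be sketched in Python. This is a hypothetical helper, not GTG's actual code (the real editor works on an editor buffer, not a plain string); `remove_tag` and the lookahead pattern are my own illustration of how to drop one tag without clipping longer ones:

```python
import re

def remove_tag(text, tag):
    """Remove every standalone occurrence of "@tag" from the task text."""
    # The negative lookahead keeps longer tags such as "@homework"
    # intact when removing "@home"; " ?" swallows one trailing space.
    pattern = '@' + re.escape(tag) + r'(?![\w-]) ?'
    return re.sub(pattern, '', text).strip()

print(remove_tag("buy milk @home @homework", "home"))  # -> buy milk @homework
```

The lookahead is the important bit: a naive `text.replace("@home", "")` would also mangle `@homework`.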

Furthermore, I managed to implement viewing a task's parent by clicking on the parent button. However, another challenge lies ahead: I need to show a popover with all the parents in case a task has more than one. The user will then be able to choose which of the parents they want to modify. My idea was to create a list holding all the parents of the currently edited task, then iterate through that list and place each parent as a separate field in the parents popover. However, there is a piece missing in my puzzle; I need to put more effort into this, as it is not working as it should.

So to sum up, I feel like I need to speed up! A lot of great work is planned and I want to do as much as I can!
Currently I have three existing PRs which need to be finished, and mid-term evaluations are here too, so this week will be crazy, full of work and, I hope, successful hours of coding! :)

June 27, 2015

gnome-common deprecation, round 2

tl;dr: gnome-common is deprecated, but will be hanging around for a while. If you care about modernising your build system, migrate to autoconf-archive macros.

This GNOME release cycle (3.18), we plan to do the last ever release of gnome-common. A lot of its macros for deprecated technologies (scrollkeeper?!) have been removed, and the remainder of its macros have found better replacements in autoconf-archive, where they can be used by everyone, not just GNOME.

We plan to make one last release, and people are welcome to depend on it for as long as they like. However, if you want new hotness, port to the autoconf-archive versions of the macros; but please do it in your own time. There will be no flag day port away from gnome-common.

Note that, for example, porting to AX_COMPILER_FLAGS is valuable, but will probably require fixing a number of new compiler warnings in your code due to increased warning flags. We hope this will make your code better in the long run.

There’s a migration guide here:

We’ve tried to make the transition as easy and smooth as possible, but there will inevitably be hiccups. Please let me know about anything which breaks or doesn’t make sense, or discuss it on the desktop development list thread. First person to complain about -Wswitch-enum gets a prize.

For developers

When building from a tarball of a module which uses the new macros, you will no longer need gnome-common installed. (Although you may not have needed it before.)

When building from git, you will need m4-common or autoconf-archive installed.

JHBuild bootstrap installs m4-common automatically, as does gnome-continuous; so you don’t need to worry about that.

For packagers

In the 3.14.0 release, gnome-common installed some early versions of the autoconf-archive macros which conflicted with what autoconf-archive itself installs. It now has a --[with|without]-autoconf-archive configure option to control this. We suggest that all packagers pass --with-autoconf-archive if (and only if) autoconf-archive is packaged on the distribution. See bug #747920.

m4-common must not be packaged. See its README. m4-common is essentially a caching subset of autoconf-archive.

For continuous integrators

Modules which use the new AX_COMPILER_FLAGS macro gain a new standard --disable-Werror configure flag, which should be used in CI systems (and any other system where spurious compiler warnings should not cause total failure of a build) to disable -Werror. The idea here is that -Werror is enabled by default when building from git, and disabled by default when building from release tarballs and in buildbots.

For further discussion

See the thread on the desktop development mailing list.

June 26, 2015

John Oliver Falls For Software Patent Trade Association Messaging

I've been otherwise impressed with John Oliver and his ability on Last Week Tonight to find key issues that don't have enough attention and give reasonably good information about them in an entertaining way — I even lauded Oliver's discussion of non-profit organizational corruption last year. I suppose that's why I'm particularly sad (as I caught up last weekend on an old episode) to find that John Oliver basically fell for the large patent holders' pro-software-patent rhetoric on so-called “software patents”.

In short, Oliver mimics the trade association and for-profit software industry rhetoric of software patent reform rather than abolition — because trolls are the only problem. I hope the worlds' largest software patent holders send Oliver's writing staff a nice gift basket, as such might be the only thing that would signal to them that they fell into this PR trap. Although, it's admittedly slightly unfair to blame Oliver and his writers; the situation is subtle.

Indeed, someone not particularly versed in the situation can easily fall for this manipulation. It's just so easy to criticize non-practicing entities. Plus, the idea that the sole inventor might get funded on Shark Tank has a certain appeal, and fits a USAmerican sensibility of personal capitalistic success. Thus, the first-order conclusion is often, as Oliver's piece concludes, maybe if we got rid of trolls, things wouldn't be so bad.

And then there's also the focus on the patent quality issue; it's easy to convince the public that higher quality patents will make it ok to restrict software sharing and improvement with patents. It's great rhetoric for pro-patent entities to generate outrage among the technology-using public by pointing to, say, an example of a patent that reads on every Android application and telling a few jokes about patent quality. In fact, at nearly every FLOSS conference I've gone to in the last year, OIN has sponsored a speaker to talk about that very issue. The jokes at such talks aren't as good as John Oliver's, but they still get laughs and get technologists upset about patent quality and trolls — but through careful cultural engineering, not about software patents themselves.

In fact, I don't think I've seen a for-profit industry and its trade associations do so well at public outrage distraction since the “tort reform” battles of the 1980s and 1990s, which were produced in part by George H. W. Bush's beloved M.C. Rove himself. I really encourage those who want to understand how the anti-troll messaging manipulation works to study how and why the tort reform issue played out the way it did. (As I mentioned on the Free as in Freedom audcast, Episode 0x13, the documentary film Hot Coffee is a good resource for that.)

I've literally been laughed at publicly by OIN representatives when I point out that IBM, Microsoft, and other practicing entities do software patent shake-downs, too — just like the trolls. They're part of a well-trained and well-funded (by trade associations and companies) PR machine out there in our community to convince us that trolls and so-called “poor patent quality” are the only problems. Yet, nary a year has gone by in my adult life where I don't see some incident where a so-called legitimate, non-obvious software patent causes serious trouble for a Free Software project. From RSA, to the codec patents, to Microsoft FAT patent shakedowns, to IBM's shakedown of the Hercules open source project, to exfat — and that's just a few choice examples from the public tip of the practicing-entity shakedown iceberg. IMO, the practicing entities are just trolls with more expensive suits and proprietary software licenses for sale. We should politically oppose the companies and trade associations that bolster them — and call for an end to software patents.


The special GNOME PERU FEST 2015 video captures almost all of the conferences we presented in this fifth edition. Thanks to Erick Cachay for this contribution, which lets us show all the work we did to spread the GNOME word in Lima, Peru.

While the event certainly demanded general coordination skills and time management, I must say that it was not only my volunteer job; there are more people behind the scenes. Thanks to the many people who did the design work, the marketing, the Wi-Fi and internet connection tests, who attended meetings to arrange permissions with companies, and who provided pre-event, during-event and post-event support; even the job of carrying packages was so important! Thanks for the good attitude and extra effort they put into all the activities.

I was lucky to have grown with this event every year, thanks to the GNOME Foundation, IBM, FEDORA, INFOPUCP-PUCP, Grupo LA REPUBLICA, Floreria La Bouquette, and all the institutions that helped me a lot! Many thanks to my IBM managers and PUCP authorities, and recognition for the Peruvian Linux masters: Felipe Solari, Rodolfo Pimentel, Genghis Rios and Flavio Contreras.

Please enjoy the video. Thanks again, Erick! A great volunteer worker during and after the event :)


Filed under: GNOME, τεχνολογια :: Technology Tagged: Erick Cachay, Gnome foundation, GNOME PERU FEST 2015, Julita Inca, Julita Inca Chiroque, video GNOME, volunteer job

libnice is now mirrored on GitHub

libnice, everyone’s favourite ICE networking library, is now mirrored on GitHub (and GitLab), to make contributing to it easier — just submit a pull request. The canonical git repository is still on

Bug tracking is now on, which is where all new bugs should be reported. Existing open bugs have been migrated; existing closed bugs have not, and are archived on

We’ve been slowly working on some interesting changes to how GSockets are handled, so watch out for news on that.

A new collections dialog for Documents

This summer, as a Google Summer of Code project, I’m working on implementing a new collections pattern for Documents. Let’s discuss the changes made so far!

The current dialog has some problems: it doesn’t offer a clear way to add a new collection, and it lacks features like renaming and deleting a displayed collection. Although these features aren’t reachable from the collections dialog, it is already possible to rename from the properties dialog and delete from the selection mode.

The new collections dialog at this time looks like this:


The collections are displayed more clearly and a new one can be added more easily. With the new design we also gained the ability to prevent duplicate collection names, since we can disable the “Add” button if a collection with that name already exists.


If a collection has just been deleted, it can be restored with the “Undo” button.


And when renaming a collection the other rows are disabled.

This is what I’ve done so far. It took me a lot of time to reach the current state, and I would like to thank my mentor Debarshi for being helpful and available at all times, and for answering all my questions just a second after I ask them.

The work-in-progress patch can be found here. Feedback is welcome; if you have any, please let me know!

GSoC 2015!

This year I have been accepted to GSoC!

Google Summer of Code is an international program (started in 2005) in which accepted students learn about open source by developing a program during the summer. What do you gain from a coding summer? Well, you will meet a nice community in which you can remain as a developer (for the organization that accepted you, GNOME in my case), you will hone your programming skills, and you will also receive $5,500 for completing the program.

The project I will work on is a security-based one called Disable USB on lockscreen. Under the guidance of Tobias Mueller, I will try to develop a system that blocks a USB port if the screen is locked and the inserted device is not a familiar one (a device that was connected in the past).

This is the core of the project, but I will try to do other little things that will make the program more flexible.

So far, I have made an application that detects when a new USB device has been connected and asks the user whether to add it to a trusted device list. For every connected device, the device descriptors are collected, and every device gets a unique id formed from these descriptors.

usb devices

Each connected USB device that we trust will be stored in a file named ‘known_devices’ (at least for now) in JSON format.

File known devices

This is only the starting part of the project, only the root, but it will be improved into a more complex application ^_^

This is it for now! I hope you enjoyed reading the first blog post about my experience with GSoC and see you again soon!


Ban USB devices from the ports

A USB device has some descriptors, and based on them we can find out information about the connected device.

But here comes a problem: what if a device tells us it is something it is not (like a HID device)? In that case the driver would be loaded automatically, and someone could gain access to our computer.

Screenshot from 2015-06-09 20:32:39

Luckily for us, we can manually “unload a driver” using the unbind special file. “Unloading a driver” is not strictly the correct term: you do not really unload it, you unbind the device from that driver.

Simply writing (you could use echo) the bus id of the device to that file causes the device to become unusable.

Screenshot from 2015-06-09 20:39:15

To make it usable again, simply write the same bus id of the device to the bind file.

Screenshot from 2015-06-09 20:39:59

By simply doing this, you can play with the device and control whether it is seen by your PC.

This can easily be implemented in Python: simply open the bind/unbind file and write a specific bus id into it.
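A minimal sketch of that idea follows. It assumes the device is bound to the generic `usb` hub driver (a device may instead be bound to a device-specific driver under a different sysfs path), and root privileges are required, just like with the echo commands in the screenshots; `driver_file` and `set_usb_enabled` are hypothetical names:

```python
USB_DRIVER = "/sys/bus/usb/drivers/usb"

def driver_file(enabled):
    # Writing to "bind" re-attaches the device to the driver,
    # writing to "unbind" detaches it.
    return USB_DRIVER + ("/bind" if enabled else "/unbind")

def set_usb_enabled(bus_id, enabled):
    """Write the device's bus id (e.g. "2-1.3") to bind or unbind.

    Requires root privileges.
    """
    with open(driver_file(enabled), "w") as f:
        f.write(bus_id)
```

So `set_usb_enabled("2-1.3", False)` would make the device disappear, and calling it again with `True` would bring it back.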


June 25, 2015

A month later...

The problems faced.

As you perhaps saw, I started working on the project a bit late... My final exam took more of my time than I expected, but I passed it. And I needed to learn a brand new language: Vala. But now I can work at full speed on the project, and that is what I'm doing :).

The progress.

So, we are able to receive notifications over TCP, and those can be dismissed from the desktop side back to the Android side. We are also able to trigger a notification's actions and send SMS from nuntius. But we lack a good GUI for now. We also discovered a bug in the Vala code related to string arrays, which is now fixed.

My plan for the future.

I found some limitations of the GLib notification API:

  • We don't have a way to know that a notification was closed on the Linux side (bugzilla). We need it so we can also dismiss the notification on the phone when it's dismissed on the desktop.
  • There is no way to display an inline reply entry. A possible workaround is to develop a gnome-shell extension in order to add a text entry to a notification.

I reworked my agenda a bit together with the nuntius team.

Date Goal
29/06 Read/write sms from the desktop
20/07 Secure Lan Communication
3/08 Allow multiple devices to connect to one computer
10/08 Enhance graphics, Debug, code enhancement, testing. (Be ready for stable release)
24/08 Extra features (Answer call directly from your Computer)

Hello GNOME!

Who am I?

Hello! I am Ronan Timinello, a French student. I am currently in the first year of a DUT in Computer Science at the IUT of Montpellier.

My agenda for the GSoC.

Here is my agenda for the GSoC. I am working on nuntius, a cool app that lets you easily interact with your phone from your PC.

Date My goal
8/06 Transifex integration - Reply to notifications through actions
22/06 Fully working encrypted TCP over TLS with easy trust of certificates
6/07 Read/write sms from the desktop
20/07 Share files between the device easily. (contextual menu in file browser)
3/08 Allow multiple devices to connect to one computer
10/08 Enhance graphics, Debug, code enhancement, testing. (Be ready for stable release)
24/08 Extra features (Answer call directly from your Computer)

My current work.

I integrated a blacklist of apps whose notifications you don't want bothering you on the PC. I also added material design to the app and fixed some small bugs.

Crop Complete

Multi-monitor download

Yay! The cropping widget now crops the wallpaper into multiple segments of desirable dimensions. It would be cumbersome to crop high-resolution, let alone multi-monitor, wallpapers in their full glory, so the wallpaper is fitted inside the browser and all the overlays created are scaled down proportionately. To achieve this we could have obtained the wallpaper’s actual dimensions by recreating the image without applying CSS, but we already have that information in the wallpaper’s filename, so the image’s dimensions & file type are extracted using a regex.

Cropped Wallpaper (left)

For the cropping function, a canvas is created and resized to the dimensions of the screens, proportional to the size of the respective overlays. We then draw a copy of the wallpaper at full resolution onto the canvas at the position marked by the respective overlay.

Cropped Wallpaper (right)

I added buttons to resize all the overlays simultaneously while preserving their aspect ratios, along with several UI tweaks. Lastly, I also completed the JavaScript track on Codecademy.

June 24, 2015

GSoC 2015 Report #2 - Internationalization support for GnomeKeysign

Because the replacement of the GPG wrapper is taking longer than I expected, I have done something that will surely bring GnomeKeysign one step closer to becoming GNOME-ready - I have added support for internationalization and localization.

I had no knowledge about this, so I started by reading this tutorial, which is more tailored to C projects that use autotools.
GnomeKeysign is a pure Python project and it uses distutils/setuptools for building and installing.

I will explain step by step how to add support for translations in a Python application but first I will talk a little about building & packaging in Python.

"Real artists ship. Or so says Steve Jobs." (taken from

It seemed reasonable that Python should have a packaging framework, hence distutils.
Distutils is many things: a build tool (for you), an installation tool (for your users), a package metadata format (for search engines), and more. It integrates with the Python Package Index (“PyPI”), a central repository for open source Python libraries.

A good project layout could be the following:

| -- package
|    | --
|    | --
|    `-- subpackage
|         | --
|         | --
| -- runner
| --
| -- [setup.cfg]
| --

If you want to know more about what every file does, check the Python Packaging User Guide.
Already having a good project structure, I could easily add support for internationalization (i18n).

To add i18n support we need to accomplish these tasks:
  1. Configure the function that translates the strings
  2. Mark the strings in the code with that function
  3. Generate a template for the translators (the .pot file)
  4. Add translations
  5. Include the translations in the installation

Configure the '_()' macro

The Python gettext module allows you to mark strings in source code, extract those strings for translation, and use the translated strings in your application. It is common practice to define macros which are shorter wrappers, like the '_()' macro that replaces a gettext() call:

import gettext
# fallback=True avoids an error when no translation is installed
# for the current locale.
t = gettext.translation("gnome-keysign", PATH_TO_MO_FILES, fallback=True)
_ = t.ugettext  # under Python 3, use t.gettext instead

Mark strings for i18n

Search the source code files for strings that are visible in the GUI (window titles, labels, button labels, messages, and so on) and wrap them inside the '_()' function:

print _("Getting ready for i18n.")

Generate template for translators

Using a command-line tool, you generate a ".pot" file from the source code. This file contains all the strings that need translation in the project. We only need to generate it once, or whenever the strings in the code change. The file is generated inside the po/ folder and the command used is
xgettext --language=Python --keyword=_ --output=po/gnome-keysign.pot `find . -name "*.py"`

Add translations

Then, from the ".pot" file , using another command line tool we will generate the ".po" files for each language. These are the files that translators need to complete. Inside the po/ folder we use this command
msginit --input=PROJECTNAME.pot --locale=LOCALE

But the running program doesn't use .po files directly; it uses a binary (compiled) version of them: the .mo files. These .mo files must be re-created at build time (when creating the package) and installed in the right location on the system.

Include translations into the project installation

The file is where various aspects of the project are configured. Alongside the general setup arguments there is a `cmdclass` argument where we put code that performs custom actions such as code generation, running tests, building translation files, etc.

I implemented a custom command derived from distutils.cmd.Command which generates the '.mo' files at build time and, at install time, includes the '.mo' files in the data_files list. In this way we include the translations by overriding the default 'build' and 'install' commands.

Now I can translate GnomeKeysign and run it in my language :-).

That's all for now, I will be posting again after I'm done with the GPG replacement part.


Summary of work from June 9th to June 22nd

In the past two weeks I've finished my planned work before midterm. Here are the details:

To bring the GUI configuration of the EAS module back into use, several things were done: the basic configuration and collection code for configuration, the EAS server auto-detect aspect, and some updates to deprecated GTK+ usage in our project. Some steps are similar to the EWS module, so I borrowed some code from EWS.

Since I will have several final exams in the next few weeks, the remaining part of my project will be covered a little bit later, maybe starting in mid-July.

Notes: future plans

This is the second in a series of posts about recent design work for GNOME’s core applications. As I said in my previous post, the designs for many of these applications have evolved considerably, and we have major plans for them. Help is needed if these plans are going to become a reality though, so we are looking for contributors to get involved.

In this post, I’m going to focus on the Notes app, which is also known as Bijiben. (For those who don’t know, Bijiben – 筆記本 / 笔记本 – translates to “Notebook”.) This application is maintained by Pierre-Yves Luyten, and is written in C (although Pierre-Yves has expressed an interest in porting the UI to JavaScript).

Where we are

The Notes app aims to be a simple and effective note-taking application. It has the basic features you’d expect from a notes app, such as formatting, search and a trash bin. You can also organise your notes into separate notebooks.

It also has some other features that you might not be aware of, such as the ability to import notes from Tomboy (or Gnote), and integration with ownCloud and memos (as provided by Evolution), so you can keep your notes online.

Most of the updates that we have planned for Notes are designed to polish the existing UI, so it looks and feels really nice. There are also some bigger features planned though.

More online storage

Being able to store your notes in the cloud is a major goal for Notes, and Pierre-Yves wants to expand the number of options for online storage, perhaps through IMAP.

As much as possible, we want cloud storage to become the default option for Notes. As a result, we’re imagining that it might want to ask you where you want to keep your notes when it is run for the first time…

Notes Setup

We also have mockups for migrating notes between online accounts.

Improved grid and list views

It’s important for a notes app to give you a good overview of your notes, so that finding items is quick and painless. We have a number of plans in this area. For the notes grid, we want to show more content from each note, so you get a more useful preview. Date headings are another addition that will make the grid more meaningful and informative.

Notes Grid

The grid view designs are accompanied by a new treatment for the list view. This aims to be neater, more attractive, and easier to read.

Notes List

Better editing

We’ve also done a lot of work on the designs for viewing and editing notes. The overlaid editing controls that Notes currently uses have been a particular sore point, largely due to some technical limitations. As a result, we’ve decided to use an action bar for formatting – this will appear when you select text, and hide when it’s no longer needed.

There are a number of other significant changes that we want to make to the note view, such as using a fixed width layout, having bigger titles, and better use of typography.

Note Editing

Polish, polish, polish

Finally, as with Music, Notes needs a lot of polish. There are quite a lot of small bugs for which we have identified solutions; the overall experience will improve a lot if we can resolve these.

How to get involved

Bijiben’s Bugzilla product is in good shape, so it’s easy to find tasks to work on, and Pierre-Yves is happy to review patches. There’s also a wiki page with details about the application, including a small technical overview of the code. Notes could be a really fantastic application – just get in touch if you want to help out.