GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

October 31, 2014

Introduction to ICE and libnice

As part of the series of tea time talks we do within Collabora, I recently got to refresh my knowledge of STUN, TURN and ICE (the protocols for NAT traversal) and give an introductory talk on how they all fit together within the context of libnice.

Since the talk might be useful (and perhaps even interesting) to a wider audience, I’ve made it available: slides, handout and source (git). It’s under CC-BY-SA 4.0. Please leave comments if anything is unclear, incorrect, or could do with more in-depth coverage!

Recent improvements in libnice

For the past several months, Olivier Crête and I have been working on a project using libnice at Collabora, which is now coming to a close. Through the project we’ve managed to add a number of large, new features to libnice, and implement hundreds (no exaggeration) of cleanups and bug fixes. All of this work was done upstream, and is available in libnice 0.1.8, released recently! GLib has also gained a number of networking fixes, API additions and documentation improvements.

tl;dr: libnice now has a GIOStream implementation, support for scatter–gather send and receive, and more mature pseudo-TCP support — so its API should be much nicer to integrate; GLib has gained a number of fixes.

Firstly, what is libnice? It’s a GLib implementation of ICE, the standard protocol for NAT traversal. Briefly, NAT traversal is needed when two hosts want to communicate peer-to-peer in a network where there is at least one NAT translator between them, meaning that at least one of the hosts cannot directly address the other until a mapping is created in the NAT translator. This is a very common situation (due to the shortage of IPv4 addresses, and the consequence that most home routers act as NAT translators) and affects virtually all peer-to-peer communications. It’s well covered in the literature, and the rest of this post will assume a basic understanding of NAT and ICE, a topic about which I recently gave a talk.

Conceptually, libnice exists just to create a reliable (TCP-like) or unreliable (UDP-like) socket which connects your host with a remote one in a manner that traverses any intervening NATs. At its core, it is effectively an implementation of send(), recv(), and some ancillary functions to negotiate the ICE stream at startup time.
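
To make that concrete, here is a minimal (untested) sketch of that startup phase. The STUN server address here is a placeholder, and exchanging credentials and candidates with the remote peer is left to your own signalling layer:

GMainContext *main_context = g_main_context_default ();
NiceAgent *agent;
guint stream_id;

/* The agent runs all its internal timers and I/O from this context. */
agent = nice_agent_new (main_context, NICE_COMPATIBILITY_RFC5245);

/* Hypothetical STUN server; substitute your own. */
g_object_set (agent,
              "stun-server", "stun.example.net",
              "stun-server-port", 3478,
              NULL);

/* One stream with a single component. */
stream_id = nice_agent_add_stream (agent, 1);

/* Candidate gathering is asynchronous; the "candidate-gathering-done"
 * signal fires once the local candidates are ready to be sent to the
 * peer out of band. */
nice_agent_gather_candidates (agent, stream_id);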

The biggest change is the addition of nice_agent_get_io_stream(), and the GIOStream subclass it returns. This allows reliable ICE streams to be used via GIOStream, with all the API sugar which comes with GIO streams — for example, g_output_stream_splice(). Unreliable (UDP-like) ICE streams can’t be used this way because they’re not technically streams.
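
As a purely illustrative example of that sugar, splicing everything received on a reliable stream into a local file takes only a few lines; this assumes the agent already has a connected reliable stream 1/component 1, and the destination path is an arbitrary placeholder:

GIOStream *nice_stream;
GFile *file;
GFileOutputStream *file_ostream;
GError *error = NULL;

nice_stream = nice_agent_get_io_stream (agent, 1, 1);

/* Hypothetical destination file for the received data. */
file = g_file_new_for_path ("/tmp/ice-dump");
file_ostream = g_file_replace (file, NULL, FALSE,
                               G_FILE_CREATE_NONE, NULL, &error);

if (file_ostream != NULL) {
    /* Copy from the ICE stream into the file until end-of-stream. */
    g_output_stream_splice (G_OUTPUT_STREAM (file_ostream),
                            g_io_stream_get_input_stream (nice_stream),
                            G_OUTPUT_STREAM_SPLICE_CLOSE_TARGET,
                            NULL, &error);
}

Since g_output_stream_splice() blocks until end-of-stream, a real application would more likely use g_output_stream_splice_async().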

Highly related, the original receive API has been augmented with scatter–gather support in the form of a recvmmsg()-like API: nice_agent_recv_messages(). Along with appropriate improvements to libnice’s underlying socket implementations (the most obscure of which are still to be plumbed in), this allows performance improvements by batching messages, reducing the number of system calls needed for communication. Furthermore (perhaps more importantly) it reduces memory copies when assembling and parsing packets, by allowing the packets to be split across multiple non-contiguous buffers. This is a well-studied and long-known performance technique in networking, and it’s nice that libnice now supports it.

So, if you have an ICE connection (stream 1 on the agent, with two components) exchanging packets with 20-byte headers and variable-length payloads, instead of:

nice_agent_attach_recv (agent, 1, 1, main_context, recv_cb, NULL);
nice_agent_attach_recv (agent, 1, 2, main_context, recv_cb, NULL);

…

static void
recv_cb (NiceAgent *agent, guint stream_id, guint component_id,
         guint len, const gchar *buf, gpointer user_data)
{
    if (stream_id != 1 ||
        (component_id != 1 && component_id != 2)) {
        g_assert_not_reached ();
    }

    if (parse_header (buf)) {
        if (component_id == 1)
            parse_component1_data (buf + 20, len - 20);
        else
            parse_component2_data (buf + 20, len - 20);
    }
}

…

static void
send_to_component (guint component_id,
                   const gchar *data_buf, gsize data_len)
{
    gsize len = 20 + data_len;
    guint8 *buf = g_malloc (len);

    build_header (buf);
    memcpy (buf + 20, data_buf, data_len);

    if (nice_agent_send (agent, 1, component_id,
                         len, (const gchar *) buf) != len) {
        /* Handle the error */
    }

    g_free (buf);

you can now do:

/* Only set up 1 NiceInputMessage as an illustration. */

static guint8 buf1_1[20];  /* header */
static guint8 buf1_2[1024];  /* payload size limit */
static GInputVector buffers1[2] = {
    { &buf1_1, sizeof (buf1_1) },  /* header */
    { &buf1_2, sizeof (buf1_2) },  /* payload */
};
static NiceInputMessage messages[1] = {
    { buffers1, G_N_ELEMENTS (buffers1), NULL, 0 },
};
GError *error = NULL;
gint n_messages;
guint i;

n_messages = nice_agent_recv_messages (agent, 1, 1, messages,
                                        G_N_ELEMENTS (messages),
                                        NULL, &error);
if (n_messages == 0 || error != NULL) {
    /* Handle the EOS or error. */
    if (error != NULL)
        g_error ("Error: %s", error->message);
    return;
}

/* Component 2 can be handled similarly and code paths combined. */
for (i = 0; i < n_messages; i++) {
    NiceInputMessage *message = &messages[i];

    if (parse_header (message->buffers[0].buffer)) {
        parse_component1_data (message->buffers[1].buffer,
                               message->buffers[1].size);
    }
}

…

static void
send_to_component (guint component_id, const gchar *data_buf,
                   gsize data_len)
{
    GError *error = NULL;
    guint8 header_buf[20];
    GOutputVector vec[2] = {
        { header_buf, sizeof (header_buf) },
        { data_buf, data_len },
    };
    NiceOutputMessage message = { vec, G_N_ELEMENTS (vec) };

    build_header (header_buf);

    if (nice_agent_send_messages_nonblocking (agent, 1, component_id,
                                              &message, 1, NULL,
                                              &error) != 1) {
        /* Handle the error */
        g_error ("Error: %s", error->message);
    }
}

libnice has also gained non-blocking variants of its I/O functions. Previously, one had to explicitly attach a libnice stream to a GMainContext to start receiving packets. Packets would be delivered individually via a callback function (set with nice_agent_attach_recv()), which was inefficient and made for awkward control flow. Now, the non-blocking I/O functions can be used with a custom GSource from g_pollable_input_stream_create_source() to allow for more flexible reception of packets using the more standard GLib pattern of attaching a GSource to the GMainContext and in its callback, calling g_pollable_input_stream_read_nonblocking() until all pending packets have been read. libnice’s internal timers (used for retransmit timeouts, etc.) are automatically added to the GMainContext passed into nice_agent_new() at construction time, which you must run all the time as before.

GIOStream *stream = nice_agent_get_io_stream (agent, 1, 1);
GInputStream *istream;
GPollableInputStream *pollable_istream;
GSource *source;

istream = g_io_stream_get_input_stream (stream);
pollable_istream = G_POLLABLE_INPUT_STREAM (istream);

source = g_pollable_input_stream_create_source (pollable_istream, NULL);
g_source_set_callback (source, (GSourceFunc) readable_cb, pollable_istream, NULL);
g_source_attach (source, main_context);

static gboolean
readable_cb (gpointer user_data)
{
    GPollableInputStream *pollable_istream = user_data;
    GError *error = NULL;
    gssize len;
    guint8 buf[1024];  /* whatever the maximum packet size is */

    /* Read packets until the queue is empty. */
    while ((len = g_pollable_input_stream_read_nonblocking (pollable_istream,
                                                            buf, sizeof (buf),
                                                            NULL,
                                                            &error)) > 0) {
        /* Do something with the received packet. */
    }

    if (error != NULL &&
        !g_error_matches (error, G_IO_ERROR, G_IO_ERROR_WOULD_BLOCK)) {
        /* Handle the error. */
    }

    g_clear_error (&error);

    return G_SOURCE_CONTINUE;
}

libnice also gained much-improved support for restarting individual streams using ICE restarts with the addition of nice_agent_restart_stream(), switching TURN relays with nice_agent_forget_relays(), plus a number of bug fixes.
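
As a rough sketch of how those two calls might be combined (relay_seems_dead and stream_id are placeholders for application-specific state):

/* If connectivity through the current TURN relay looks dead, drop the
 * relay allocations for component 1 and restart ICE on the stream so a
 * fresh round of connectivity checks runs with new credentials. */
if (relay_seems_dead) {
    nice_agent_forget_relays (agent, stream_id, 1);
    nice_agent_restart_stream (agent, stream_id);
}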

Finally, FIN/ACK support has been added to libnice’s pseudo-TCP implementation. The code was originally based on Google’s libjingle pseudo-TCP, establishing a reliable connection over UDP by encapsulating TCP-like packets within UDP. This implemented the basics of TCP, but left things like the closing FIN/ACK handshake to higher-level protocols. Fine for Google, but not for our use case, so we added support for that. Furthermore, we needed to layer TLS over a pseudo-TCP connection using GTlsConnection, which required implementing half-duplex close support and fixing a few nasty leaks in GTlsConnection.
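
To give a feel for what that layering looks like, here's an untested sketch using standard GIO calls on top of the reliable ICE stream; the peer identity is a placeholder and certificate checking is left at the defaults:

GIOStream *base_stream, *tls_stream;
GSocketConnectable *identity;
GError *error = NULL;

base_stream = nice_agent_get_io_stream (agent, 1, 1);

/* Hypothetical identity of the peer we expect to be talking to. */
identity = g_network_address_new ("peer.example.net", 443);

/* Layer TLS (as a client) on top of the pseudo-TCP stream. */
tls_stream = G_IO_STREAM (g_tls_client_connection_new (base_stream,
                                                       identity, &error));
if (tls_stream == NULL) {
    /* Handle the error. */
} else if (!g_tls_connection_handshake (G_TLS_CONNECTION (tls_stream),
                                        NULL, &error)) {
    /* Handle the handshake error. */
}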

Thanks to the libnice community for testing out the changes, and thanks to the GLib developers for patiently reviewing the stream of tiny documentation fixes and several larger GLib patches! All of the libnice API changes are shown on the handy upstream-tracker.org tool.

Hill hacks report

On the way to Dharamshala from Delhi, I caught sight of an untouched night sky, in half sleep, and woke up shocked. The rest of the journey lit up the sky slowly in lines and layers, and the stars disappeared one after the other, quietly and slowly, into the sun. On reaching Dharamshala, after walking aimlessly for a while trying to find a shortcut to The Country Lodge, we were guided by a very nice old man who took us into his office space, and showed us the way down from his balcony — a long maze of stairs and pathways some of which encroached into the fronts of people’s houses and ran on the sides of their walls, precarious and so inviting to my city feet. The cloudy sky replaced my umbrella, and grandparents of strangers on terraces took the place of shopkeepers in the city, ready with directions.

On Day 1, we watched lovely Tink teach the children from Rakkar to hula-hoop, among other circus tricks, after discussing a little bit of user interface design in the morning. I tried to get some honest end user feedback on the application we had been building in office. All my beta testers (usually tricked into believing they were only playing) so far had been the children my mother teaches at home, or the children who play with the cats at office, all generally fluent in English, who would complain about the game being a little boring or say that a certain picture was pretty. The children from Rakkar figured their way around the game but the English content was a barrier. They enjoyed playing in groups, as long as each of them got to operate the computer at some point, as opposed to the children back at home, who liked having the computer to themselves. They all unanimously loved everything, and when I asked them if it would have been nicer to have the game in Hindi, almost all of them disagreed, which was interesting to think about. My laptop went through a fashion makeover, with the children wanting to cover it with every sticker around. We stuck together Nitin’s broken 3-D printed name in the afternoon, after making some light-sensitive electronic lamps for Diwali, all the while being shot by David, the paparazzi. Then it was time for open night, with some singing, some flute and the guitar. The fire dance was a big hit, with an appropriate choice of song – Fireflies by Owl City. I made some Tamil friends, which is always nice and makes me cheekier than I am, especially if one of them happens to be Krishnan. This was also when I met warm and cheeky Sva, whom I have learned to adore quite effortlessly.

Day 2 brought the TCV students to The Country Lodge, and Tenzin Tsundue, who is wonderfully eloquent in Tamil. I have only had brief encounters with the Tibetan culture before. My earliest memory runs back to a visit to some temple in South India or North India, I am not sure, where my grandmother struck up a conversation with a group of Tibetan women, and they all generally agreed about water problems being prevalent everywhere. After that, barring the brief visit to Bylakuppe 2 years back, it is not something I have actively thought about. Tenzin’s talk brought back memories of history lessons, learning about colonialism, trying very hard to understand if the thirst for power was only felt by a few. He also mentioned that Tibet’s rich Lithium resources were being used in electronic goods made in China, and sold to the world keeping the prices of electronic goods significantly lower everywhere. Then it was time for my Scratch workshop for the TCV students after a small blooper, when I lost the pictures of deer drawn by Poulami because I hadn’t copied them into my computer, and she, assuming that I would have copied them, formatted the all-important flash drive containing the pictures. Arun came to the rescue and drew me some lovely deer frames after calling me “very demanding” when I bothered him about the deer being monochromatic. I wanted the children to create a small animation film based on a traditional tale from Meghalaya. Personally, I wanted to observe the amount of scaffolding that such a workshop would require and I was pleasantly surprised to see that it needed almost none. During the course of the 2.5-hour workshop, the children worked independently, in pairs, and enjoyed the whole process. They were particularly focussed and needed little or no help from me. I saw first-hand Papert’s theory of making children learn algorithmic thinking while they were trying to create something that was important to them personally.

I tried making friends with a long-legged insect after the workshop, which was particularly interested in eating something on my nails, I wonder what. We tried distracting it and getting it off my finger, but it seemed quite engrossed with the supply of exotic food. It was alright, though slightly strange, till it decided to nip me nicely, at which point I shook it off rather violently. I must say here, that spiders are the nicest long-legged insects I have made friends with, till date; not one was interested in biting my finger nails, and they invariably wanted to jump off my fingers within 3 minutes. Shreyas reckoned that the general rule governing friendships between insects and programmers would be – “We keep the bugs with us, until they bite.” I don’t remember much of the rest of the day except for screen printing some shirts with Lucy and Andreas, and talking to people about the Outreach Program for Women, from GNOME which was something I had to do conversationally since the conference had only a few women participants. Hopefully OPW and Hill Hacks will get many more female participants over the next few rounds. (History project)

Day 3 – It snowed in the mountains. The children from Rakkar came back to visit and worked on their puppet show. A couple of them tried to sneak away my laptop from me, and after feigning ignorance for a while, I let them have it. I attended a bunch of flash talks and spoke about my work with Gcompris. Yuvi and I worked quickly on a syntax highlighting file for writers working with Dolch words, with Krishnan present for moral support and the wisecracks. We all then watched a bunch of documentaries made by Mr. Lo before heading to the rooms for DemoScene. All of us bunched up on rugs and cushions, we stayed up for a while watching the brilliant graphics generated with unsophisticated machines. The 3D printer was also a source of attraction, and I tried unsuccessfully to create a pair of earrings before being herded away for a session on MozDef.

Day 4 brought more flash talks and a lovely presentation by Monam La, a monk who has worked to create several Tibetan fonts and is working on a Tibetan dictionary. The Tibetan keyboard for Linux systems needs work, so if any of you are interested, give a shout on the hillhacks mailing list. Arun’s talk on improving digital literacy in rural India was also very inspiring. At night, all of us sat bunched up near the fire, talking about trivialities, while I was enjoying the smell of woodsmoke on my clothes. I learned to spot Gemini, after finding Orion and The Big Dipper quite easily, a split second before Sva, Kunaarak and Yuvi decided to slump on me at once, to warm me up. Heh.

It was time to pack up on the 27th. The large maroon tent sewn together by Rajesh’s father had to come down. Tink mentioned how it took much less time to pull down the tent than to put it up, which we agreed was true of all things. We found at least 8 caterpillar pupae on the seams of the tent. There were 3 casualties which had been unwittingly stamped on or crushed under the weight of the tent. We collected all of them and placed them horizontally under a tile that quite appropriately said “HOTEL”. I wanted to try out this exercise (Link to Madeeha’s video) but after everything had been packed up and sent in an auto to Rakkar, Sva said that maybe it wouldn’t be possible to do that, and that I should just get used to the idea of them being eaten by a crow. When I asked Priya if horizontal pupae can metamorphose, considering that there is much less area from which they can push out their wings, she said that it was likely. She also said that caterpillar pupae are hard to eat, so maybe all hope is not lost. Maybe someone at The Country Lodge is in fact carrying out the video exercise right now.

This was also the day when I had my first outing. I had been nursing a mysteriously swollen ankle from the first day of the conference which finally didn’t feel so swollen, so I decided to go to McLeod’s and purchase some trinkets. I ruined Santosh’s bargaining exercise for him, unfortunately, which he didn’t think was as funny as I did. We walked around the most crowded places in Dharamshala, and visited two small cafes for coffee and cake and some vegetarian Japanese food. Alex, Yogesh and I then took a taxi for Ghumakkad while Santosh impulsively decided to go up to Triund with Lucy, Andreas and gang, whom we met on the way and said our goodbyes to. The house in Ghumakkad is beautiful. I sat down with a copy of The Little Prince, and read my favourite parts. Ayush led about four of us through a trail in the mountains to the most beautiful spot that I had seen. The Ghumakkad stream, whose gurgling is a perpetual sound no matter where you go in the village, seemed to belong just to me, as it must have seemed to the rest of us. The mixture of the outing, the night, the thought of leaving and a mix of all my other thoughts left me a little dazed and quiet for the next couple of days. The mountains can affect you in strange ways, make you tender and more aware of your thoughts. I must say that Ayush was quite nonchalant about the dark and the fact that we were all unused to the paths of Ghumakkad, and for most of the way we walked without a torch. Yuvi and I then walked with Arun to Priya, Soujanya and Arun’s beautiful house. We saw a flash of The Big Bat’s cousin that lives in the area, and walked quietly for the rest of the way to Rakkar.

I woke up on the 28th and decided to trace my path back to Ghumakkad with Ms. Sheila and Mr. Murphy, the two dogs that live in their house. Quite expectedly, I started off in the wrong direction and wandered around for about an hour, lost and with two dogs who were thoroughly happy. When you notice the mountains on these walks, it feels like they want to embrace you, calling your attention always, even when you are trying hard to walk on the right trails hoping they lead to the right place. Their silhouette is hazy in the fog, but their outline always assures you that they are sure of their place in your day. I also noticed how quiet I had become, wanting to speak little and speak sensibly, and tread softly. I did not want to disturb the slumber of the mountains. I finally managed to trace my way back to some houses and asked for directions to the dogs’ house, and made my way back to bed, and fell asleep, warmed by the walk. When I next woke up, it was to the sight of the wonderfully lively Soujanya combing her lovely hair, and talking about the pet parakeets they raised in the house. They also mentioned something about mountain birds that often flew through the gap between the roof and the walls. We discussed the absence of sparrows in the cities, the gentleness you can find in old Bangalore, and growing chillies among other things in their garden. We also watched some yellow-tailed birds that flew in a flock and an owl that dived down and disappeared. It reminded me of the time my father and I made a small bird feeder during the summer, and we only had a bunch of bees come every day to drink from the feeder. After tea, I went up to Norbulingka with Manish and lunched there at Queenies on momos and soup. We struck up a nice conversation with a woman from Mexico who was curious about why South Indians spoke to each other in English. We walked back to the house through another trail in the fields to drop off some fruits before returning to Ghumakkad. We located the post office to send some postcards to my grandmother and some friends, but it was closed, so I left them behind. Then, we rushed about hugging everyone hastily, before leaving for Delhi by a very cold bus.

It is 29th today, and I have been rather quiet and thoughtful, wondering if it would be normal to take the next bus back to Ghumakkad as opposed to going to Chennai to write new exams. Wondering about the concept of home, the woods, all the while talking about something funny that Benthor said, or something silly that someone did, trying very hard to not forget. Sitting on the railway station’s platform for 3 hours, listening to recordings from the conference, till the train to Bangalore arrives.

Thanks to the Hill Hacks team for sponsoring my travel. I was a little short on cash after paying for my expensive exams, and it was really nice of you to fund my travel money. :)


Software Freedom Day 2014 Phnom Penh

The Digital Freedom Foundation is organizing our Software Freedom Day event in Phnom Penh together with the National Institute of Posts, Telecommunications and ICT and the Ministry of Posts and Telecommunications on November 1st at the NIPTICT Building. There will be 14 talks (9 in Khmer and 5 in English), with topics covering free and open source software ranging from operating systems and learning platforms to website development, resource maps, servers and security. Here is the detailed schedule and speaker profiles.

We expect more than a hundred people to attend and aim to target both the university audience and the young workforce. On top of presentations and workshops, we (assisted by various communities) will be holding booths (e.g. Moodle, Mozilla, RouterOS, Ubuntu and Blender) to allow for more individual discussions. All in all it’s been a joy preparing for this event, allowing us to talk and plan resources with people from different local communities such as OpenSourceCambodia and Smallworld Cambodia.

The event starts at 1:30pm tomorrow; if you happen to be in Phnom Penh, please do drop by!


Upgrading your application from Django 1.6 to Django 1.7 and deploying onto OpenShift

I was using Django 1.6.5 and Python 2.6 in my test environment and I wanted to deploy my application on OpenShift. OpenShift has a nice UI for application creation. However, with the web UI, OpenShift selects Django 1.7 and Python 2.6, and one can't select other versions of Python.
After deployment I got the following error messages in the logs:
 [Thu Oct 30 04:36:39 2014] [notice] Apache/2.2.15 (Unix) mod_wsgi/3.2 Python/2.6.6 configured -- resuming normal operations 
[Thu Oct 30 04:36:56 2014] [error] [client 127.2.185.129] mod_wsgi (pid=336): Target WSGI script '/var/lib/openshift/5451f70325535da2dd0000fa/app-root/runtime/repo/wsgi/application' cannot be loaded as Python module. 
[Thu Oct 30 04:36:56 2014] [error] [client 127.2.185.129] mod_wsgi (pid=336): Exception occurred processing WSGI script '/var/lib/openshift/5451f70325535da2dd0000fa/app-root/runtime/repo/wsgi/application'. 
[Thu Oct 30 04:36:56 2014] [error] [client 127.2.185.129] Traceback (most recent call last):
         [Thu Oct 30 04:36:56 2014] [error] [client 127.2.185.129]      TEST_SETTING_RENAMES_REVERSE = {v: k for k, v in TEST_SETTING_RENAMES.items()}
         

I found this bug, which explains that the above error happens because Django 1.7 is not supported on Python 2.6. So, to create an application with Django 1.7 and Python 2.7, one needs to use OpenShift's command line client, "rhc".

To create an application using rhc, one needs to run the following commands:

$ rhc app create -a myapp -t python-2.7
$ cd  myapp
$ git init
$ git remote add upstream -m master git://url
$ git pull -s recursive -X theirs upstream master

However, I got SSL errors and couldn't connect to the OpenShift server. One way to fix these errors is by adding
ssl_version=tlsv1
to the .openshift/express.conf file.

My application uses WSGI scripts; for Django 1.7 applications, one needs to use the following API in the WSGI script:

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

otherwise you will get an AppRegistryNotReady exception.

For Django 1.7 with the django-registration app, I got the following error:

raise RuntimeError("App registry isn't ready yet.")
RuntimeError: App registry isn't ready yet.

The above error is caused because django-registration calls get_user_model() in models.py.
To fix this error, I replaced

user = get_user_model()

with

user = models.ForeignKey(settings.AUTH_USER_MODEL)



October 30, 2014

Temporenc, comprehensive binary encoding format for dates and times

Today I published a new side project of mine: temporenc.

Temporenc is a comprehensive binary encoding format for dates and times that is flexible, compact, and machine-friendly. I’ve put up a website for the project at temporenc.org.

Please give it a look and consider using it in your libraries and applications! If you like, you can discuss it on GitHub or Hacker News.

Hacker News metrics (first rough approach)

I'm not a huge fan of Hacker News[1]. My impression continues to be that it ends up promoting stories that align with the Silicon Valley narrative of meritocracy, technology will fix everything, regulation is the cancer killing agile startups, and discouraging stories that suggest that the world of technology is, broadly speaking, awful and we should all be ashamed of ourselves.

But as a good data-driven person[2], wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:
  1. If the story is younger than any other story
  2. and the story has a higher score than that other story
  3. and the story has a worse ranking than that other story
  4. and at least one of these two stories is on the front page
then the story is considered to have been penalised.

(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)
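
For illustration, the core of that pairwise test can be sketched like this (this is not the actual scraping script, and the struct layout is an assumption):

#include <stdbool.h>
#include <stddef.h>

/* One story as seen in a single hourly snapshot (hypothetical layout). */
struct story {
    int age_minutes;
    int score;
    int rank;            /* 1 is the top of the front page */
    bool on_front_page;
};

/* Returns true if story i looks penalised relative to any other story:
 * younger, higher-scoring, yet ranked worse, with at least one of the
 * pair on the front page. */
static bool
is_penalised (const struct story *stories, size_t n, size_t i)
{
    for (size_t j = 0; j < n; j++) {
        if (j == i)
            continue;
        if (stories[i].age_minutes < stories[j].age_minutes &&
            stories[i].score > stories[j].score &&
            stories[i].rank > stories[j].rank &&
            (stories[i].on_front_page || stories[j].on_front_page))
            return true;
    }
    return false;
}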

Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see [2]) and so just tried some keyword analysis:
Keyword    Penalised   Unpenalised
Women      13          4
Harass     2           0
Female     5           1
Intel      2           3
x86        3           4
ARM        3           4
Airplane   1           2
Startup    46          26

A few things to note:
  1. Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
  2. Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters
  3. There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.

This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories[3], but I'm not sure how practical that is with the publicly available data.

(Raw data is here, penalised stories are here, unpenalised stories are here)


[1] Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
[2] Ha ha like fuck my PhD's in biology
[3] Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?


appdata-tools is dead

PSA: If you’re using appdata-validate, please switch to appstream-util validate from the appstream-glib project. If you’re also using the M4 macro, just replace APPDATA_XML with APPSTREAM_XML. I’ll ship both the old binary and the old m4 file in appstream-glib for a little bit, but I’ll probably remove them again the next time we bump ABI. That is all. :)

2014 GStreamer Conference

I’ve been home from Europe over a week, after heading to Germany for the annual GStreamer conference and Linuxcon Europe.

We had a really great turnout for the GStreamer conference this year


as well as an amazing schedule of talks. All the talks were recorded by Ubicast, who got all the videos edited and uploaded in record time. The whole conference is available for viewing at http://gstconf.ubicast.tv/channels/#gstreamer-conference-2014

I gave one of the last talks of the schedule – about my current work adding support for describing and handling stereoscopic (3D) video. That support should land upstream sometime in the next month or two, so more on that in a bit.


There were too many great talks to mention them individually, but I was excited by 3 strong themes across the talks:

  • WebRTC/HTML5/Web Streaming support
  • Improving performance and reducing resource usage
  • Building better development and debugging tools

I’m looking forward to us collectively making progress on all those things and more in the upcoming year.

On joining the FSF board

I joined the board of directors of the Free Software Foundation a couple of weeks ago. I've been travelling a bunch since then, so haven't really had time to write about it. But since I'm currently waiting for a test job to finish, why not?

It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.

Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.

However.

Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.

That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.

So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.

People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to be asked to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.


October 29, 2014

Update on Fedora and Asus TransformerBook TP500LN

A quick update on my new ultrabook running Fedora:
  • After watching kernel development closely to see whether anything related to the built-in touchpad would land, and nothing did, I decided to try some workarounds. If it can't work as a touchpad, at least it should work as a mouse. This can be accomplished by adding psmouse.proto=imps to the kernel parameters. The worst part is that there's neither two-finger scrolling nor edge scrolling, but I can live with that, as I also have a wireless mouse.
  • Unfortunately I couldn't do anything with the wireless card: I downloaded the kernel driver for the 3.13 and 3.14 kernels and changed the source to work with the 3.17 kernel (the one in Fedora Workstation dailies), but unfortunately it fails to connect to my WPA-PSK2 network. So, until I get a mini PCIe wifi card with an Intel or Atheros chip (which is confirmed to have proper Linux support), I will use the laptop with a USB WLAN interface.
  • Optimus graphics card switching still didn't seem trivial to install and set up properly. However, I don't need more than the Intel graphics, so I just wanted to switch the NVIDIA card off completely. So I installed bumblebee and bbswitch based on the instructions on the Fedora wiki, and turned the discrete card off.
  • Battery usage is at about 8W; estimated usage on battery is 7.5 hours with standard internet browsing on a standard 9-cell battery, so I'm pretty satisfied with that.
  • I have formatted both the 24 GB SSD and the 1.5 TB HDD (cleaned up from sh*t like Windows and the McAfee 30-day trial), and installed Fedora 21 with a custom partitioning layout.
All in all, at last I have a mostly working laptop (there's room for improvement, though) with a battery life above six hours with constant browsing, so I'm satisfied.

    Understanding Wikimedia, or, the Heavy Metal Umlaut, one decade on

    It has been nearly a full decade since Jon Udell’s classic screencast about Wikipedia’s article on the Heavy Metal Umlaut (current text, Jan. 2005). In this post, written for Paul Jones’ “living and working online” class, I’d like to use the last decade’s changes to the article to illustrate some points about the modern Wikipedia.1

    Measuring change

    At the end of 2004, the article had been edited 294 times. As we approach the end of 2014, it has now been edited 1,908 times by 1,174 editors.2

    This graph shows the number of edits by year – the blue bar is the overall number of edits in each year; the dotted line is the overall length of the article (which has remained roughly constant since a large pruning of band examples in 2007).

    Edits-by-year

     

    The dropoff in edits is not unusual — it reflects both a mature article (there isn’t that much more you can write about metal umlauts!) and an overall slowing in edits in English Wikipedia (from a peak of about 300,000 edits/day in 2007 to about 150,000 edits/day now).3

    The overall edit count — 2000 edits, 1000 editors — can be hard to get your head around, especially if you write for a living. Implications include:

    • Style is hard. Getting this many authors on the same page, stylistically, is extremely difficult, and it shows in inconsistencies small and large. If not for the deeply acculturated Encyclopedic Style we all have in our heads, I suspect it would be borderline impossible.
    • Most people are good, most of the time. Something like 3% of edits are “reverted”; i.e., about 97% of edits are positive steps forward in some way, shape, or form, even if imperfect. This is, I think, perhaps the single most amazing fact to come out of the Wikimedia experiment. (We reflect and protect this behavior in one of our guidelines, where we recommend that all editors Assume Good Faith.)

    The name change, tools, and norms

    In December 2008, the article lost the “heavy” from its name and became, simply, “metal umlaut” (explanation, aka “edit summary“, highlighted in yellow):

    Name change

    A few takeaways:

    • Talk pages: The screencast explained one key tool for understanding a Wikipedia article – the page history. This edit summary makes reference to another key tool – the talk page. Every Wikipedia article has a talk page, where people can discuss the article, propose changes, etc.. In this case, this user discussed the change (in November) and then made the change in December. If you’re reporting on an article for some reason, make sure to dig into the talk page to fully understand what is going on.
    • Sources: The user justifies the name change by reference to sources. You’ll find little reference to them in 2005, but by 2008, finding an old source using a different term is now sufficient rationale to rename the entire page. Relatedly…
    • Footnotes: In 2008, there was talk of sources, but still no footnotes. (Compare the story about Motley Crue in Germany in 2005 and now.) The emphasis on footnotes (and the ubiquitous “citation needed”) was still a growing thing. In fact, when Jon did his screencast in January 2005, the standardized/much-parodied way of saying “citation needed” did not yet exist, and would not until June of that year! (It is now used in a quarter of a million English Wikipedia pages.) Of course, the requirement to add footnotes (and our baroque way of doing so) may also explain some of the decline in editing in the graphs above.

    Images, risk aversion, and boldness

    Another highly visible change is to the Motörhead art, which was removed in November 2011 and replaced with a Mötley Crüe image in September 2013. The addition and removal present quite a contrast. The removal is explained like this:

    remove File:Motorhead.jpg; no fair use rationale provided on the image description page as described at WP:NFCC content criteria 10c

    This is clear as mud, combining legal issues (“no fair use rationale”) with Wikipedian jargon (“WP:NFCC content criteria 10c”). To translate it: the editor felt that the “non-free content” rules (abbreviated WP:NFCC) prohibited copyright content unless there was a strong explanation of why the content might be permitted under fair use.

    This is both great, and sad: as a lawyer, I’m very happy that the community is pre-emptively trying to Do The Right Thing and take down content that could cause problems in the future. At the same time, it is sad that the editors involved did not try to provide the missing fair use rationale themselves. Worse, a rationale was added to the image shortly thereafter, but the image was never added back to the article.

    So where did the new image come from? Simply:

    boldly adding image to lead

    “boldly” here links to another core guideline: “be bold”. Because we can always undo mistakes, as the original screencast showed about spam, it is best, on balance, to move forward quickly. This is in stark contrast to traditional publishing, which has to live with printed mistakes for a long time and so places heavy emphasis on Getting It Right The First Time.

    In brief

    There are a few other changes worth pointing out, even in a necessarily brief summary like this one.

    • Wikipedia as a reference: At one point, in discussing whether or not to use the phrase “heavy metal umlaut” instead of “metal umlaut”, an editor makes the point that Google has many search results for “heavy metal umlaut”, and another editor points out that all of those search results refer to Wikipedia. In other words, unlike in 2005, Wikipedia is now so popular, and so widely referenced, that editors must be careful not to (indirectly) be citing Wikipedia itself as the source of a fact. This is a good problem to have—but a challenge for careful authors nevertheless.
    • Bots: Careful readers of the revision history will note edits by “ClueBot NG“. Vandalism of the sort noted by Jon Udell has not gone away, but it now is often removed even faster with the aid of software tools developed by volunteers. This is part of a general trend towards software-assisted editing of the encyclopedia.
    • Translations: The left hand side of the article shows that it is in something like 14 languages, including a few that use umlauts unironically. This is not useful for this article, but for more important topics, it is always interesting to compare the perspective of authors in different languages.

    Other thoughts?

    I look forward to discussing all of these with the class, and to any suggestions from more experienced Wikipedians for other lessons from this article that could be showcased, either in the class or (if I ever get to it) in a one-decade anniversary screencast. :)

    1. I still haven’t found a decent screencasting tool that I like, so I won’t do proper homage to the original—sorry Jon!
    2. Numbers courtesy X’s edit counter.
    3. It is important, when looking at Wikipedia statistics, to distinguish between stats about Wikipedia in English, and Wikipedia globally — numbers and trends will differ vastly between the two.

    October 28, 2014

    Useful (to me) post-commit hook script for checking commit message

    A lot of projects are managed using a bug-tracking tool (e.g. JIRA [1], Bugzilla [2], Redmine [3]), and most commits refer to a specific ticket. It's good to include the ticket number in a commit message - it allows other developers to find the bug report and read more about the problem a commit concerns.
    I don't know about you, but I often forget to put it in the commit message. So I've prepared a simple script which reminds me to include the ticket number in the message. I'm using this script as a post-commit hook both at work and in my private projects.
    I've pushed it to my GitHub repository [4] (I've also added simple instructions - you can find them in the README.md file). Feel free to use and improve it (I'm not a bash and git expert) - patches are welcome ;)

    REPO_REGEX parameter

    Some of my friends have asked me about the REPO_REGEX parameter. So let me clarify: at work I mostly use company repositories, where I should enter a ticket number. But I've also got a few open source repositories where I don't need to (and sometimes can't, because they aren't strongly connected to a bug-reporting service) include this number. To make my life easier, I've added this script as a global hook. REPO_REGEX allows me to define a pattern for company repositories, so I don't see the warning message when committing to non-company repositories.

    Links
    [1] https://www.atlassian.com/software/jira
    [2] http://www.bugzilla.org/
    [3] http://www.redmine.org/
    [4] https://github.com/loganek/postcommit-checker

    What’s new in Tracker 1.2?


    Reblogged from Lanedo GmbH. blog

    Every 6 months or so we produce a new stable release, and for Tracker 1.2 we had some exciting new work to introduce. For those that don’t know of Tracker, it is a semantic data storage and search engine for desktop and mobile devices. Tracker is a central repository of user information that provides two big benefits for the user: shared data between applications, and information which is relational to other information (for example: mixing contacts with files, locations, activities, etc.).

    Providing your own data

    Earlier in the year a client came to Lanedo and to the community asking for help on integrating Tracker into their embedded platforms. What did they want? Well, they wanted to take full advantage of the Tracker project’s feature set, but they also wanted to be able to use it on a bigger scale, not just for local files or content on removable USB keys. They wanted to be able to seamlessly query across all devices on a LAN and cloud content that was plugged into Tracker. This is not too dissimilar to the gnome-online-miners project, which has similar goals.

    The problem

    Before Tracker 1.2.0, files and folders came by way of a GFile and GFileInfo which were found using the GFileEnumerator API that GLib offers. Underneath all of this the GFile* relates to GLocalFile* classes which do the system calls (like lstat()) to crawl the file system.

    Why do we need this? Well, on top of TrackerCrawler (which calls the GLib API) are TrackerFileNotifier and TrackerFileSystem; these essentially report content up the stack (and ignore other content depending on rules). The rules come from a TrackerIndexingTree class which knows what to blacklist and what to whitelist. On top of all of this is TrackerMinerFS, which (now inaccurately named) handles queues and processing of ALL content. For example, DELETED event queues are handled before CREATED event queues. It also gives status updates, handles INSERT retries when the system is busy, and so on.

    To make sure that we take advantage of existing technology and process information correctly, we have to plug in at the level of the TrackerCrawler class.

    The solution

    Essentially we have a simple interface for handling open and close cases for iterating a container (or directory) called TrackerDataProvider interface (and TrackerFileDataProvider implementation for the default or existing local file system case).

    That is followed up with an enumerator interface for enumerating that container (or directory). That is called TrackerEnumerator and of course there is a TrackerFileEnumerator class to implement the previous functionality that existed.

    So why not just implement our own GFile backend and make use of existing interfaces in GLib? Actually, I did look into this but the work involved seemed much larger and I was conscious of breaking existing use cases of GFile in other classes in libtracker-miner.

    How do I use it?

    So now it’s possible to provide your own data provider implementation for a cloud based solution to feed Tracker. But what are the minimum requirements? Well, Tracker requires a few things to function: those include providing a real GFile and GFileInfo with an adequate name and mtime. The libtracker-miner framework requires the mtime for checking if there have been updates compared to the database. The TrackerDataProvider based implementation is given as an argument when creating the TrackerMiner object and is called by the TrackerCrawler class when indexing starts. The locations that will be indexed by the TrackerDataProvider are given to the TrackerIndexingTree, and you can use the TRACKER_DIRECTORY_FLAG_NO_STAT flag for non-local content.
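
    As a very rough sketch of that last step (the surrounding miner and data provider setup is omitted, and the exact arguments to tracker_indexing_tree_add() here are an assumption based on the description above):

    /* Register a non-local root with the indexing tree; NO_STAT because
     * lstat() makes no sense for remote content (assumed usage). */
    GFile *root = g_file_new_for_uri ("cloud://example/share");

    tracker_indexing_tree_add (indexing_tree, root,
                               TRACKER_DIRECTORY_FLAG_RECURSE |
                               TRACKER_DIRECTORY_FLAG_NO_STAT);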

    Crash aware Extractor

    In Tracker 1.0.0, the Extractor (the ‘tracker-extract’ process) used to extract metadata from files was upgraded to be passive. Passive meaning that the Extractor only extracts content from files already added to the database. Before that, content was concatenated from the Extractor to the file system miner and inserted into the database collectively.

    Sadly with 1.0.0, any files that caused crashes or serious system harm resulting in the termination of ‘tracker-extract’ were subsequently retried on each restart of the Extractor. In 1.2.0 these failures are noted and files are not retried.

    New extractors?

    Thanks to work from Bastien Hadess, there have been a number of extractors added for electronic books and comic books. If your format isn’t supported yet, let us know!

    Updated Preferences Dialog

    Often we get questions like:

    • Can Tracker index numbers?
    • How can I disable indexing file content?

    To address these, the preferences dialog has been updated to provide another tab called “Control” which allows users to change options that have existed previously but not been presented in a user interface.

    tracker-preferences-1.2

    In addition to this, changing an option that requires a reindex or restart of Tracker will prompt the user upon clicking Apply.

    What else changed?

    Of course there were many other fixes and improvements as well as the things mentioned here. To see a full list of those, see them as mentioned in the announcement.

    Looking for professional services?

    If you or someone you know is looking to make use of Open Source technology and wants professional services to assist in that, get in touch with us at Lanedo to see how we can help!

    Android: Changing the Toolbar’s text color and overflow icon color

    Light on Dark and Dark on Light.

    Android has normal (dark) and light themes, though it’s actually the light themes which are normally shown in examples of the new Material design.

    The light theme expects your App Bar1 (Toolbar or ActionBar) to have a light background color, so it gives you a dark title and dark overflow menu icon (the three vertical dots):

    screenshot_toolbar_light_theme_default_cropped

    The dark theme expects your App Bar to have a dark background color, so it gives you a white title and white overflow menu icon:

    screenshot_toolbar_dark_theme_default_cropped

    This is true of both the Holo themes and the new Material themes.

    If you want to use the light theme but want your App Bar to have a dark background, or use a dark theme and want your toolbar to have a light background, things get awkward. Arguably this might be unwise design anyway, but there’s nothing in the Material design guidelines advising against it.

    It’s fairly easy to change the ActionBar‘s text color, but changing the color of its overflow icon is harder. It seems normal to provide a whole new overflow icon for your app, replacing the standard one, just to get the right color.

    Android’s new Toolbar, which replaces ActionBar (with some awkward code), makes it easier to change the title text color and the color of the menu overflow icon (and the Up/Back icon). So now I finally have dark text and icons on my light background in the dark theme:

    screenshot_toolbar_dark_text_on_light_with_dark_theme_cropped

    Toolbar theme and popupTheme

    It took me ages to figure out how to do this, so hopefully the following explanation saves someone some time. I’d welcome any corrections.

    My main theme derives from Theme.AppCompat (not Theme.AppCompat.Light), which is the dark Material theme for older devices, because I want most of the UI to be dark.

    <!-- Base application theme.
         Defining this lets values-v21/styles.xml reuse it with changes. -->
    <style name="AppTheme" parent="AppTheme.Base" />

    <style name="AppTheme.Base" parent="Theme.AppCompat">  
      <!-- Use the Toolbar instead of the ActionBar
           (new in API 21 and AppCompat). -->
      <item name="windowActionBar">false</item>
    
      <!-- colorPrimary is used, for instance, for the default ActionBar
           (but not Toolbar) background.
            We specify the same color for the toolbar background in 
            toolbar.xml.. -->
      <item name="colorPrimary">@color/color_primary</item>
    
      <!-- colorPrimaryDark is used for the status bar (with the
           battery, clock, etc). -->
      <item name="colorPrimaryDark">@color/color_primary_dark</item>
    
       <!-- colorAccent is used as the default value for
            colorControlActivated which is used to tint widgets. -->
      <item name="colorAccent">@color/color_accent</item>
    </style>

    But that dark theme gives me light text and icons on the light background of my App Bar:

    screenshot_toolbar_standard_white_text_on_light_with_dark_theme_cropped

    I want to use a light color for the toolbar background even while using the dark theme.  So I’ll need to make the text and icons on my toolbar dark instead of the default white from the light theme. Incidentally, the Material Design Color Palette page seems to agree with me, using dark title text on the Lime color I’ve chosen, but using white on almost all other colors.

    So my Toolbar’s XML layout specifies a different theme (app:theme when using AppCompat, or android:theme when targeting SDK >=21), like so:

    <android.support.v7.widget.Toolbar
      xmlns:android="http://schemas.android.com/apk/res/android"
      xmlns:app="http://schemas.android.com/apk/res-auto"
      android:id="@+id/toolbar"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:background="@color/color_primary"
      app:theme="@style/GalaxyZooThemeToolbarDarkOverflow"
      app:popupTheme="@style/Theme.AppCompat" />

    That toolbar theme specifies a textColorPrimary and textColorSecondary to change the color of the title text and of the menu overflow button. You could just specify the standard Theme.AppCompat.Light theme for the toolbar, to get the dark text and overflow icon,  but I wanted to derive from my own theme and make only small changes, because I have no idea what else might be affected.

    <style name="GalaxyZooThemeToolbarDarkOverflow" parent="Theme.AppCompat">
      <!-- android:textColorPrimary is the  color of the title text
           in the Toolbar, in the Theme.AppCompat theme:  -->
      <item name="android:textColorPrimary">@color/abc_primary_text_material_light</item>
    
      <!-- android:textColorPrimaryInverse is the  color of the title
           text in the Toolbar, in the Theme.AppCompat.Light theme:  -->
      <!-- <item name="android:textColorPrimaryInverse">@color/abc_primary_text_material_light</item> -->
    
      <!-- android:actionMenuTextColor is the color of the text of
            action (menu) items in the Toolbar, at least in the
            Theme.AppCompat theme.
            For some reason, they already get the textColorPrimary
            when running on API 21, but not on older versions of
            Android, so this is only necessary to support older
            Android versions.-->
            <item name="actionMenuTextColor">@color/abc_primary_text_material_light</item>
      <!-- android:textColorSecondary is the color of the menu
           overflow icon (three vertical dots) -->
      <item name="android:textColorSecondary">@color/abc_secondary_text_material_light</item>
    
      <!-- This would set the toolbar's background color,
            but setting this also changes the popup menu's background,
            even if we define popupTheme for our <Toolbar> -->
      <!-- <item name="android:background">@color/color_primary</item> -->
    </style>

    This gives me the dark text and icons in my App Bar while using the dark theme:

    [Screenshot: App Bar with dark text and icons on a light background, using the dark theme]
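    Incidentally, none of this takes effect unless the Toolbar is actually installed as the App Bar in the Activity. That part isn’t shown above; it’s just the standard AppCompat pattern, sketched here with hypothetical Activity and layout names:

    import android.os.Bundle;
    import android.support.v7.app.ActionBarActivity;
    import android.support.v7.widget.Toolbar;

    public class ExampleActivity extends ActionBarActivity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Hypothetical layout that <include>s the toolbar.xml mentioned above.
            setContentView(R.layout.activity_example);

            // Use the Toolbar as the App Bar; windowActionBar is false in the theme.
            Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
            setSupportActionBar(toolbar);
        }
    }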

    Notice that the <Toolbar> also uses popupTheme (app:popupTheme when using AppCompat, or android:popupTheme when targeting SDK >= 21). Without this, the overflow menu’s appearance is affected by the Toolbar’s style, leading to dark text on a dark background:

    [Screenshot: overflow menu with dark text on a dark background, without popupTheme]

    By specifying the Toolbar’s popupTheme, we can make its menu use our normal theme instead:

    [Screenshot: overflow menu using the normal theme, thanks to popupTheme]

    [1] “App Bar” seems to be the new official terminology. It’s used here and some other places I can’t find now.

    Weekend hack, updated

    After a bit more debugging, and with some help from Jasper, here is the inspector debugging an X11 application while displaying under Wayland:

    [Screenshot: Wayland inspection!]

    It also turns out that the GTK_INSPECTOR_DISPLAY environment variable is sufficient to make this work. I started this demo with

    GTK_DEBUG=interactive GDK_BACKEND=x11,wayland \
    GTK_INSPECTOR_DISPLAY=wayland-0 ./gtk3-demo
    
    Update

    The reverse (application displaying under Wayland, inspector  on X) also works:

    Reverse inspection

     

     

    Sciopero

    [Screenshot of the extension]

    Public transport strikes in Rome are so frequent that it’s hard to remember when they are. I wrote a GNOME Shell extension to help remind me when there’s one either coming up or in progress. Find it on extensions.gnome.org. It gets its data from another little service I just made.


    A Roma gli scioperi dei mezzi pubblici sono così frequenti che spesso è facile dimenticarsi quando ci sono. Ho scritto un’estensione per Gnome Shell per avvisare quando c’è o si avvicina uno sciopero dell’Atac. La puoi trovare su extensions.gnome.org. Funziona grazie ad un altro piccolo servizio che ho creato.

    October 26, 2014

    Hello new life

    Since the end of the Google Summer of Code 2014 so many things have happened that it is really hard to make a simple blog post about it... but I must try :)

    New job


    Well, along with the GSoC I applied for a job at Red Hat and the process ended in August. Since October 15th I have been part of the Spice team and I will be working at the Brno office in the Czech Republic.

    I am really excited about the work and the team. I am learning a lot already.

    New City


    I visited Brno during GUADEC 2013; it's a lovely city. I feel safe here, unlike in Campinas in Brazil.
    I have great expectations about living here, and so does my wife after reading Christian's blog post about leaving Brno.

    Married


    Yay ;) Yes, the biggest decision of my life.
    We have been together for almost three years now and had been living together for almost a year, which made this decision straightforward for me. Catielle is a wonderful person and I am really happy with our marriage. I feel that we are going to be very happy here in Brno.

    Pending


    Well, I was quite busy in the last two months but I still have a lot of things to do!
    • A few blog posts about GUADEC, GSoC and Grilo
    • I want to move this blog to Pelican :-)
    • Finishing off a few things related to the last GSoC project and getting it upstream!
    • Writing documentation for the lua-factory plugin :-)

    That's all for now! I will be back hacking and blogging now that things are getting calmer.






    October 25, 2014

    A weekend hack

    I’ve been working on making GtkInspector use a different display connection. This helps isolate it from some of the changes you can trigger from inside the inspector UI. Then I thought, why not use a different backend ?!

    We did enough work on GDK backend separation that it could almost work. But since we didn’t add API to actually connect to specific backends (users and applications get some control with GDK_BACKEND and gdk_set_allowed_backends()),  nobody has ever used multiple backends in the same process. And things that don’t get used don’t work.  So some fixes were necessary.

    But now I have gtk3-demo running under X while the inspector is using the Broadway backend and shows up in the web browser:

    To my knowledge this is the first time a GTK+ application is using multiple GDK backends at the same time.

    Anticipating some questions:

    • What is this good for ?

    Not that much, really. I consider it mostly a curiosity. It does prove that the GDK backend separation is (almost) working.

    For most cases where this could be used with the inspector, it is probably preferable to move the inspector out of process and communicate via D-Bus. There are some people who are interested in this, so it may happen.

    • Can I use this in my application ?

    No. I haven’t committed the hack that makes the inspector use a different backend. If we wanted to support this, we would need to add a real API to connect to a specific backend.

    What I did commit though is the code that makes the inspector always use a separate display connection (with the same backend), so you get the better isolation that can be seen in the video.

    • Does this also work with Wayland ?

    No. I first tried to make it work with Wayland+X, but the Wayland backend is more complicated than Broadway. I’m still getting crashes fairly soon. Broadway pretty much just worked, after a  round of fixes.

    October 24, 2014

    Kerberos over HTTP: getting a TGT on a firewalled network

    One of the benefits I originally wanted to bring with the FreeIPA move to GNOME contributors was the introduction of an additional authentication system to connect to the services hosted on the GNOME Infrastructure. The authentication system that comes with the FreeIPA bundle that I had in mind was Kerberos. Users willing to use Kerberos as their preferred authentication system would just be required to get a TGT (Ticket-Granting Ticket) from the KDC (Key Distribution Center) through the kinit command. Once that is done, authenticating to the services currently supporting Kerberos will be as easy as pointing a configured browser (Google for how to configure your browser to use Krb logins) to account.gnome.org without being prompted for the usual username / password combination, or pushing to git without using the public-private key mechanism. That theoretically means you won’t be required to use an SSH key for logging in to any of the GNOME services at all, as entering your password at the kinit prompt will be enough (for at least 24 hours, as that’s the life of the TGT itself on our setup) to do all you were used to doing before the Kerberos support introduction.

    [Screenshot: a successful SSH login using the most recent Kerberos package on Fedora 21]

    The issue we faced at first was the underlying networking infrastructure firewalling all Kerberos ports, which blocked the use of kinit itself: it kept timing out trying to reach port 88. A few days later I was contacted by RH developer Nathaniel McCallum, who worked out a way to bypass this restriction by creating a KDC proxy that accepts requests on port 443 and proxies them to the internal KDC running on port 88. With the recent Kerberos release (released on October 15th, 2014 and following the MS-KKDCP protocol), a patched kinit allows users to retrieve their TGTs directly from the HTTPS proxy, completely bypassing the need for port 88 to stay open on the firewall. The GNOME Infrastructure now runs the KDC proxy and we’re glad to announce Kerberos authentication is working as expected on the hosted services.

    If you are facing the same problem and are curious to know more about the setup, here are all the details:

    On the KDC:

    1. No changes are needed on the KDC itself, just make sure to install the python-kdcproxy package which is available for RHEL 7, HERE.
    2. Tweak your vhost accordingly by following the provided documentation.

    On the client:

    1. Install the krb5-workstation package, make sure it’s at least version 1.12.2-9 as that’s the release which had the additional features we are talking about backported. Right now it’s only available for Fedora 21.
    2. Adjust /etc/krb5.conf accordingly and finally get a TGT through kinit $userid@GNOME.ORG.
    [realms]
     GNOME.ORG = {
      kdc = https://account.gnome.org/kdc
      kpasswd_server = https://account.gnome.org/kdc
    }
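
    As a quick sanity check, the day-to-day usage then looks roughly like this (the username and SSH host below are just examples, not part of the original post):

    $ kinit yourusername@GNOME.ORG
    $ klist                            # should show a krbtgt/GNOME.ORG@GNOME.ORG ticket
    $ ssh yourusername@git.gnome.org   # GSSAPI authentication, no password or SSH key prompt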
    

    That should be all for today!

    Introducing Gthree

    I’ve recently been working on OpenGL support in Gtk+, and last week it landed in master. However, the demos we have are pretty lame and not very good for showing off or even testing the OpenGL support. I’ve looked around for some open source demos that use modern GL, but I didn’t find anything we could easily use.

    What I did find, though, was a lot of WebGL demos that used three.js. This looked like a very nice open source library for high-level 3D rendering. At first I had some plans to bind OpenGL to gjs so that we could run three.js, but this turned out to be hard.

    Instead I started converting three.js into C + GObject, using the Gtk+ OpenGL support and the vector/matrix library graphene that Emmanuele has been working on recently.

    After about a week of frantic hacking it is now at a stage where it may be interesting for others. So, without further ado I introduce:

    https://github.com/alexlarsson/gthree

    It does not yet support everything that three.js can do, but it does support meshes with most mesh material types and lighting, including a loader for the JSON model format of three.js, which means that it is minimally useful.

    Here are some screenshots of the examples that ship with the code:

    [Screenshot: various types of materials]

    [Screenshot: some sample models from three.js examples]

    [Screenshot: some random cubes]

    This has been a lot of fun to work on as I’ve seen a lot of progress very fast. Mad props to mrdoob and the other three.js developers for creating three.js and making it free software. Gthree is a huge rip-off of their work and would never be possible without it. Thanks also to Emmanuele for his graphene library.

    What are you sitting here for, go ahead and play with it! Make some demos, port some more three.js features, marvel at the fancy graphics!

    A Londoner In San Jose

    10th Year Reunion Summit

    Here I am at the Google Mentor Summit to celebrate the 10 year reunion, as both a Google Summer of Code Scientific Ruby student and, of course, a GNOME student too. ;-)

    Banner: Google Summer of Code, tenth year

    I have yet to see any GNOMIES or the SciRuby lot, but there is a gathering in the Marriott in about an hour, so I am sure I will get a chance then. That said, I have already had the opportunity to meet some cool people from various FOSS communities involved in GSoC, and accordingly I found out about some interesting work that they have been involved with relating to Neuroscience, Robotics and Open Education. There seem to be a lot of researchers floating about at the Summit, which has led to some especially interesting chat (since I like all that stuff).

    With Thanks To...

    [Logo: SciRuby] [Logo: GNOME]

    I would just like to take this opportunity to thank all the people who helped me get here from the GNOME and Scientific Ruby communities respectively.
    All donations have been very much appreciated! So to all of those individuals who saw fit to support me (in my time of need), thank you very much!

    I would also like to give a shout out to Google for giving me free digs at the San Jose Hilton for three nights of the Summit, for inviting me to the event and generally providing me with cool free stuff and  an itinerary of interesting things to do this weekend in San Jose!

    Logo: Google

    Last but by no means least, I would like to thank The Lavin Agency for not only "making the world a smarter place" ;-) but specifically for covering most of my travel costs to the Summit in San Jose and back again... That contribution truly nailed it in getting me all the way to San Jose!

    Logo: The Lavin Agency

    Once again, thank you all.

    October 23, 2014

    Mono for Unreal Engine

    Earlier this year, both Epic Games and Crytek made their Unreal Engine and CryEngine available under an affordable subscription model. These are both very sophisticated game engines that power some high end and popular games.

    We had previously helped Unity adopt Mono as the scripting language used in their engine, and we now had a chance to do this all over again.

    Today I am happy to introduce Mono for Unreal Engine.

    This is a project that allows Unreal Engine users to build their game code in C# or F#.

    Take a look at this video for a quick overview of what we did:

    This is a taste of what you get out of the box:

    • Create game projects purely in C#
    • Add C# to an existing project that uses C++ or Blueprints.
    • Access any API surfaced by Blueprint to C++, and easily surface C# classes to Blueprint.
    • Quick iteration: we fully support Unreal Engine's hot reloading, with the added twist that we support it from C#. This means that you hit "Build" in your IDE and the code is automatically reloaded into the editor (with live updates!)
    • Complete support for the .NET 4.5/Mobile Profile API. This means all the APIs you love are available for you to use.
    • Async-based programming: we have added special game schedulers that allow you to use C# async naturally in any of your game logic. Beautiful and transparent.
    • Comprehensive API coverage of the Unreal Engine Blueprint API.

    This is not a supported product by Xamarin. It is currently delivered as a source code package with patches that must be applied to a precise version of Unreal Engine before you can use it. If you want to use higher versions, or lower versions, you will likely need to adjust the patches on your own.

    We have set up a mailing list that you can use to join the conversation about this project.

    Visit the site for Mono for Unreal Engine to learn more.

    (I no longer have time to manage comments on the blog, please use the mailing list to discuss).

    perf.gnome.org – introduction

    My talk at GUADEC this year was titled Continuous Performance Testing on Actual Hardware, and covered a project that I’ve been spending some time on for the last 6 months or so. I tackled this project because of accumulated frustration that we weren’t making consistent progress on performance with GNOME. For one thing, the same problems seemed to recur. For another thing, we would get anecdotal reports of performance problems that were very hard to put a finger on. Was the problem specific to some particular piece of hardware? Was it a new problem? Was it a problem that we had already addressed? I wrote some performance tests for gnome-shell a few years ago – but running them sporadically wasn’t that useful. Running a test once doesn’t tell you how fast something should be, just how fast it is at the moment. And if you run the tests again in 6 months, even if you remember what numbers you got last time, even if you still have the same development hardware, how can you possibly figure out what change is responsible? There will have been thousands of changes to dozens of different software modules.

    Continuous testing is the goal here – every time we make a change, to run the same tests on the same set of hardware, and then to make the results available with graphs so that everybody can see them. If something gets slower, we can then immediately figure out what commit is responsible.

    We already have a continuous build server for GNOME, GNOME Continuous, which is hosted on build.gnome.org. GNOME Continuous is a creation of Colin Walters, and internally uses Colin’s ostree to store the results. ostree, for those not familiar with it, is a bit like Git for trees of binary files, and in particular for operating systems. Because ostree can efficiently share common files and represent the difference between two trees, it is a great way to both store lots of build results and distribute them over the network.

    I wanted to start with the GNOME Continuous build server – for one thing so I wouldn’t have to babysit a separate build server. There are many ways that the build can break, and we’ll never get away from having to keep an eye on them. Colin and, more recently, Vadim Rutkovsky were already doing that for GNOME Continuous.

    But actually putting performance tests into the set of tests that are run by build.gnome.org doesn’t work well. GNOME Continuous runs its tests on virtual machines, and a performance test on a virtual machine doesn’t give the numbers we want. For one thing, server hardware is different from desktop hardware – it generally has very limited graphics acceleration, it has completely different storage, and so forth. For a second thing, a virtual machine is not an isolated environment – other processes and unpredictable caching will affect the numbers we get – and any sort of noise makes it harder to see the signal we are looking for.

    Instead, what I wanted was to have a system where we could run the performance tests on standard desktop hardware – not requiring any special management features.

    Another architectural requirement was that the tests would keep on running, no matter what. If a test machine locked up because of a kernel problem, I wanted to be able to continue on, update the machine to the next operating system image, and try again.

    The overall architecture is shown in the following diagram:

    [Diagram: HWTest architecture]

    The most interesting thing to note in the diagram is that the test machines don’t directly connect to build.gnome.org to download builds or to perf.gnome.org to upload the results. Instead, test machines are connected over a private network to a controller machine which supervises the process of updating to the next build and actually running the tests. The controller has two forms of control over the process – first, it controls the power to the test machines, so at any point it can power cycle a test machine and force it to reboot. Second, the test machines are set up to network boot from the controller machine, so that after power cycling the controller machine can determine what to boot – a special image to do an update, or the software being tested. The systemd journal from the test machine is exported over the network to the controller machine so that the controller machine can see when the update is done, and collect test results for publishing to perf.gnome.org.

    perf.gnome.org is live now, and tests have been running for the last three months. In that period, the tests have run thousands of times, and I haven’t had to intervene once to deal with a problem. Here’s perf.gnome.org catching a regression (fix)

    [Screenshot: perf.gnome.org regression]

    I’ll cover more about the details of how the hardware testing setup works and how performance tests are written in future posts – for now you can find some more information at https://wiki.gnome.org/Projects/HardwareTesting.


    GUADEC-ES 2014: Zaragoza (Spain), 24th-26th October

    A short notice to remind everyone that this weekend a bunch of GNOME hackers and users will be meeting in the beautiful city of Zaragoza (*) (Spain) for the Spanish-speaking GUADEC. The schedule is already available online:

    http://2014.guadec.es/programa

    Of course, non-Spanish-speaking people are also very welcome :)

    See you there!

    gnomehispano

    (*) Hoping not to make enemies: Zárágozá.



    October 22, 2014

    A workshop at FSCONS

    Things are going at a fast pace at Medialogy these days, but I’ll have a bit of time to do GNOME Engagement again soon. FSCONS is coming up and I plan to bring posters, brochures and myself to Sweden from Thursday the 30th October till Monday the 3rd. If anyone is interested in meeting up, I’ll be around the whole weekend at the conference.

    Also! Also! I’m doing a workshop on promotional videos. It will be interesting, as I haven’t given a talk on this subject before. It’s scheduled around 18.45 on Sunday in Room 6. My plan is to give tips on creating promotional videos, especially:

    • Planning out a promotional video
    • Using a possible pipeline of FOSS tools for creating these videos.
    • Sharing my own collection of resources I have used to learn the tools.

    I’m curious if there’s anything else you feel I should touch upon during this workshop. Feel free to tell me beforehand. (-:|>

    GStreamer Conference 2014 talks online

    For those of you who, like me, missed this year’s GStreamer Conference, the recorded talks are now available online thanks to Ubicast. Ubicast has been a tremendous partner for GStreamer over the years, making sure we have high quality talk recordings online shortly after the conference ends. So be sure to check out this year’s batch of great GStreamer talks.

    Btw, I also did a minor release of Transmageddon today, which mostly includes a couple of bugfixes and a few less deprecated widgets :)

    Development of Nautilus – Popovers, port to GAction and more

    So for the last two weeks, I have been trying to implement this:

    The popovers!

    In an application that already uses GAction and a normal GMenu for everything, this is quite easy.

    But Nautilus uses neither GAction nor GMenu for its menus. Not only that, Nautilus uses GtkUIManager for managing the menus and GtkActions. And not only that, Nautilus merges parts of menus all over the code.

    Also, the popover drawn in that design is not possible with GMenu because of the GtkSlider.

    So my first step, when nothing was clear to me, was to just try to create a custom GtkBox class, embed it in the popover, and try to use the current architecture of Nautilus.

    It didn’t work, obviously. Fail 1.

    Then, after talking with some Gedit guys (thanks!), I understood that what I needed was to port Nautilus to GAction first. But I would have to find a solution to merge menus.

    I spent my first week and a half trying to find a solution for merging the menus, along with making the port to GAction, refactoring Nautilus code so it makes sense, and getting used to the code of Nautilus.

    The worst part was the complexity of the code: understanding it and its intricate code paths. Making a test application with GMenu and popovers that merges menus was kinda acceptable.

    To understand why I needed to merge menus recursively, this is the recursion of Nautilus menus that was done with GtkUIManager across 4 levels of classes. The diagram should have more leaves (more classes injecting items) at some levels, but this was the most complex one:

    [Diagram: the recursion of Nautilus menus across four levels of classes]

    So after spending more than a week trying to make it work at all costs,  I figured out that merging menus recursively in recursive sections was not working. That was kinda frustrating.

    Big fail 2.

    Then I decided to get another path, with the experience earned along that one week and a half.

    I simplified the menu layout to be flat (I still have to merge one-level menus, so a new way to merge menus was born), put all the management of the actions on the window instead of having multiple GtkActionGroups spread across the code as Nautilus had previously, centralized the updating of menus on the window, attached the menus where it makes sense (on the toolbar), and, a beautiful thing, the toolbar of Nautilus (aka header bar) is now in an XML GResource file, no longer built programmatically =).

    That last thing required redoing a good part of the toolbar, to, for example, use the private bindings that GObject provides (and then be able to use gtk_widget_class_bind_template_child_private), or to sync the sensitivity of some widgets that were previously synced by directly modifying the actions on the window instead of on the toolbar, etc.
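
    This is not the actual Nautilus code, but a minimal sketch of the general pattern described above: actions registered on the window with GActionEntry, and a flat GMenu attached to a GtkMenuButton so it shows up as a popover (the "zoom-in" action here is just a placeholder):

    #include <gtk/gtk.h>

    static void
    zoom_in_activated (GSimpleAction *action,
                       GVariant      *parameter,
                       gpointer       user_data)
    {
      /* user_data is the window passed to g_action_map_add_action_entries(). */
      g_print ("zoom in\n");
    }

    static const GActionEntry win_entries[] = {
      { "zoom-in", zoom_in_activated },
    };

    static void
    setup_view_menu (GtkApplicationWindow *window,
                     GtkHeaderBar         *header_bar)
    {
      GMenu *menu;
      GtkWidget *button;

      /* Register the actions on the window, so menu items can refer to "win.zoom-in". */
      g_action_map_add_action_entries (G_ACTION_MAP (window),
                                       win_entries, G_N_ELEMENTS (win_entries),
                                       window);

      /* A flat menu model; sections merged from other classes can be appended here. */
      menu = g_menu_new ();
      g_menu_append (menu, "Zoom In", "win.zoom-in");

      /* The menu button shows the model as a popover when use-popover is set. */
      button = gtk_menu_button_new ();
      gtk_menu_button_set_use_popover (GTK_MENU_BUTTON (button), TRUE);
      gtk_menu_button_set_menu_model (GTK_MENU_BUTTON (button), G_MENU_MODEL (menu));
      gtk_header_bar_pack_end (header_bar, button);
    }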

    And thanks to the experience earned in the fails before, it started working!

    Then I became enthusiastic about porting more and more parts of Nautilus. After the prototype worked this morning, everything was kinda easy. And now I feel much more (like a very big difference) confident with the code of Nautilus, C, GTK+ and GObject.

    Here’s the results

    [Screenshot: nautilus-view-menu]

    [Screenshot: nautilus-action-menu]

    It’s still a very early prototype, since the port to GAction is not complete. I think I have 40% of the port done. And I haven’t yet erased all the code that is no longer necessary. But with a prototype working and the difficult edges solved, that doesn’t worry me at all.

    Work to be done is:

    * Complete the port to GAction, porting also all menus.

    * Refactor to make more sense now with the current workflow of menus and actions.

    * Create the public API to allow extensions to extend the menus. Luckily I was thinking about that when creating the API to merge the menus inside Nautilus, so the method will be more or less the same.

    * And last but not least, make sure any regression is known (this is kinda complicated due to the many possible code paths and supported tools of Nautilus).

    Hope you like the work!

    PS: Work is being done in wip/gaction but please, don’t look at the code yet =)


    Apache SSLCipherSuite without POODLE

    In my previous post, Forward Secrecy Encryption for Apache, I described an Apache SSLCipherSuite setup to support forward secrecy which allowed TLS 1.0 and up, avoided SSLv2 but included SSLv3. With the new POODLE attack (Padding Oracle On Downgraded Legacy Encryption), SSLv3 (and earlier versions) should generally be avoided. Which means the cipher configurations discussed [...]
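    The excerpt is cut off here, but the short version is to drop SSLv3 entirely. A minimal Apache sketch along those lines (not necessarily the exact configuration from the full post; the cipher list is only an example to adapt):

    SSLEngine on
    # Disable SSLv2 and SSLv3; only TLS remains.
    SSLProtocol all -SSLv2 -SSLv3
    SSLHonorCipherOrder on
    SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:!aNULL:!eNULL:!RC4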

    Cassandra Keyspace case-sensitiveness WTF

    cqlsh> DESCRIBE KEYSPACES;
    foo   bar  OpsCenter

    cqlsh> use opscenter;
    Bad Request: Keyspace 'opscenter' does not exist

    cqlsh> use OpsCenter;
    Bad Request: Keyspace 'opscenter' does not exist

    cqlsh> USE "OpsCenter";
    cqlsh:OpsCenter>

    Seriously, this is the way Cassandra handles case-sensitivity???
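
    For reference, this is general CQL behavior rather than anything OpsCenter-specific: unquoted identifiers are folded to lowercase, while double-quoted identifiers keep their exact case, so a keyspace created with a quoted mixed-case name can only ever be referred to with quotes. A small illustration (keyspace names here are made up):

    cqlsh> CREATE KEYSPACE plain_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
    cqlsh> USE plain_ks;           -- fine: unquoted identifiers are lowercased anyway
    cqlsh> CREATE KEYSPACE "MixedCaseKS" WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
    cqlsh> USE "MixedCaseKS";      -- must be quoted, exactly as created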

    October 21, 2014

    A GNOME Kernel wishlist

    GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

    I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

    I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

    October 20, 2014

    GNOME Web 3.14

    It’s already been a few weeks since the release of GNOME Web 3.14, so it’s a bit late to blog about the changes it brings, but better late than never. Unlike 3.12, this release doesn’t contain big user interface changes, so the value of the upgrade may not be as immediately clear as it was for 3.12. But this release is still a big step forward: the focus this cycle has just been on polish and safety instead of UI changes. Let’s take a look at some of the improvements since 3.12.1.

    Safety First

    The most important changes help keep you safer when browsing the web.

    Safer Handling of TLS Authentication Failures

    When you try to connect securely to a web site (via HTTPS), the site presents identification in the form of a chain of digital certificates to prove that your connection has not been maliciously intercepted. If the last certificate in the chain is not signed by one of the certificates your browser has on file, the browser decides that the connection is not secure: this could be a harmless server configuration error, but it could also be an attacker intercepting your connection. (To be precise, your connection would be secure, but it would be a secure connection to an attacker.) Previously, Web would bet on the former, displaying an insecure lock icon next to the address in the header bar, but loading the page anyway. The problem with this approach is that if there really is an attacker, simply loading the page gives the attacker access to secure cookies, most notably the session cookies used to keep you logged in to a particular web site. Once the attacker controls your session, he can trick the web site into thinking he’s you, change your settings, perhaps make purchases with your account if you’re on a shopping site, for example. Moreover, the lock icon is hardly noticeable enough to warn the user of danger. And let’s face it, we all ignore those warnings anyway, right? Web 3.14 is much stricter: once it decides that an attacker may be in control of a secure connection, it blocks access to the page, like all major browsers already do:

    [Screenshot: Web blocking access to a page with an untrusted certificate. Click for full size.]

    (The white text on the button is probably a recent compatibility issue with GTK+ master: it’s fine in 3.14.)

    Safety team members will note that this will obviously break sites with self-signed certificates, and is incompatible with a trust-on-first-use approach to certificate validation. As much as I agree that the certificate authority system is broken and provides only a moderate security guarantee, I’m also very skeptical of trust-on-first-use. We can certainly discuss this further, but it seemed best to start off with an approach similar to what major browsers already do.

    The Load Anyway button is non-ideal, since many users will just click it, but this provides good protection for anyone who doesn’t. So, why don’t we get rid of that Load Anyway button? Well, different browsers have different strategies for validating TLS certificates (a good topic for a future blog post), which is why Web sometimes claims a connection is insecure even though Firefox loads the page fine. If you think this may be the case, and you don’t care about the security of your connection (including any passwords you use on the site), then go ahead and click the button. Needless to say, don’t do this if you’re using somebody else’s Wi-Fi access point, or on an email or banking or shopping site… when you use this button, the address in the address bar does not matter: there’s no telling who you’re really connected to.

    But all of the above only applies to your main connection to a web site. When you load a web page, your browser actually creates very many connections to grab subresources (like images, CSS, or trackers) needed to display the page. Prior to 3.14, Web would completely ignore TLS errors for subresources. This means that the secure lock icon was basically worthless, since an attacker could control the page by modifying any script loaded by the page without being detected. (Fortunately, this attack is somewhat unlikely, since major browsers would all block this.) Web 3.14 will verify all TLS certificates used to encrypt subresources, and will block those resources if verification fails. This can cause web pages to break unexpectedly, but it’s how all major browsers I’ve tested behave, and it’s certainly the right thing to do. (We may want to experiment with displaying a warning, though, so that it’s clear what’s gone wrong.)

    And if you’re a distributor, please read this mail to learn how not to break TLS verification in your distribution. I’m looking at you, Debian and derivatives.

    Fewer TLS Authentication Failures

    With glib-networking 2.42, corresponding to GNOME 3.14, Web will now accept certificate chains when the certificates are sent out of order. Sites that do this are basically broken, but all major browsers nevertheless support unordered chains. Sending certificates out of order is a harmless configuration mistake, not a security flaw, so the only harm in accepting unordered certificates is that this makes sites even less likely than before to notice their configuration mistake, harming TLS clients that don’t permute the certificates.

    This change should greatly reduce the number of TLS verification failures you experience when using Web. Unfortunately, there are still too many differences in how certificate verification is performed for me to be comfortable with removing the Load Anyway button, but that is definitely the long-term goal.

    HTTP Authentication

    WebKitGTK+ 2.6.1 plugs a serious HTTP authentication vulnerability. Previously, when a secure web page would require a password before the user could load the page, Web would not validate the page’s TLS certificate until after prompting the user for a password and sending it to the server.

    Mixed Content Detection

    If a secure (HTTPS) web page displays insecure content (usually an image or video) or executes an insecure script, Web now displays a warning icon instead of a lock icon. This means that the lock icon now indicates that your connection is completely private, with the exception that a passive adversary can always know the web site that you are visiting (but not which page you are visiting on the site). If the warning icon is displayed, then an adversary can compromise some (and possibly all) of the page, and has also learned something that might reveal which page of the site you are visiting, or the contents of the page.

    If you’re curious where the insecure content is coming from and don’t mind leaving behind Web’s normally simple user interface, you can check using the web inspector:

    [Screenshot of the web inspector: the screenshot is leaked to an attacker, revealing that you’re on the home page. Click for full size.]

    The focus on safety will continue to drive the development of Web 3.16. Most major browsers, with the notable exception of Safari, take mixed content detection one step further by actively blocking some more dangerous forms of insecure content, such as scripts, on secure pages, and we certainly intend to do so as well. We’re also looking into support for strict transport security (HSTS), to ensure that your connection to HSTS-enabled sites is safe even if you tried to connect via HTTP instead of HTTPS. This is what you normally do when you type an address into the address bar. Many sites will redirect you from an HTTP URL to an HTTPS URL, but an attacker isn’t going to do this kindness for you. Since all HTTP pages are insecure, you get no security warning. This problem is thwarted by strict transport security. We’re currently hoping to have both mixed content blocking and strict transport security complete in time for 3.16.

    UI Changes

    Of course, security hasn’t been the only thing we’ve been working on.

    • The most noticeable user experience change is not actually a change in Web at all, but in GNOME Document Viewer 3.14. The new Document Viewer browser plugin allows you to read PDFs in Web without having to download the file and open it in Document Viewer. (This is similar to the proprietary Adobe Reader browser plugin.) This is made possible by new support in WebKitGTK+ 2.6 for GTK+ 3 browser plugins.
    • The refresh button has been moved from the address bar and is now next to the new tab button, where it’s always accessible. Previously, you would need to click to show the address bar before the refresh button would appear.
    • The lock icon now opens a little popover to display the security status of the web page, instead of directly presenting the confusing certificate dialog. You can also now click the lock when the title of the page is displayed, without needing to switch to the address bar.

    Bugfixes

    3.14 also contains some notable bugfixes that will improve your browsing experience.

    • We fixed a race condition that caused the ad blocker to accidentally delete its list of filters, so ad block will no longer randomly stop working when enabled (it’s off by default). (We still need one additional fix in order to clean this up automatically if it’s already broken, but in the meantime you can reset your filters by deleting ~/.config/epiphany/adblock if you’re affected.)
    • We (probably!) fixed a bug that caused pages to disappear from history after the browser was closed.
    • We fixed a bug in Web’s aggressive removal of tracking parameters from URLs when the do not track preference is enabled (it’s off by default), which caused compatibility issues with some web sites.
    • We fixed a bug that caused Web to sometimes not offer to remember passwords.

    These issues have all been backported to our 3.12 branch, but were never released. We’ll need to consider making more frequent stable releases, to ensure that bugfixes reach users more quickly in the future.

    Polish

    • There are new context menu entries when right-clicking on an HTML video. Notably, this adds the ability to easily download a copy of the video for watching it offline.
    • Better web app support. Recent changes in 3.14.1 make it much harder for a web app to escape application mode, and ensure that links to other sites open in the default browser when in application mode.
    • Plus a host of smaller improvements: The subtitle of the header bar now changes at the same time as the title, and the URL in the address bar will now always match the current page when you switch to address bar mode. Opening multiple links in quick succession from an external application is now completely reliable (with WebKitGTK+ 2.6.1); previously, some pages would load twice or not at all. The search provider now exits one minute after you search for something in the GNOME Shell overview, rather than staying alive forever. The new history dialog that was added in 3.12 now allows you to sort history by title and URL, not just date. The image buttons in the new cookies, history, and passwords dialogs now have explanatory tooltips. Saving an image, video, or web page over top of an existing file now works properly (with Web 3.14.1). And of course there are also a few memory usage and crash fixes.

    As always, the best place to send feedback is <epiphany-list@gnome.org>, or Bugzilla if you’ve found a bug. Comments on this post work too. Happy browsing!

    3.14 Games Updates

    So, what new things happened to our games in GNOME 3.14?

    Hitori

    GNOME Hitori has actually been around for a while, but it wasn’t until this cycle that I discovered it. After chatting with Philip Withnall, we agreed that with a minor redesign, the result would be appropriate for GNOME 3. And here it is:

    [Screenshot: Hitori]

    The gameplay is similar to Sudoku, but much faster-paced. The goal is to paint squares such that the same digit appears in each row and column no more than once, without ever painting two horizontally- or vertically-adjacent squares and without ever creating a set of unpainted squares that is disconnected both horizontally and vertically from the rest of the unpainted squares. (This sounds a lot more complicated than it is: think about it for a bit and it’s really quite intuitive.) You can usually win each game in a minute or two, depending on the selected board size.

    Mines

    For Mines, the screenshots speak for themselves. The new design is by Allan Day, and was implemented by Robert Roth.

    [Screenshot: Mines 3.12]
    [Screenshot: Mines 3.14]

    There is only one gameplay change: you can no longer get a hint to help you out of a tough spot at the cost of a small time penalty. You’ll have to actually guess which squares have mines now.

    Right now, the buttons on the right disappear when the game is in progress. This may have been a mistake, which we’ll revisit in 3.16. You can comment in Bug #729250 if you want to join our healthy debate on whether or not to use colored numbers.

    Sudoku

    Sudoku has been rewritten in Vala with the typical GNOME emphasis on simplicity and ease of use. The design is again by Allan Day. Christopher Baines started work on the port for a Google Summer of Code project in 2012, and Parin Porecha completed the work this summer for his own Google Summer of Code project.

    [Screenshot: Sudoku 3.12 (note: not possible to get rid of the wasted space on the sides)]
    [Screenshot: Sudoku 3.14]

    We’re also using a new Sudoku generator, QQwing, for puzzle generation. This allows us to avoid reimplementing bugs in our old Sudoku generator (which is documented to have generated at least one impossible puzzle, and sometimes did a very poor job of determining difficulty), and instead rely on a project completely focused on correct Sudoku generation. Stephen Ostermiller is the author of QQwing, and he worked with us to make sure QQwing met our needs by implementing symmetric puzzle generation and merging changes to make it a shared library. QQwing is fairly fast at generating puzzles, so we’ve dropped the store of pregenerated puzzles that Sudoku 3.12 used and now generate puzzles on the fly instead. This means a small (1-10 second) wait if you’re printing dozens of puzzles at once, but it ensures that you no longer get the same puzzle twice, as sometimes occurred in 3.12.

    If you noticed from the screenshot, QQwing often uses more interesting symmetries than our old generator did. For the most part, I think this is exciting — symmetric puzzles are intended to be artistic — but I’m interested in comments from players on whether we should disable some of the symmetry options QQwing provides if they’re too flashy. We also need feedback on whether the difficulty levels are set appropriately; I have a suspicion that QQwing’s difficulty rating may not be as refined as our old one (when it was working properly), but I could be completely wrong: we really need player feedback to be sure.

    A few features from Sudoku 3.12 did not survive the redesign, or changed significantly. Highlighter mode is now always active and uses a subtle gray instead of rainbow colors. I’m considering making it a preference in 3.16 and turning it off by default, since it’s primarily helpful for keyboard users and seems to get in the way when playing with a mouse. The old notes are now restricted to digits in the top row of the cell, and you set them by right-clicking in a square. (The Ctrl+digit shortcuts will still work.) This feels a lot better, but we need to figure out how to make notes more discoverable to users.  Most notably, the Track Additions feature is completely gone, the victim of our desire to actually ship this update. If you used Track Additions and want it back, we’d really appreciate comments in Bug #731640. Implementation help would be even better. We’d also like to bring back the hint feature, which we removed because the hints in 3.12 were only useful when an easy move exists, and not very helpful in a tough position. Needless to say, we’re definitely open to feedback on all of these changes.

    Other Games

    We received a Lights Off bug report that the seven-segment display at the bottom of the screen was difficult to read, and didn’t clearly indicate that it corresponded to the current level. With the magic of GtkHeaderBar, we were able to remove it. The result:

    [Screenshot: Lights Off 3.12]
    [Screenshot: Lights Off 3.14]

    Robots was our final game (from the historical gnome-games package, so discounting Aisleriot) with a GNOME 2 menu bar. No longer:

    [Screenshot: Robots 3.12]
    [Screenshot: Robots 3.14]

    It doesn’t look as slick as Mines or Sudoku, but it’s still a nice modernization.

    I think that’s enough screenshots for one blog post, but I’ll also mention that Swell Foop has switched to using the dark theme (which blends in better with its background), Klotski grew a header bar (so now all of the historical gnome-games have a header bar as well), and Chess will now prompt the player at the first opportunity to claim a draw, allowing us to remove the confusing Claim Draw menu item and also the gear menu with it. (It’s been replaced by a Resign button.)

    Easier Easy Modes

    The computer player in Four-in-a-row used to be practically impossible to defeat, even on level one. Nikhar Agrawal wrote a new artificial intelligence for this game as part of his Google Summer of Code project, so now it's actually possible to win at connect four. And beginning with Iagno 3.14.1, the computer player is much easier to beat when set to level one (the default). Our games are supposed to be simple to play, and it's not fun when the easiest difficulty level is overwhelming.

    Teaser

    There have also been plenty of smaller improvements to other games. In particular, Arnaud Bonatti has fixed several Iagno and Sudoku bugs, and improved the window layouts for several of our games. He also wrote a new game that will appear in GNOME 3.16.  But that has nothing to do with 3.14, so I can’t show you that just yet, now can I? For now, I will just say that it will prominently feature the Internet’s favorite animal.

    Happy gaming!

    Mon 2014/Oct/20

    • Together with GNOME 3.14, we have released Web 3.14. Michael Catanzaro, who has been doing an internship at Igalia for the last few months, wrote an excellent blog post describing the features of this new release. Go and read his blog to find out what we've been doing while we wait for his new blog to be syndicated to Planet GNOME.

    • I've started doing two exciting things lately. The first one is Ashtanga yoga. I had been wanting to try yoga for a long time now, as swimming and running have been pretty good for me but at the same time have made my muscles pretty stiff. Yoga seemed like the obvious choice, so after much thought and hesitation I started visiting the local Ashtanga Yoga school. After a month I'm starting to get somewhere (i.e. my toes) and I'm pretty much addicted to it.

      The second thing is that I started playing the keyboards yesterday. I used to toy around with keyboards when I was a kid but I never really learned anything meaningful, so when I saw an ad for a second-hand WK-1200, I couldn't resist and got it. After an evening of practice I already got the feel of Cohen's Samson in New Orleans and the first 16 bars of Verdi's Va, pensiero, but I'm still horribly bad at playing with both hands.

    October 17, 2014

    2014-10-17: Friday

    • Early to rise; quick call, mail, breakfast; continued on slideware - really thrilled to use droidAtScreen to demo the LibreOffice on Android viewer.
    • Off to the venue in the coach; prepped slides some more, gave a talk - rather a hard act to follow at the end of the previous talk: a (male) strip-tease, mercifully aborted before it went too far. Presented my slides, informed by a few recent local discussions:
      Hybrid PDF of LibreOffice under-development slides
    • Quick lunch, caught up with mail, customer call, poked Neil & Daniel, continued catching up with the mail & interaction backlog.
    • Conference ended - overall an extremely friendly & positive experience, in a lovely location - most impressed by my first trip to Brazil; kudos to the organizers; and really great to spend some time with Eliane & Olivier on their home turf.

    ffs ssl

    I just set up SSL/TLS on my web site. Everything can be had via https://wingolog.org/, and things appear to work. However the process of transitioning even a simple web site to SSL is so clownshoes bad that it's amazing anyone ever does it. So here's an incomplete list of things that can go wrong when you set up TLS on a web site.

    You search "how to set up https" on the Googs and click the first link. It takes you here which tells you how to use StartSSL, which generates the key in your browser. Whoops, your private key is now known to another server on this internet! Why do people even recommend this? It's the worst of the worst of Javascript crypto.

    OK so you decide to pay for a certificate, assuming that will be better, and because who knows what's going on with StartSSL. You've heard of RapidSSL so you go to rapidssl.com. WTF their price is 49 dollars for a stupid certificate? Your domain name was only 10 dollars, and domain name resolution is an actual ongoing service, unlike certificate issuance that just happens one time. You can't believe it so you click through to the prices to see, and you get this:

    Whatttttttttt

    OK so I'm using Epiphany on Debian and I think that uses the system root CA list which is different from what Chrome or Firefox do but Jesus this is shaking my faith in the internet if I can't connect to an SSL certificate provider over SSL.

    You remember hearing something on Twitter about cheaper certs, and oh ho ho, it's rapidsslonline.com, not just RapidSSL. WTF. OK. It turns out Geotrust and RapidSSL and Verisign are all owned by Symantec anyway. So you go and you pay. Paying is the first thing you have to do on rapidsslonline, before anything else happens. Welp, cross your fingers and take out your credit card, cause SSLanta Clause is coming to town.

    Recall, distantly, that SSL has private keys and public keys. To create an SSL certificate you have to generate a key on your local machine, which is your private key. That key shouldn't leave your control -- that's why the DigitalOcean page is so bogus. The certification authority (CA) then needs to receive your public key and then return it signed. You don't know how to do this, because who does? So you Google and copy and paste command line snippets from a website. Whoops!

    Hey neat it didn't delete your home directory, cool. Let's assume that your local machine isn't rooted and that your server isn't rooted and that your hosting provider isn't rooted, because that would invalidate everything. Oh what so the NSA and the five eyes have an ongoing program to root servers? Um, well, water under the bridge I guess. Let's make a key. You google "generate ssl key" and this is the first result.

    # openssl genrsa -des3 -out foo.key 1024
    

    Whoops, you just made a 1024-bit key! I don't know if those are even accepted by CAs any more. Happily if you leave off the 1024, it defaults to 2048 bits, which I guess is good.

    Also you just made a key with a password on it (that's the -des3 part). This is eminently pointless. In order to use your key, your web server will need the decrypted key, which means it will need the password to the key. Adding a password does nothing for you. If you lost your private key but you did have it password-protected, you're still toast: the available encryption cyphers are meant to be fast, not hard to break. Any serious attacker will crack it directly. And if they have access to your private key in the first place, encrypted or not, you're probably toast already.
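    So what you probably want instead is just a 2048-bit key with no passphrase (a sketch, not gospel):

    # openssl genrsa -out foo.key 2048
    # chmod 600 foo.key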

    OK. So let's say you make your key, and make what's called the "CSR", to ask for the cert.

    # openssl req -new -key foo.key -out foo.csr
    

    Now you're presented with a bunch of pointless-looking questions like your country code and your "organization". Seems pointless, right? Well now I have to live with this confidence-inspiring dialog, because I left off the organization:

    Don't mess up, kids! But wait there's more. You send in your CSR, finally figure out how to receive mail for hostmaster@yourdomain.org because that's what "verification" means (not, god forbid, control of the actual web site), and you get back a certificate. Now the fun starts!

    How are you actually going to serve SSL? The truly paranoid use an out-of-process SSL terminator. Seems legit except if you do that you lose any kind of indication about what IP is connecting to your HTTP server. You can use a more HTTP-oriented terminator like bud but then you have to mess with X-Forwarded-For headers and you only get them on the first request of a connection. You could just enable mod_ssl on your Apache, but that code is terrifying, and do you really want to be running Apache anyway?

    In my case I ended up switching over to nginx, which has a startlingly underspecified configuration language, but for which the Debian defaults are actually not bad. So you uncomment that part of the configuration, cross your fingers, Google a bit to remind yourself how systemd works, and restart the web server. Haich Tee Tee Pee Ess ahoy! But did you remember to disable the NULL authentication method? How can you test it? What about the NULL encryption method? These are actual things that are configured into OpenSSL, and specified by standards. (What is the use of a secure communications standard that does not provide any guarantee worth speaking of?) So you google, copy and paste some inscrutable incantation into your config, turn them off. Great, now you are a dilettante tweaking your encryption parameters, I hope you feel like a fool because I sure do.

    Except things are still broken if you allow RC4! So you better make sure you disable RC4, which incidentally is exactly the opposite of the advice that people were giving out three years ago.
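    For the record, the sort of incantation all of this boils down to in nginx looks roughly like the following; the paths are made up and the cipher list is an example to adapt, not something to copy blindly:

    server {
        listen 443 ssl;
        server_name wingolog.org;

        ssl_certificate     /etc/nginx/ssl/wingolog.org.chained.crt;
        ssl_certificate_key /etc/nginx/ssl/wingolog.org.key;

        # No SSLv3 (POODLE), no NULL auth/encryption, no RC4.
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'HIGH:!aNULL:!eNULL:!RC4';
        ssl_prefer_server_ciphers on;
    }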

    OK, so you took your certificate that you got from the CA and your private key and mashed them into place and it seems the web browser works. Thing is though, the key that signs your certificate is possibly not in the actual root set of signing keys that browsers use to verify the key validity. If you put just your key on the web site without the "intermediate CA", then things probably work but browsers will make an additional request to get the intermediate CA's key, slowing down everything. So you have to concatenate the text files with your key and the one with the intermediate CA's key. They look the same, just a bunch of numbers, but don't get them in the wrong order because apparently the internet says that won't work!
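    Concretely, that concatenation step is something like this (filenames made up), with the resulting bundle being what the server config points at:

    # your cert first, then the intermediate CA's cert
    # cat wingolog.org.crt intermediate-ca.crt > wingolog.org.chained.crt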

    But don't put in too many keys either! In this image we have a cert for jsbin.com with one intermediate CA:

    And here is the same but with a different root that signed the GeoTrust Global CA certificate. Apparently there was a time in which the GeoTrust cert hadn't been added to all of the root sets yet, and it might not hurt to include them all:

    Thing is, the first one shows up "green" in Chrome (yay), but the second one shows problems ("outdated security settings" etc etc etc). Why? Because the link from Equifax to Geotrust uses a SHA-1 signature, and apparently that's not a good idea any more. Good times? (Poor Remy last night was doing some basic science on the internet to bring you these results.)

    Or is Chrome denying you the green because it was RapidSSL that signed your certificate with SHA-1 and not SHA-256? It won't tell you! So you Google and apply snakeoil and beg your CA to reissue your cert, hopefully they don't charge for that, and eventually all is well. Chrome gives you the green.

    Or does it? Probably not, if you're switching from a web site that is also available over HTTP. Probably you have some images or CSS or Javascript that's being loaded over HTTP. You fix your web site to have scheme-relative URLs (like //wingolog.org/ instead of http://wingolog.org/), and make sure that your software can deal with it all (I had to patch Guile :P). Update all the old blog posts! Edit all the HTMLs! And finally, green! You're golden!

    Or not! Because if you left on SSLv3 support you're still broken! Also, TLSv1.0, which is actually greater than SSLv3 for no good reason, also has problems; and then TLS1.1 also has problems, so you better stick with just TLSv1.2. Except, except, older Android phones don't support TLSv1.2, and neither does the Googlebot, so you don't get the rankings boost you were going for in the first place. So you upgrade your phone because that's a thing you want to do with your evenings, and send snarky tweets into the ether about scumbag google wanting to promote HTTPS but not supporting the latest TLS version.

    So finally, finally, you have a web site that offers HTTPS and HTTP access. You're good, right? Except no! (Catching on to the pattern?) Because what happens is that people just type in web addresses to their URL bars like "google.com" and leave off the HTTP, because why type those stupid things. So you arrange for http://www.wobsite.com to redirect to https://www.wobsite.com for users that have visited the HTTPS site. Except no! Because any network attacker can simply strip the redirection from the HTTP site.

    The "solution" for this is called HTTP Strict Transport Security, or HSTS. Once a visitor visits your HTTPS site, the server sends a response that tells the browser never to fetch HTTP from this site. Except that doesn't work the first time you go to a web site! So if you're Google, you friggin add your name to a static list in the browser. EXCEPT EVEN THEN watch out for the Delorean.

    And what if they go to wobsite.com instead of the www.wobsite.com that you configured? Well, better enable HSTS for the whole site, but to do anything useful with such a request you'll need a wildcard certificate to cover the extra names, and those run like 150 bucks a year, for a one-bit change. Or, just get more single-domain certs and tack them onto your chain, using the precision tool cat, but don't add too many, because if you do you will overflow the initial congestion window of the TCP connection and you'll have to wait for an ACK on your certificate before you can actually exchange keys. Don't know what that means? Better look it up and be an expert, or your wobsite's going to be slow!

    If your security goals are more modest, as they probably are, then you could get burned the other way: you could enable HSTS, something could go wrong with your site (an expired certificate perhaps), and then people couldn't access your site at all, even if they have no security needs, because HTTP is turned off.

    Now you start to add secure features to your web app, safe in the knowledge that you have SSL. But better not forget to mark your cookies as secure, otherwise they can be leaked in the clear, and better not forget that your web site might also be served over HTTP. And better check up on when your cert expires, and better have a plan for embedded browsers that don't give the user any useful feedback about certificate status, and what about your CA's audit trail, and better stay on top of the new developments in security! Did you read it? Did you read it? Did you read it?
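    Marking a cookie as secure is at least cheap to do; a small Python illustration (the cookie name and value are invented):

        from http import cookies

        c = cookies.SimpleCookie()
        c["session"] = "abc123"
        c["session"]["secure"] = True     # never send this cookie over plain HTTP
        c["session"]["httponly"] = True   # keep it away from page JavaScript
        print(c.output())
        # Set-Cookie: session=abc123; HttpOnly; Secure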

    It's a wonder anything works. Indeed I wonder if anything does.

    XNG: GIFs, but better, and also magical

    It might seem like the GIF format is the best we’ll ever see in terms of simple animations. It’s quite an interesting format, but it doesn’t come without its downsides: quite old LZW-based compression, a limited color palette, and no support for using old image data in new locations.

    Two competing specifications for animations were developed: APNG and MNG. The two camps have fought wildly and we’ve never gotten a resolution, and different browsers support different formats. So, for the widest range of compatibility, we have just been using GIF… until now.

    I have developed a new image format which I’m calling “XNG”, which doesn’t have any of these restrictions, has the possibility to support more complex features, and works in existing browsers today. It doesn’t require any new features like <canvas> or <video>, or any JavaScript libraries. In fact, it works even with JavaScript disabled entirely. I’ve tested it in both Firefox and Chrome, and it works quite well in either. Just embed it like any other image, e.g. <img src="myanimation.xng">.

    It’s magic.

    Have a few examples:

    I’ve been looking for other examples as well. If you have any cool videos you’d like to see made into XNGs, write a comment and I’ll try to convert them. I wrote all of these XNG files out by hand.

    Over the next few days, I’ll talk a bit more about XNG. I hope all you hackers out there look into it and notice what I’m doing: I think there are certainly a lot of unexplored ideas in what I’ve developed. We can push this envelope further.

    EDIT: Yes, guys, I see all your comments. Sorry, I’ve been busy with other stuff, and haven’t gotten a chance to moderate all of them. I wasn’t ever able to reproduce the bug in Firefox about the image hanging, but Mario Klingemann found a neat trick to get Firefox to behave, and I’ve applied it to all three XNGs above.

    October 16, 2014

    2014-10-16: Thursday

    • To the venue, crazy handing out of collateral, various talks with people; Advisory Board call, LibreOffice anniversary cake cutting and eating (by massed hordes).
    • It is extraordinary, and encouraging, to see how many young ladies are at the conference, and (hopefully) getting engaged with Free Software: never seen so many at other conferences. As an unfortunate downside: was amused to fob off an unsolicited offer of marriage from a 15-year-old: hmm.
    • Chewed some mail, bus back in the evening; worked on slides until late, for talk tomorrow.

    The Wait Is Over: MimeKit and MailKit Reach 1.0

    After about a year in the making for MimeKit and nearly 8 months for MailKit, they've finally reached 1.0 status.

    I started really working on MimeKit about a year ago wanting to give the .NET community a top-notch MIME parser that could handle anything the real world could throw at it. I wanted it to run on any platform that can run .NET (including mobile) and do it with remarkable speed and grace. I wanted to make it such that re-serializing the message would be a byte-for-byte copy of the original so that no data would ever be lost. This was also very important for my last goal, which was to support S/MIME and PGP out of the box.

    All of these goals for MimeKit have been reached (partly thanks to the BouncyCastle project for the crypto support).

    At the start of December last year, I began working on MailKit to aid in the adoption of MimeKit. It became clear that without a way to interoperate with the various types of mail servers, .NET developers would be unlikely to adopt it.

    I started off implementing an SmtpClient with support for SASL authentication, STARTTLS, and PIPELINING.

    Soon after, I began working on a Pop3Client designed so that I could use MimeKit to parse messages on the fly, directly from the socket, without needing to read the message data line by line looking for a ".\r\n" sequence and concatenating the lines into a massive memory buffer before parsing could even begin. That, combined with the fact that MimeKit's message parser is orders of magnitude faster than any other .NET parser I could find, makes MailKit the fastest POP3 library the world has ever seen.
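    To make the contrast concrete, here is a rough Python sketch of the buffer-everything approach being argued against; MailKit itself is .NET and parses straight off the socket instead:

        def read_retr_response(sock_file):
            """Read one POP3 RETR body the slow way: buffer it all first."""
            lines = []
            for line in sock_file:            # e.g. sock.makefile("rb")
                if line == b".\r\n":          # end-of-message marker
                    break
                if line.startswith(b".."):    # undo POP3 dot-stuffing
                    line = line[1:]
                lines.append(line)
            return b"".join(lines)            # one big buffer, parsed afterwards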

    After a month or so of avoiding the inevitable, I finally began working on an ImapClient; the initial prototype took me roughly two weeks to produce (compared to a single weekend for each of the other protocols). After many months of implementing dozens of the more widely used IMAP4 extensions (including the GMail extensions) and tweaking the APIs (along with bug fixing) thanks to feedback from some of the early adopters, I believe that it is finally complete enough to call 1.0.

    In July, at the request of someone involved with a number of the IETF email-related specifications, I also implemented support for the new Internationalized Email standards, making MimeKit and MailKit the first - and only - .NET email libraries to support these standards.

    If you want to do anything at all related to email in .NET, take a look at MimeKit and MailKit. I guarantee that you will not be disappointed.

    Communities in Real Life

    Add this to the list of things I never expected to be doing: opening a grocery store.

    At last year’s Open Help Conference, I gave a talk titled Community Lessons From IRL. I told the story of how I got involved in opening a grocery store, and what I’ve learned about community work when the community is your neighbors.

    I live in Cincinnati, in a beautiful, historic, walkable neighborhood called Clifton. We pride ourselves on being able to walk to get everything we need. We have a hardware store, a pharmacy, and a florist. We have lots of great restaurants. We had a grocery store, but after generations of serving the people of Clifton, our neighborhood IGA closed its doors nearly four years ago.

    The grocery store closing hurt our neighborhood. It hurt our way of life. Other shops saw their business decline. Quite a few even closed their doors. At restaurants and coffee houses and barber shops, all anybody talked about was the grocery store being closed. When will it reopen? Has anybody contacted Trader Joe’s/Whole Foods/Fresh Market? Somebody should do something.

    “Somebody should do something” isn’t doing something.

    If there’s one thing I’ve learned from over a decade of working in open source, it’s that things only get done when people get up and do them. Talk is cheap, whether it’s in online forums or in the barber shop. So a group of us got up and did something.

    Last August, a concerned resident sent out a message that if anybody wanted to take action, she was hosting a gathering at her house. Sometimes just hosting a gathering at your house is all it takes to get the ball rolling. Out of that meeting came a team of people committed to bringing a full-service grocery store back to Clifton as a co-op, owned and controlled by the community.

    Thus was born Clifton Market.

    Clifton Market display in the window of the vacant IGA building

    For the last 14 months, I’ve spent whatever free time I could muster trying to open a grocery store. Along with an ever-growing community of volunteers, I’ve surveyed the neighborhood, sold shares, created a business plan, talked to contractors, negotiated real estate, and learned far more about the grocery industry than I ever expected. In many ways, I’ve been well-served by my experience working with volunteer communities in GNOME and other projects. But a lot of things are different when the project is in your backyard staring you down each day.

    Opening a grocery store costs money, and we’ve been working hard on raising the money through shares and owner loans. If you want to support our effort, you can buy a share too.
