GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

August 27, 2014

LIST YOUR BOOKS

Work on GNOME Books has continued, and a great update is just around the corner. Let’s dive in…

Once most of the functionality in Read Mode was implemented, it was time to start working on Overview Mode. I am very excited to write about this update. The app is getting really good, and it is starting to fulfill some basic requirements that make the user experience better.

OVERVIEW MODE

Overview Mode lists all the eBooks in EPUB format in the Books folder of your $HOME directory, showing each eBook’s cover. Books can be listed in two modes: as a list or as a grid of icons.

 list-view      grid-view

Later on, more information will be shown in list mode. An interesting feature would also be a book details dialog showing all the information about a book.

Clicking one of the books opens Read Mode with the contents of the chosen book.

READ MODE

Read Mode also received some design improvements. The title bar now updates with the book title. Icons for loading the table of contents and for page numbers (which will hopefully be discarded in the future) are part of the title bar.

read-mode

Hope you enjoyed this short post. The next step would be to create a roadmap towards the goal, so expect my update soon.

 


Object-oriented design best practices

Here is another book review, this time about object-oriented design.

Programming Best Practices

We can learn from our own experience, we can use our common sense, and we can learn from more experienced developers when contributing to an existing project. But to speed things up, reading a good old book is also a good idea.

For programming in general, not specifically for object-oriented design, see the blog post About code quality and maintainability.

Object-Oriented Design Heuristics

For OO design, the well-known Design Patterns book is often recommended. But most of the design patterns are rarely applied, or are useful only for big applications and frameworks. How often do you create an Abstract Factory or a Facade?

On the other hand, the book Object-Oriented Design Heuristics, by Arthur Riel, discusses in detail more than 60 guidelines − or heuristics − that can be applied to all projects, even a small codebase with three classes.

An example of a heuristic:
Keep related data and behavior in one place.

If a getter method is present in a class, why is the data used outside of the class? The behavior implemented outside the class can perhaps be moved into the class itself, so the getter can be replaced by a higher-level method.

Another example:
Distribute system intelligence horizontally as uniformly as possible, that is, the top-level classes in a design should share the work uniformly.

Following this heuristic helps avoid god classes. An example of a class that can be split in two is one where some methods use only attributes A and B, while other methods use only attributes C and D. In other words, if a class has non-communicating behavior − methods that operate on a proper subset of the data members of the class − then the class can be split.

Not all the heuristics can be followed all the time. That’s why they are called “heuristics”. But when a heuristic is violated, there should be good reasons behind that decision, and the developer should know what he or she is doing.

Event-driven programming

One missing concept in the book is event-driven programming.

With GObject, we are lucky to have a powerful signal system. It is a great way to decouple classes: when an object emits a signal, its class doesn’t know who receives it.

Conclusion

Programming best practices are useful for maintaining good code quality. Object-Oriented Design Heuristics will give you the knowledge to write better object-oriented code, whether for small projects, big projects, or API design in a library.

Provided for

Now the rest of the country to go

Power generator

GNOME design: saving space since 2009 (or so)

One of the pieces of feedback I often get about GNOME 3 concerns the use of space. People repeatedly say to me that GNOME 3 apps use too much space, particularly in terms of header bars and their buttons.

I find these comments about wasted space perplexing, because they seem contrary to one of the major themes of GNOME design over the past five years or so. It’s instructive to look at GNOME’s history in this area, and to count some pixels along the way.

Back during the tail-end of the GNOME 2.x days, those of us who were involved in GNOME design were strongly advocating for applications that had less chrome, and gave more space to content. We were particularly concerned about apps like Nautilus, and this kind of screenshot was the kind of thing we wanted to eradicate:

Nautilus 2.22

David Siegal wrote a great blog post about it at the time. The proportion of the window that was devoted to content rather than chrome was laughably small, as Andreas Nilsson artfully demonstrated:

Starting with this as a baseline, we can measure how space usage has changed in GNOME in the past four or five years. Back then, a Nautilus window used 171 vertical pixels for chrome (24 pixels each for the title bar, menu bar, and status bar, 55 for the tool bar, and 44 for the path bar).

By the time 3.0 came around, we had managed to reduce the amount of wasted space considerably. The stacked toolbar and path bar were gone, and the permanent status bar was replaced by a floating element. I count 63 pixels for the title bar and menu bar, and 53 pixels for the toolbar – that’s a space saving of 55 pixels.

Nautilus 3.0

Three releases later, more space-saving improvements had landed. The menu bar was gone, replaced by a single toolbar. Chrome stood at a height of 70 pixels – a saving of 46 pixels since 3.0, and a whopping 101 pixels since GNOME 2.

Nautilus 3.6

Finally, to bring the story up to date, in GNOME 3.12 we introduced header bars, which again resulted in a space saving. Nautilus today has 48 pixels of vertical chrome, less than a third of what it had in 2.22.

Nautilus 3.10

This history makes quite a nifty table…

Nautilus Version Vertical Chrome
2.22 171
3.0 116
3.6 70
3.12 48

You get a similar result for other GNOME applications. At the end of GNOME 2, gedit had 127 vertical pixels of chrome. Nowadays it is down to 72 pixels.

What’s more, today’s Nautilus compares favourably with file managers from other operating systems/desktop environments. KDE’s Dolphin uses 109 vertical pixels of chrome compared with Nautilus’s 48. Finder in the upcoming OS X version seems to have around 75 pixels.

So I am puzzled. In GNOME we have greatly reduced our use of chrome, to the point where we seem to use less chrome than obvious alternatives. And yet, we still get repeated comments that we use too much.

You could argue that there is still extraneous chrome that can be shaved off, of course. I’ve looked into the possibility myself, and what I found is that the potential savings really are negligible – in the region of four to six pixels. Compared with the dramatic savings we’ve seen in the past years, this is trivial stuff.

Besides, I don’t think that shaving four to six pixels is really the answer to why we get comments about space usage. It’s hard to know exactly what is going on, but I suspect that it is more down to the visual presentation of header bars [1], and something about how some people expect buttons to look. For some reason, buttons like this seem to hit a nerve:

square-button

It might well be something to do with shape, rather than size.

GNOME is actually using space better than it ever has done before, and is more efficient in this respect than at least some major rivals. This isn’t to say that there aren’t real issues behind the feedback we’ve got about space usage – obviously something is going on. It’s more of a subtle issue than many give credit for, though, and it’s certainly not because we are worse at using space than in the past.

[1] Perhaps in combination with their position. On a small laptop screen that is below eye level, something at the top is more noticeable, closer, and appears bigger.

GDK and threads

this article is meant as a guideline and an explanation for application developers still using the deprecated gdk_threads_* family of functions because of reasons, as well as application developers still using GTK+ 2.x.

newly written code should not use this API at all.

a programmer has a problem. “I know,” he said, “I will use threads!” now has he problems two. — old Romulan proverb

we all know that using GTK+ with threads is a fan-favourite question; I have routinely answered it on IRC, mailing lists, Stack Overflow, and even in person, multiple times. some time ago we actually fixed the documentation to provide an authoritative answer on how you should do your long-running, blocking operations on a thread, and then update the UI from within the main thread at various synchronization points.

sadly, both the API and the documentation that came before the tenet above became well-known, were lacking. the wording was ambiguous, and the examples were far from clear in showing the idiomatic way of initializing and using GTK+ in a multi-threaded application.

if I asked ten random developers using GTK+ for their applications what is the correct way of initializing the old threading support, I’d probably get the answer “call gdk_threads_init() before you call gtk_init()” from at least half of them. that answer is wrong, but most likely they never noticed it because they never ported their application to other platforms.

the correct answer for the question of how to initialize the old thread safety support in GTK+ is actually this idiomatic code snippet:

int
main (int argc, char *argv[])
{
  // initialize GDK thread support
  gdk_threads_init ();

  // acquire the GDK lock
  gdk_threads_enter ();

  // initialize GTK+
  gtk_init (&argc, &argv);

  // build your UI ...

  // start the main loop
  gtk_main ();

  // tear down your UI...

  // release the GDK lock
  gdk_threads_leave ();

  // exit
  return 0;
}

as you can see, we acquire the GDK lock before calling the GTK+ API, like the documentation says:

As always, you must also surround any calls to GTK+ not made within a signal handler with a gdk_threads_enter() / gdk_threads_leave() pair.

why is this needed? because the gtk_main() call will try to release the GDK lock before entering the main loop, and re-acquire it before returning the control flow to you. this is needed to avoid deadlocks from within GDK itself, since the GDK frame processing is going to acquire the GDK lock whenever it’s needed inside its own critical sections.

if you don’t acquire the lock, gtk_main() will try to release an unlocked mutex, and if you carefully read the g_mutex_unlock() documentation you will notice that doing so results in undefined behaviour. what does undefined behaviour mean? in GLib and GTK+, we use the term in the same sense as the ISO C standard uses it.

in this specific instance, if you’re on Linux, undefined behaviour does not mean much: by default, the GNU libc implementation of pthreads is permissive, so it will simply ignore the double unlock. if you’re running on an operating system from the *BSD family, however, your application will unceremoniously abort. hence why, so far, very few people have actually noticed this.

starting from GLib 2.42 (the next stable release), the GMutex implementation on Linux has been switched from a pure pthread wrapper to be futex-based. given that we don’t pay any penalty for it, we decided to ensure consistent behaviour, and proper usage of the API. this means that GLib 2.42 will abort if you try to clear an uninitialized or locked GMutex, as well as if you try to unlock an already unlocked one. this ensures that your code will actually be portable, and you’ll catch errors much more quickly.

this also means that non-idiomatic GTK+ code will start breaking on Linux as well, just like it would on non-Linux platforms.

since our documentation was not really good enough for people to use, and since we could not enforce proper idiomatic code at the toolkit level, GDK will try to compensate.


it’s important to note that the fix in GDK does not absolve you from fixing your code: you are doing something wrong. it will allow, though, existing GTK+ 2.x/early GTK+ 3.x code calling gdk_threads_init() in the wrong way to continue working even in the face of undefined behaviour. take this as a chance to rework your code not to use the GDK API to mark critical sections, and instead use the proper approach of worker threads notifying the UI through idle and timeout callbacks executed from within the main thread — an approach that does not require calling gdk_threads_init() at all.

GUADEC

Hi everyone,

Here’s another post about a summer intern’s GUADEC experience. Kindly accept my apologies for posting this so late. I’m afraid the delay is a bit too much even by IST (Indian Student Time) standards. :P

I had two reasons to be very excited about the conference. This was my first ever conference and my first ever international tour.

Until as late as the beginning of this year, I was a casual GNOME user. Only once I got in touch with my mentors during the GSoC period did I actually begin doing dev stuff. And even then, I’ve mostly confined myself to GNOME Games. Attending the conference was thus a very enriching experience for me with regard to getting to know the rest of the GNOME community.

The talks were mostly very fascinating. The tech talks helped me better understand the features GNOME has to offer and the non tech ones allowed me to better appreciate the struggles the community faces today. The sessions that I enjoyed the most though were the lightning talks.

The first lightning talks were given by the summer interns. I was a little nervous before going on stage, but the community is so supportive and encouraging that you forget all your fears. I’m sure that at one point I even managed to elicit laughter from the crowd (or at least from a couple of people). :D

One of the reasons I liked the lightning talks so much is that the talks given by my fellow interns and other senior members of the community served as a quick, yet detailed introduction to other GNOME modules. It’s always good to hear about things from people who are passionate about them, and it makes for a much more interesting digest of information than reading a non-personalized version of it somewhere on the internet.

I also learned a very important thing about myself during the conference: I have a hard time starting conversations with new people. I hope to get better at it, and soon. I now regret having talked so little to fellow GNOME enthusiasts; there was so much more to learn. But from the few conversations I did manage to have, I learned a lot about the very exciting things others have been doing in the community. Many a time I was simply awestruck at the sheer genius of the people around me.

Apart from the conference, there’s still a lot to write about the joys and disappointments of exploring a new country. But that would probably be a blog post of its own.

Before I sign off though, I would like to take this opportunity to thank GNOME and particularly the travel committee for partially sponsoring my GUADEC trip.

 

gnome_sponsored


a wingolog user's manual

Greetings, dear readers!

Welcome to my little corner of the internet. This is my place to share and write about things that are important to me. I'm delighted that you stopped by.

Unlike a number of other personal sites on the tubes, I have comments enabled on most of these blog posts. It's gratifying to me to hear when people enjoy an article. I also really appreciate it when people bring new information or links or things I hadn't thought of.

Of course, this isn't like some professional peer-reviewed journal; it's above all a place for me to write about my wanderings and explorations. Most of the things I find on my way have already been found by others, but they are no less new to me. As Goethe said, quoted in the introduction to The Joy of Cooking: "That which thy forbears have bequeathed to thee, earn it anew if thou wouldst possess it."

In that spirit I would enjoin my more knowledgeable correspondents to offer their insights with the joy of earning-anew, and particularly to recognize and banish the spectre of that moldy, soul-killing "well-actually" response that is present on so many other parts of the internet.

I've had a good experience with comments on this site, and I'm a bit lazy, so I take an optimistic approach to moderation. By default, comments are posted immediately. Every so often -- more often after a recent post, less often in between -- I unpublish comments that I don't feel contribute to the piece, or which I don't like for whatever reason. It's somewhat arbitrary, but hey, welcome to my corner of the internet.

This has the disadvantage that some unwanted comments end up published, then they go away. If you notice this happening to someone else's post, it's best to just ignore it, and in particular to not "go meta" and ask in the comments why a previous comment isn't there any more. If it happens to you, I'd ask you to re-read this post and refrain from unwelcome comments in the future. If you think I made an error -- it can happen -- let me know privately.

Finally, and it really shouldn't have to be said, but racism, sexism, homophobia, transphobia, and ableism are not welcome here. If you see such a comment that I should delete and have missed, let me know privately. However even among well-meaning people, and that includes me, there are ways of behaving that reinforce subtle bias. Please do point out such instances in articles or comments, either publicly or privately. Working on ableist language is a particular challenge of mine.

You can contact me via comments (anonymous or not), via email (wingo@pobox.com), twitter (@andywingo), or IRC (wingo on freenode). Thanks for reading, and happy hacking :)

Concatenate multiple streams gaplessly with GStreamer

Earlier this month I wrote a new GStreamer element that is now integrated into core and will be part of the 1.6 release. It solves yet another commonly asked question on the mailing lists and IRC: How to concatenate multiple streams without gaps between them as if they were a single stream. This is solved by the concat element now.

Here are some examples about how it can be used:

# 100 frames of the SMPTE test pattern, then the ball pattern
gst-launch-1.0 concat name=c ! videoconvert ! videoscale ! autovideosink  videotestsrc num-buffers=100 ! c.   videotestsrc num-buffers=100 pattern=ball ! c.

# Basically: $ cat file1 file2 > both
gst-launch-1.0 concat name=c ! filesink location=both   filesrc location=file1 ! c.   filesrc location=file2 ! c.

# Demuxing two MP4 files with h264 and passing them through the same decoder instance
# Note: this works better if both streams have the same h264 configuration
gst-launch-1.0 concat name=c ! queue ! avdec_h264 ! queue ! videoconvert ! videoscale ! autovideosink   filesrc location=1.mp4 ! qtdemux ! h264parse ! c.   filesrc location=2.mp4 ! qtdemux ! h264parse ! c.

If you run this in an application that also reports time and duration, you will see that concat preserves the stream time, i.e. the position reporting goes back to 0 when switching to the next stream and the duration is always that of the current stream. However, the running time will increase continuously from stream to stream.

Also, as you can see, this only works for a single stream (i.e. one video stream or one audio stream, not a container stream with audio and video). To gaplessly concatenate streams that each contain multiple tracks (e.g. one audio and one video track), a more complex pipeline involving multiple concat elements and the streamsynchronizer element is necessary to keep everything synchronized.

Details

The concat element has request sinkpads, and it concatenates streams in the order in which those sinkpads were requested. All streams except for the currently playing one are blocked until the currently playing one sends an EOS event, and then the next stream will be unblocked. You can request and release sinkpads at any time, and releasing the currently playing sinkpad will cause concat to switch to the next one immediately.

Currently concat only works with segments in GST_FORMAT_TIME and GST_FORMAT_BYTES format, and requires all streams to have the same segment format.

From the application side, you could implement the same behaviour as concat by using pad probes (waiting for EOS) and pad offsets (gst_pad_set_offset()) to adjust the running times. But using the concat element should make this a lot easier to implement.

August 26, 2014

2014-08-26: Tuesday

  • More working through mail & bugs.
  • Saddened to get a call from a rather upset Mother, apparently the victim of an unsolicited 'imax support' scam - which is sadly rather common. When I phoned them, they apparently fraudulently claimed to be a UK registered company - but Companies House had no record of that name. Supposedly they are based in Walzan Tower, Sector 5, Kolkata, India. Attempted to report it to the UK police, who appear interested only in the statistical collection of 'I was scammed' reports.
  • Finally got through the backlog of expenses.
  • Noticed Tamas' awesome blog on the great LibreOffice 3D work that we've been doing for 4.3 / master, great.

Endless changes ahead!

I know I haven’t blogged for a while, and definitely not as much as I would like, but that was partially because I was quite busy during my last days at Samsung (I left on the 25th of July), where I wanted to make sure I did not leave any loose ends before departing, and that everything was properly handed over to the right people there.

But that was one month ago… so what did I do since then? Many many things, and most of them away from a keyboard, at least until the past week. Main highlights:

  • One week travelling by car with my family all the way down to Spain from the UK, through France, visiting all the nice places we could (and could afford) in the way, which was a lot of fun and an incredible experience.
  • The goal of taking the car to Spain was to sell it once we were there and, surprisingly enough, we did it in record time, so one thing less to worry about…
  • 2 weeks in Spain having proper “relaxing holidays” to get some quality time off between the two jobs, to properly recharge batteries. Not that the previous week was not a holiday, but travelling 2200 km by car with two young kids in the back can be amazing and exhausting at the same time :-)
  • 1 week in the UK to make sure I had everything ready by the time I officially started in the new company, where I will initially be working from home: assemble a home office in my spare bedroom, and prepare my new laptop mainly. In the end, we (my wife helped me a lot) finished by Wednesday, so on Thursday we went for a last 2-day getaway to Wales (what a beautiful place!) by car, making the most that we were kids-free.

Endless Mobile logo

Therefore, as you can imagine, I didn’t have much time for blogging lately, but I would still like to share with the world my “change of affiliation”, so here it is: since yesterday I’m officially part of the amazing team at Endless Mobile, an awesome startup from San Francisco committed to breaking the digital divide in the developing world by taking GNOME-based technology to end users in ways that were not imaginable before. And I have to say that’s a vision I fell in love with from the very first time I heard about it (last year in Brno, during Matt’s keynote at GUADEC).

But just in case that was not awesome enough by itself, the other thing that made me fall in love with the company was precisely the team they have assembled, because even if I’m mostly a technical guy, I still greatly value the human side of the places I work. And in this regard Endless seems to be perfect, or even better!

So, I’m extremely happy these days because of this new challenge in front of me, and because of the opportunity I’m being given to have a real positive impact on the lives of millions of people who still can’t access technology as they should be able to. Also, I feel blessed and privileged for having been given the chance to be part of such an amazing team of people. Could not be happier right now! :)

Lastly, to finish this post, I would like to take this opportunity to thank my friend Joaquim, since he was the one who introduced me to Matt in the first place and “created” this opportunity for me. Thank you!

OpenGlucose: continued

I started working on the UI to display the results:

openglucose

It is made using a GtkApplicationWindow containing a WebKitWebView; the content is HTML/CSS/JS with jQuery, and the chart is made using jqplot.

To make testing easier, I also added a dummy device that has random data, it can be enabled by setting OPENGLUCOSE_DUMMY_DEVICE=1 in your env.

A lot more work is needed, but that’s a start.

Mon 2014/Aug/25

  • The Safety and Privacy team

    During GUADEC we had a Birds-of-a-feather session (BoF) for what eventually became the Safety Team. In this post I'll summarize the very raw minutes of the BoF.

    Locks on bridge

    What is safety in the context of GNOME?

    Matthew Garrett's excellent keynote at GUADEC made a point that GNOME should be the desktop that takes care of and respects the user, as opposed to being just a vehicle for selling stuff (apps, subscriptions) to them.

    I'll digress for a bit to give you an example of "taking care and respecting the user" in another context, which will later let me frame this for GNOME.

    Safety in cities

    In urbanism circles, there is a big focus on making streets safe for everyone, safe for all the users of the street. "Safe" here means many things:

    • Reducing the number of fatalities due to traffic accidents.
    • Reducing the number of accidents, even if they are non-fatal, because they waste everyone's time.
    • Making it possible for vulnerable people to use the streets: children, handicapped people, the elderly.
    • Reducing muggings and crime on the streets.
    • Reducing the bad health effects of a car-centric culture, where people can't walk to places they want to be.

    It turns out that focusing on safety automatically gives you many desirable properties in cities — better urbanism, not just a dry measure of "streets with few accidents".

    There is a big correlation between the speed of vehicles and the proportion of fatal accidents. Cities that reduce maximum speeds in heavily-congested areas will get fewer fatal accidents, and fewer accidents in general — the term that urbanists like to use is "traffic calming". In Strasbourg you may have noticed the signs that mark the central island as a "Zone 30", where 30 Km/h is the maximum speed for all vehicles. This lets motor vehicles, bicycles, and pedestrians share the same space safely.

    Zone 30 End of Zone 30

    Along with traffic calming, you can help vulnerable people in other ways. You can put ramps on curbs where you cross the street; this helps people on wheelchairs, people carrying children on strollers, people dragging suitcases with wheels, skaters, cyclists. On sidewalks you can put tactile paving — tiles with special reliefs so blind pedestrians can feel where the "walking path" is, or where the sidewalk is about to end, or where there is a street crossing. You can make traffic lights for pedestrians emit a special sound when it is people's turn to cross the street — this helps the blind as well as those who are paying attention to their cell phone instead of the traffic signals. You can make mass transit accessible to wheelchairs.

    Once you have slow traffic, accessible mass transit, and comfortable/usable sidewalks, you get more pedestrians. This leads to more people going into shops. This improves the local economy, and reduces the amount of money and time that people are forced to waste in cars.

    Once you have people in shops, restaurants, or cafes at most times of the day, you get fewer muggings — what Jane Jacobs would call "eyes on the street".

    Once people can walk and bike safely to places they actually want to go (the supermarket, the bakery, a cafe or a restaurant, a bank), they automatically get a little exercise, which improves their health, as opposed to sitting in a car for a large part of the day.

    Etcetera. Safety is a systemic thing; it is not something you get by doing one single thing. Not only do you get safer streets; you also get cities that are more livable and human-scaled, rather than machine-scaled for motor vehicles.

    And this brings us to GNOME.

    Safety in GNOME

    Strasbourg     Cathedral, below the rose

"Computer security" is not very popular among non-technical users, and for good reasons. People have friction with sysadmins, or with constrained systems that don't let them install programs without going through bureaucratic little processes. People get asked for passwords for silly reasons, like plugging a printer into their home computer. People get asked questions like "Do you want to let $program do $thing?" all the time.

    A lot of "computer security" is done from the viewpoint of the developers and the administrators. Let's keep the users from screwing up our precious system. Let's disallow people from doing things by default. Let's keep control for ourselves.

    Of course, there is also a lot of "computer security" that is desirable. Let's put a firewall so that vandals can't pwn your machine, and so that criminals don't turn your computer into a botnet's slave. Let's keep rogue applications (or rogue users) from screwing up the core of the system. Let's authenticate users so a criminal can't access your bank account.

    Security is putting an armed guard at the entrance of a bank; safety is having enough people in the streets at all times of the day so you don't need the police most of the time.

Security is putting iron bars on ground-floor windows so robbers can't get in easily; safety is putting iron railings on upper-floor balconies so you don't fall over.

    Security is disallowing end-user programs from reading /etc/shadow so they can't crack your login passwords; safety is not letting a keylogger run while the system is asking you for your password. Okay, it's security as well, but you get the idea.

    Safety is doing things that prevent harm to users.

    Strasbourg     Cathedral, door detail

    Encrypt all the things

A good chunk of the discussion during the meeting at GUADEC was about existing things that make our users unsafe, or that inadvertently reveal users' information. For example, we have some things that don't use SSL/TLS by default. Gnome-weather fetches weather information over unencrypted HTTP. This lets snoopers figure out your current location, your planned future locations, or the locations where people related to you might live. (And in more general terms, the weather forecasts you check are nobody's business but yours.)

    Gnome-music similarly fetches music metadata over an unencrypted channel. In the best case it lets a snooper know your taste in music; in the worst case it lets someone correlate your music downloads with your music purchases — the difference is a liability to you.

    Gnome-maps fetches map tile data over an unencrypted connection. This identifies places you may intend to travel; it may also reveal your location.

    Strasbourg     Cathedral, stained glass window

    But I don't have a nation-state adversary

    While the examples above may seem far-fetched, they go back to one of the biggest problems with the Internet: unencrypted content is being used against people. You may not have someone to hide from, but you wouldn't want to be put in an uncomfortable situation just from using your software.

    You may not be a reckless driver, but you still put on seatbelts (and you would probably not buy a car without seatbelts).

    We are not trying to re-create Tails, the distro that tries to maintain your anonymity online, but we certainly don't want to make things easy for the bad guys.

    During the meeting we agreed to reach out to the Tails / Tor people so that they can tell us where people's identifying information may leak inadvertently; if we can fix these things without a specialized version of the software, everyone will be safer by default.

    Sandbox all the things

    While auditing code, or changing code to use encrypted connections, can be ongoing "everyday" work, there's a more interesting part to all of this. We are moving to sandboxed applications, where running programs cannot affect each other, or where an installed program doesn't affect the installed dependencies for other programs, or where programs don't have access to all your data by default. See Allan Day's posts on sandboxed apps for a much more detailed explanation of how this will work (parts one and two).

    We have to start defining the service APIs that will let us keep applications isolated from the user's personal data, that is, to avoid letting programs read all of your home directory by default.

    Some services will also need to do scrubbing of sensitive data. For example, if you want to upload photos somewhere public, you may want the software to strip away the geolocation information, the face-recognition data, and the EXIF data that reveals what kind of expensive camera you have. Regular users are generally not aware that this information exists; we can keep them safer by asking for their consent before publishing that information.
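    A consent-based scrubber for such a service could look like the following sketch. The field names are made up for illustration, not a real EXIF schema or GNOME API:

    ```python
    # Metadata fields a user probably doesn't want published by default.
    # These names are illustrative, not a real EXIF schema.
    SENSITIVE = {"gps_latitude", "gps_longitude", "face_regions", "camera_model"}

    def scrub_metadata(metadata, consented):
        """Drop every sensitive field the user did not explicitly consent
        to share; harmless fields pass through untouched."""
        return {key: value for key, value in metadata.items()
                if key not in SENSITIVE or key in consented}

    photo = {
        "title": "Cathedral at dusk",
        "gps_latitude": 48.5818,
        "gps_longitude": 7.7509,
        "camera_model": "ExpensiveCam 9000",
    }

    # The user consented to nothing: only the title survives.
    print(scrub_metadata(photo, consented=set()))
    ```

    The important design choice is the default: sensitive fields are stripped unless the user opts in, rather than shared unless the user opts out.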

    Strasbourg Cathedral, floor grate

    Consent, agency, respect

    A lot of uncomfortable, inconvenient, or unsafe software is like that because it doesn't respect you.

    Siloed software that doesn't let you export your data? It denies you your agency to move your data to other software.

    Software that fingerprints you and sends your information to a vendor? It doesn't give you informed consent. Or as part of coercion culture, it sneakily buries that consent in something like, "by using this software, you agree to the Terms of Service" (terms which no one ever bothers to read, because frankly they are illegible).

    Software that sends your contact list to the vendor so it can spam them? This is plain lack of respect, lack of consent, and more coercion, as those people don't want to be spammed in the first place (and you don't want to be the indirect cause).

    Allan's second post has a key insight:

    [...] the primary purpose of posing a security question is to ascertain that a piece of software is doing what the user wants it to do, and often, you can verify this without the user even realising that they are being asked a question for security purposes.

    We can take this principle even further. The moment when you ask a security question can be an opportunity to present useful information or controls – these moments can become a valuable, useful, and even enjoyable part of the experience.

    In a way, enforcing the service APIs upon applications is a way of ensuring that they ask for your consent to do things, and that they respect your agency in doing things which naive security-minded software may disallow "for security reasons".

    Here is an example:

    Agency: "I want to upload a photo"
    Safety: "I don't want my privacy violated"
    Consent: "Would you like to share geographical information, camera information, tags?"

    L'amour

    Pattern Language

    We can get very interesting things if we distill these ideas into GNOME's Pattern Language.

    Assume we had patterns for Respect the user's agency, for Obtain the user's consent, for Maintain the user's safety, and for Respect the user's privacy. These are not written yet, but they will be, shortly.

    We already have prototypal patterns called Support the free ecosystem and User data manifesto.

    Pattern languages start being really useful when you have a rich set of connections between the patterns. In the example above about sharing a photo, we employ the consent, privacy, and agency patterns. What if we add Support the free ecosystem to the mix? Then the user interface to "paste a photo into your instant-messaging client" may look like this:

    Mockup of 'Insert photos' dialog

    Note the defaults:

    • Off for sharing metadata which you may not want to reveal by default: geographical information, face recognition info, camera information, tags. This is the Respect the user's privacy pattern in action.

    • On for sharing the license information, and to let you pick a license right there. This is the Support the free ecosystem pattern.

    If you dismiss the dialog box with "Insert photos", then GNOME would do two things: 1) scrub the JPEG files so they don't contain metadata which you didn't choose to share; 2) note in the JPEG metadata which license you chose.

    In this case, Empathy would not communicate with Shotwell directly — applications are isolated. Instead, Empathy would make use of the "get photos" service API, which would bring up that dialog, and which would automatically run the metadata scrubber.

    Resources

Sun 2014/Jul/27

REMINDER: GNOME Shell Magnification Feedback Survey 2014

A wee reminder of the GNOME Shell Magnification feedback survey. So far there have been 8 respondents, which is great, but it would be even greater to have some more! So far the survey results seem to suggest that slow performance and degraded graphics are a problem for users... But don't take my word for it: try it out, see what you think, and report back with your verdict if you will! Results will be published soon.

Background

The GNOME Shell Magnifier was first authored in a commit by Colin Walters in 2010. Since its birth it has benefited from contributions made by various developers, so to alleviate any confusion there may be about the Magnifier's roots, the complete Magnifier git log is as follows.

Date | Subject | Author | Files, lines changed
2014-06-24 | js: Adapt to GSettings API change | Jasper St. Pierre | 1 file, -2/+2
2014-02-13 | Magnifier: clip the UI group clone to the allocation | Giovanni Campagna | 1 file, -1/+2
2014-02-13 | Magnifier: use the system noise texture for the dead area | Giovanni Campagna | 1 file, -0/+4
2014-02-13 | Magnifier: don't listen for focus/tracker events if the magnifier is not active | Giovanni Campagna | 1 file, -6/+23
2014-02-13 | Magnifier: demote exceptions reading focus/caret position | Giovanni Campagna | 1 file, -2/+16
2014-02-13 | Magnifier: fix a warning when calling setActive() with the same value | Giovanni Campagna | 1 file, -7/+10
2014-02-08 | Magnifier: Restore crosshairs | Magdalen Berns | 1 file, -75/+56
2014-02-05 | Magnifier: take x,y from center of focused widget | Magdalen Berns | 1 file, -1/+2
2014-02-04 | Magnifier: Disable unredirect when active | Magdalen Berns | 1 file, -2/+6
2013-09-13 | Magnifier: don't initialize if we don't need it | Giovanni Campagna | 1 file, -59/+80
2013-09-12 | ShellGlobal: use MetaCursorTracker to query the pointer position | Giovanni Campagna | 1 file, -3/+3
2013-09-07 | Remove unused functions | Magdalen Berns | 1 file, -80/+0
2013-09-05 | Magnifier: Implement focus and caret tracking | Magdalen Berns | 1 file, -20/+135
2013-08-19 | Replace ShellXFixesCursor with MetaCursorTracker | Giovanni Campagna | 1 file, -10/+10
2012-08-31 | magnifier: Don't use some deprecated APIs | Jasper St. Pierre | 1 file, -14/+15
2012-08-31 | magnifier: Don't set the size of the uiGroup | Jasper St. Pierre | 1 file, -2/+0
2012-08-31 | magnifier: Round the uiGroup to integer positions | Jasper St. Pierre | 1 file, -1/+1
2012-08-31 | magnifier: Use PointerWatcher to poll the mouse pointer | Jasper St. Pierre | 1 file, -9/+6
2012-08-31 | js: Fix up for Clutter.Color changes | Jasper St. Pierre | 1 file, -5/+2
2012-08-06 | magnifier: Using properly 'color-saturation' | Alejandro Piñeiro | 1 file, -5/+5
2012-08-06 | magnifier: 'color-saturation' is a double not a boolean | Alejandro Piñeiro | 1 file, -1/+1
2012-07-13 | magnifier: Fix grayscale effect | Florian Müllner | 1 file, -0/+1
2012-07-09 | magnifier: fix a copy/paste typo | Cosimo Cecchi | 1 file, -1/+1
2012-07-06 | Add a grayscale effect | Jasper St. Pierre | 1 file, -0/+49
2012-05-16 | Magnifier: Add brightness and contrast functionality | Joseph Scheuhammer | 1 file, -0/+61
2012-05-16 | Magnifier: Add brightness and contrast functionality | Joseph Scheuhammer | 1 file, -1/+232
2012-05-02 | Refactor show()/hide() sequences | Jasper St. Pierre | 1 file, -4/+1
2012-01-26 | magnifier: Handle screen size changes | Rui Matos | 1 file, -4/+33
2011-12-15 | Do not use the default stage | Jasper St. Pierre | 1 file, -1/+1
2011-11-24 | Port everything to class framework | Giovanni Campagna | 1 file, -15/+9
2011-11-04 | magnifier: Use enum from gsettings-desktop-schemas | Florian Müllner | 1 file, -37/+23
2011-10-11 | *.js: Make emacs modelines consistent | Dan Winship | 1 file, -1/+1
2011-02-17 | magnifier: Adjust for removal of 'show-magnifier' key | Florian Müllner | 1 file, -6/+10
2011-02-13 | magnifier: crosshairs opacity is now a double | Bastien Nocera | 1 file, -7/+7
2011-02-11 | Move magnifier schemas to gsettings-desktop-schemas | Thomas Wood | 1 file, -1/+1
2010-12-03 | magnifier: Fix DND when the magnifier is active | Florian Müllner | 1 file, -0/+3
2010-12-03 | Fix redundant calls to global.get_pointer() | Owen W. Taylor | 1 file, -23/+19
2010-12-03 | Improve the algorithm for proportional mouse tracking | Owen W. Taylor | 1 file, -4/+9
2010-12-03 | Refactor Magnifier.ZoomRegion to avoid permanent Clutter.Clone | Owen W. Taylor | 1 file, -346/+336
2010-11-01 | magnifier: Sync MouseTrackingMode values with the gsettings enum | Florian Müllner | 1 file, -3/+3
2010-09-16 | Add Universal Access status indicator | Giovanni Campagna | 1 file, -0/+6
2010-09-10 | Bug 622414 - Port magnifier to GSettings | Milan Bouchet-Valat | 1 file, -151/+99
2010-07-18 | Clean up unused includes | Florian Müllner | 1 file, -3/+0
2010-06-25 | Missed some 'Shell.GConf' references in switch to GSettings. | Joseph Scheuhammer | 1 file, -12/+12
2010-06-18 | Migrate to GSettings | Milan Bouchet-Valat | 1 file, -25/+26
2010-05-19 | minor js cleanups | Dan Winship | 1 file, -3/+3
2010-05-17 | magnifier: use global.get_pointer instead of gdk_window_get_pointer | Dan Winship | 1 file, -10/+5
2010-05-13 | Don't use double quotes for things that don't need to be translated | Marina Zhurakhinskaya | 1 file, -17/+17
2010-05-11 | Add missing magnifier files from the last commit | Colin Walters | 1 file, -0/+1484

The magnifier could do with some work. It currently has 17 unresolved issues in Bugzilla, which are as follows:

Bug | Sev | Pri | OS | Product | Status | Summary
618397 | nor | Nor | Linux | gnome-shell | UNCO | 20Hz polling when magnifier is enabled
646942 | nor | Nor | Linux | gnome-shell | UNCO | desktop zoom has some oddities in multi-head mode
649535 | maj | Nor | Linux | gnome-shell | UNCO | Magnifier turns the screen blue when notifications are received in the message tray
666612 | min | Nor | Linux | gnome-shell | UNCO | Wallpaper is visible with magnifier when modal dialog is attached to a maximized window
669192 | min | Nor | Linux | gnome-shell | UNCO | D-Bus: org.gnome.Magnifier.setActive incomplete.
672325 | nor | Nor | Linux | gnome-shell | UNCO | Magnifier freezes shell when activities screen invoked
708985 | nor | Nor | Linux | gnome-shell | UNCO | Mouspointer disappears in Fullscreengames with magnifier enabled
710191 | nor | Nor | Linux | gnome-shell | UNCO | Magnifier: Taking a screenshot crashes gnome-shell
710194 | nor | Nor | Linux | gnome-shell | UNCO | Magnifier: View is poor quality because the image is not scaled
710470 | cri | Nor | Linux | gnome-shell | UNCO | Wayland: Reliable crash when typing in a text view with the magnifier enabled
720714 | nor | Nor | Linux | gnome-shell | UNCO | Magnifier: Focus Tracking should only track the Active Window when gnome-terminal is running in background
720715 | nor | Nor | Linux | gnome-shell | UNCO | Magnifier: Focus Tracking flipps on left screen edge in some cases
720716 | nor | Nor | Linux | gnome-shell | UNCO | Magnifier: Focus Tracking should focus "objects" more complete
720723 | nor | Nor | Linux | gnome-shell | UNCO | Magnifier: Focus Tracking should jump more smooth to the next focus point.
725129 | nor | Nor | Linux | gnome-shell | UNCO | [RFE] Move the magnified screen area with a keyboard shortcut
728848 | cri | Nor | Linux | gnome-shell | UNCO | Switching the magnifier on makes Gnome-shell so unstable that it is impossible to switch it off.
633573 | nor | Nor | Linux | gnome-shell | NEW | Magnifier should turn off when screen blanked


If you like my work as a volunteer GNOME contributor and past Google Summer of Code (GSoC) student with GNOME and Scientific Ruby, please consider helping me meet the cost of my trip to the GSoC Mentor Summit in San Jose this October, in support of my contributions to FOSS, by donating using the link below.

Click here to lend your support to: Funding to attend San Jose GSoC 10 Year Reunion and make a donation at pledgie.com !

August 25, 2014

2014-08-25: Monday

  • Chewed mail, caught up with the weekend backlog of customer mail, partner pieces etc. Product and consulting meetings. J. kindly filing the backlog of expenses; unwound some accommodation issues for Bern.

Coding Time is Over… ?

The Google Summer of Code has come to an end. It was an incredible time, thanks to the many people with whom I worked and talked. This blog post is intended to give an overview of what was done during this project and how my involvement with GNOME may continue. If you already know what my project is about, or only want to know what was done, you might want to skip the “Goal” paragraphs.

The Project

The project consisted of three big parts:

  1. ISO Downloading
  2. Express Installation
  3. Bugfixing

All three parts were addressed to some extent during the project. Everything described here is already committed and will be released with GNOME 3.14 unless mentioned otherwise.

ISO Downloading

Goal

The goal of this part was to make Boxes able to download media directly. The “stretch goal” was to recognize ISOs while downloading, or from the URI, so the user could enter all needed information for the VM, click “Create”, and have it downloaded and installed automatically without any need to wait.

Reality

The main goal was achieved. This is how it looks:

Boxes downloading an ISO

With this, Boxes also gets a few improvements to the driver downloading process. (There were some memory issues, and downloads couldn't be aborted internally.) Since I underestimated the work that needed to be done, I was not able to implement the stretch goal.

Express Installation Improvements

Goal

In this part I intended to widen Boxes' support for express installation. The main goal was support for Debian-based distributions, but we also had some specific distributions on the list:

  • Debian
  • Ubuntu
  • Windows 8.1
  • Windows Vista
  • openSUSE
  • Linux MINT
  • Damn Small Linux
  • ArchLinux

Reality

During the first months I worked hard to make initrd injection of express-installation scripts possible. This is needed in order to express-install some distributions; for Debian in particular it is a common method. To make this possible I had to update the libarchive VAPI file and write a Vala wrapper for libarchive, so that Boxes can handle archives without additional runtime dependencies. (In fact, I was able to throw out one dependency thanks to the wrapper.) I spent much more time on this groundwork than I initially estimated.

After the groundwork was covered, I started working on the Debian express-installation script. Due to a bug in the preseed installer, this also took far longer than estimated.

At GUADEC I managed to get a draft of an openSUSE express-installation script working. Due to time limitations and the other goals, I have not yet been able to clean up and commit these changes. Since libosinfo is not bound to the GNOME release schedule, you will probably get openSUSE express installation soon anyway.

Because of the problems described above, I did not manage to do more than some experiments with other distributions and Windows. Note, however, that express installation of Ubuntu from the alternate ISO is possible with a slightly modified Debian installation script. Since we currently do not support express installation for live ISOs, and Ubuntu no longer seems to ship the alternate ISO, there currently seems to be no way to make Ubuntu express installation work.

A long-term fix would be to add support for live media in general, or to provide a means to distinguish non-express-installable live media from express-installable live media.

Express-installation support for more distributions should not be that problematic anymore; the most important groundwork is now covered.

Bugfixing

During the project I also made an effort to make Boxes better and more stable in general.

  • I tested and reviewed patches other people contributed. I was especially involved in the review of the many code refactoring patches needed for multiple windows.
  • I triaged some bugs and tried to feed the discussions to get them solved.
  • I fixed some bugs myself. I mainly improved accessibility support with patches addressing the following issues:
    • I added some common shortcuts here and there. We now have consistent forward, back and cancel shortcuts throughout the application, plus other previously missing shortcuts such as F1 for help.
    • I worked on the Boxes stylesheet to make it less dependent on hardcoded colors. This results in fewer incompatibility problems with foreign themes, especially themes that do not provide a dark variant, like the “High Contrast” accessibility theme.
    • Miscellaneous things.

The People

Summary of the following paragraph: thanks everyone for making this possible!

The Google Summer of Code was just as expected – plus GUADEC, which made it better. I want to thank Google for providing the resources and organizing the Google Summer of Code. I also want to thank the GNOME Foundation for providing financial aid to GSoC students who need it so they can attend GUADEC (which is great). Special thanks go to Zeeshan Ali Khattak (zeenix), who mentored me and patiently reviewed all my patches, even those with poor commit messages. My last thanks go to the whole GNOME community for making GNOME what it is.

I hope I can make it to the next GUADEC and see all of you again!

The Continuation

I’ll say just this: 787852bc won't be my last patch, although I won't be able to work full time on GNOME. Boxes will probably get some of the promised express-installation scripts. (openSUSE will come; Linux Mint and ArchLinux are on my wishlist.) I also plan to take a further look at the HIG and some more general accessibility issues. For now I'm writing exams, but I'll be back soon!

Evince's annotations: "final" report

Google Summer of Code is officially over, and it's good to look back and see what happened during the summer (even if the work is not over yet!!). I will go ahead and do a list-based post, which helps me keep things clear.

These were nice features that were implemented or fixed and that are already on the latest evince release (yey \o/):

These are patches which are currently under review:
And there is a branch in which I am finishing developing highlight annotations. After several iterations, we decided to make it work with highlight first, so that adding strike out, squiggly, and underline is trivial afterwards. Here's how it is working:


You can see that there is a big delay between dragging the cursor and updating the view with the highlight. It is precisely this problem that I am trying to fix at the moment; hence the quotes around "final" in the title of this post.

Among fixing/reporting bugs and coding, I've met great people along the way and got to know an incredible community. The whole experience was great and I hope it lasts more than this summer :)

Experimenting with Panamax

Disclosure: This blog post is part of the Panamax Template Contest.

In my blog post about the Dell C6100 server I’ve been using, I mentioned that I run a full LXC userland for each application I deploy, and that I’d like to try out Docker but that this setup is in conflict with Docker’s philosophy – a Docker container only runs one process, which makes it difficult to use Docker for anything requiring interaction between processes. Here’s an example: this blog is running WordPress with MySQL. So, with LXC I create a fresh Ubuntu container for the blog and run apt-get install wordpress and I’m up and running, but trying to use Docker would leave me with an “orchestration” problem – if I’m supposed to have a separate web server and database server, how will they figure out how to talk to each other?

If the two Docker services are being run on the same host, you can use docker run --link, which runs one service under a given name and then makes it available to any service it's linked to. For example, I could call a postgres container db and then run something like docker run --name web --link db:db wordpress. The wordpress container receives environment variables giving connection information for the database host, which means that as long as you can modify your application to use environment variables when deciding which database host to connect to, you're all set. (If the two Docker services are being run on separate hosts, you have an "ambassador" problem to figure out.)
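With a link named db as above, Docker's legacy link mechanism injects variables such as DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT into the linked container. A minimal sketch of the application-side change (the fallback values are my own choice, not anything Panamax mandates):

```python
import os

def database_config(environ=os.environ):
    # With `docker run --name web --link db:db ...`, Docker injects
    # DB_PORT_5432_TCP_ADDR / DB_PORT_5432_TCP_PORT into the web
    # container's environment. Reading them here, with a local
    # fallback, is what makes the app relocatable.
    host = environ.get("DB_PORT_5432_TCP_ADDR", "localhost")
    port = int(environ.get("DB_PORT_5432_TCP_PORT", "5432"))
    return host, port

print(database_config({}))  # outside Docker, falls back to ('localhost', 5432)
```

The same function works unchanged whether the app runs linked inside Docker or directly on a developer's machine.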

All of which is a long-winded way to say that Panamax is a new piece of open source software that attempts to ameliorate the pain of solving orchestration problems like this one, and I decided to try it out. It’s a web service that you run locally, and it promises a drag-and-drop interface for building out complex multi-tier Docker apps. Here’s what it looks like when pairing a postgres database with a web server running a Django app, WagtailCMS:

The technical setup of Panamax is interesting. It’s distributed as a CoreOS image which you run inside Vagrant and Virtualbox, and then your containers are launched from the CoreOS image. This means that Panamax has no system dependencies other than Vagrant and Virtualbox, so it’s easily usable on Windows, OS X, or any other environment that can’t run Docker directly.

Looking through the templates already created, I noticed an example of combining Rails and Postgres. I like Django, so I decided to give Django and Postgres a try. I found mbentley’s Ubuntu + nginx + uwsgi + Django docker image on the Docker Hub. Comparing it to the Rails and Postgres template on Panamax, the Django container lacks database support, but does have support for overlaying your own app into the container, which means you can do live-editing of your app.

I decided to see if I could combine the best parts of both templates to come up with a Panamax template for hosting arbitrary Django apps, which supports using an external database and offers live-editing.  I ended up creating a new Docker image, with the unwieldy name of cjbprime/ubuntu-django-uwsgi-nginx-live. This image is based on mbentley’s, but supports having a Django app passed in as an image, and will try to install its requirements. You can also link this image to a database server, and syncdb/migrate will be run when the image starts to set things up. If you need to create an admin user, you can do that inside a docker_run.sh file in your app directory.

After combining this new Docker image with a Postgres container, I’m very happy with how my django-with-postgres template turned out – I’m able to take an existing Django app, make minor changes using a text editor on my local machine to use environment variables for the database connection, start up the Panamax template, and watch as a database is created (if necessary), dependencies are installed, migrations are run, an admin user is created (if necessary), and the app is launched.  All without using a terminal window at any point in the process.

To show a concrete example, I also made a template that bundles the Wagtail Django CMS demo. It’s equivalent to just using my django-with-postgres container with the wagtaildemo code passed through to the live-editing overlay image (in /opt/django/app), and it brings up wagtaildemo with a postgres DB in a separate container. Here’s what that looks like:

Now that I’ve explained where I ended up, I should talk about how Panamax helped.  Panamax introduced me to Docker concepts (linking between containers, overlaying images) that I hadn’t used before because they seemed too painful, and helped me create something cool that I wouldn’t otherwise have attempted.  There were some frustrations, though.  First, the small stuff:

Underscores in container names

This one should have been in big bold letters at the top of the release notes, I think.  Check this out: unit names with _{a-f}{a-f} in them cause dbus to crash. This is amusing in retrospect, but was pretty inscrutable to debug, and perhaps made worse by the Panamax design: there’s a separate frontend web service and backend API, and when the backend API throws an error, it seems that the web interface doesn’t have access to any more detail on what went wrong. I’m lucky that someone on IRC volunteered the solution straight away.

The CoreOS Journal box occasionally stays black

Doing Docker development depends heavily on being able to see the logs of the running containers to work out why they aren't coming up as you thought they would.  In Docker-land this is achieved with docker logs -f <cid>, but Panamax brings the logs into the web interface: remember, the goal is to avoid having to look at the terminal at all.  But it doesn't work sometimes.  There's a panamax ssh command to ssh into the CoreOS host and run docker logs there, but that's breaking the "fourth wall" of Panamax.

Progress bar when pulling Docker images

A minor change: it’d be great to be able to see progress when Panamax is pulling down a Docker image. There’s no indicator of progress, which made me think that something had hung or failed. Further, systemd complained about the app failing to start, when it just needed more time for the docker pull to complete.

Out of memory when starting a container

The CoreOS host allocates 1GB RAM for itself: that’s for the Panamax webapp (written in Rails), its API backend, and any containers you write and launch.  I had to increase this to 2GB while developing, by modifying ~/.panamax/.env:

export PMX_VM_MEMORY=2048

Sharing images between the local host and the container

I mentioned how Panamax uses a CoreOS host to run everything from, and how this drastically reduces the install dependencies.  There’s a significant downside to this design – I want to allow my local machine to share a filesystem and networking with my Docker container, but now there’s a CoreOS virtual machine in the way – I can’t directly connect from my laptop to the container running Django without hopping through the VM somehow. I want to connect to it for two different reasons:

  1. To have a direct TCP connection from my laptop to the database server, so that I can make database changes if necessary.
  2. To share a filesystem with a container so that I can test my changes live.

Panamax makes the first type of connection reasonably easy. There’s a VirtualBox command for doing port forwarding from the host through to the guest – the guest in this case is the CoreOS host. So we end up doing two stages of port forwarding: Docker forwards port 80 from the Django app out to port 8123 on the CoreOS host, and then VirtualBox forwards port 8123 on my laptop to port 8123 on the CoreOS host. Here’s the command to make it work:

VBoxManage controlvm panamax-vm natpf1 rule1,tcp,,8123,,8123

The filesystem sharing is much trickier – we need to share a consistent view of a single directory between three hosts: again, the laptop, the CoreOS host, and the Docker app. Vagrant has a solution to this, which is that it can NFS share a guest OS from the CoreOS host back to my laptop. That works like this, modifying ~/.vagrant.d/boxes/panamax-coreos-box-367/0/virtualbox/Vagrantfile:

  config.vm.network "private_network", ip: "192.168.50.4"
  config.vm.synced_folder "/home/cjb/djangoapp", "/home/core/django",
  id: "django", :nfs => true, :mount_options => ['nolock,vers=3,udp']

So, we tell Panamax to share /opt/django/app with the CoreOS host as /home/core/django, and then we tell Vagrant to share /home/cjb/djangoapp on my laptop with the CoreOS host as /home/core/django over NFS. After `apt-get install nfs-kernel-server`, trying this leads to a weird error:

exportfs: /home/cjb/djangoapp does not support NFS export

This turns out to be because I’m running ecryptfs for filesystem encryption on my Ubuntu laptop, and nfs-kernel-server can’t export the encrypted FS. To work around it, I mounted a tmpfs for my Django app and used that instead. As far as I know, OS X and Windows don’t have this problem.

Summary

Panamax taught me a lot about Docker, and caused me to publish my first two images to the Docker registry, which is more than I expected to gain from trying it out. I’m not sure I’m the target audience – I don’t think I’d want to run production Docker apps under it on a headless server (at least until it’s more stable), which suggests that its main use is as an easy way to experiment with the development of containerized systems. But the friction introduced by the extra CoreOS host seems too great for it to be an awesome development platform for me. I think it’s a solvable problem – if the team can find a way to make the network port forwarding and the filesystem NFS sharing be automatic, rather than manual, and to work with ecryptfs on Ubuntu, it would make a massive difference.

I am impressed with the newfound ability to help someone launch a database-backed Django app without using any terminal commands, even if they’re on Windows and have no kind of dev environment, and would consider recommending Panamax for someone in that situation. Ultimately, maybe what I’ll get out of Panamax is a demystification of Docker’s orchestration concepts. That’s still a pretty useful experience to have.

revisiting common subexpression elimination in guile

A couple years ago I wrote about a common subexpression pass that I implemented in Guile 2.0.

To recap, Guile 2.0 has a global, interprocedural common subexpression elimination (CSE) pass.

In the context of compiler optimizations, "global" means that it works across basic block boundaries. Basic blocks are simple, linear segments of code without control-flow joins or branches. Working only within basic blocks is called "local". Working across basic blocks requires some form of understanding of how values can flow within the blocks, for example flow analysis.
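To make "local" concrete, here is a toy sketch (my own illustration, not Guile's code) of CSE within a single basic block, over three-address instructions that are assumed to be pure and single-assignment:

```python
def local_cse(block):
    """Within one basic block, replace a recomputation of an identical
    (assumed pure) expression with a copy of the earlier result."""
    available = {}   # (op, args) -> variable already holding the value
    out = []
    for dst, op, args in block:
        key = (op, args)
        if key in available:
            # Same pure expression computed before: reuse its result.
            out.append((dst, "copy", (available[key],)))
        else:
            available[key] = dst
            out.append((dst, op, args))
    return out

block = [
    ("t1", "car", ("a",)),
    ("t2", "car", ("a",)),       # duplicate of t1's expression
    ("t3", "add", ("t1", "t2")),
]
print(local_cse(block))
```

A real pass must also invalidate entries when an instruction may write memory that an available expression read, which is exactly the effects problem discussed below.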

"Interprocedural" means that Guile 2.0's CSE operates across closure boundaries. Guile 2.0's CSE is "context-insensitive", in the sense that any possible effect of a function is considered to occur at all call sites; there are newer CSE passes in the literature that separate effects of different call sites ("context-sensitive"), but that's not a Guile 2.0 thing. Being interprocedural was necessary for Guile 2.0, as its intermediate language could not represent (e.g.) loops directly.

The conclusion of my previous article was that although CSE could do cool things, in Guile 2.0 it was ultimately limited by the language that it operated on. Because the Tree-IL direct-style intermediate language didn't define order of evaluation, didn't give names to intermediate values, didn't have a way of explicitly representing loops and other kinds of first-order control flow, and couldn't precisely specify effects, the results, well, could have been better.

I know you all have been waiting for the last 27 months for an update, probably forgoing meaningful social interaction in the meantime because what if I posted a followup while you were gone? Be at ease, fictitious readers, because that day has finally come.

CSE over CPS

The upcoming Guile 2.2 has a more expressive language for the optimizer to work on, called continuation-passing style (CPS). CPS explicitly names all intermediate values and control-flow points, and can integrate nested functions into first-order control flow via "contification". At the same time, the Guile 2.2 virtual machine no longer penalizes named values, which was another weak point of CSE in Guile 2.0. Additionally, the CPS intermediate language enables more fine-grained effects analysis.

All of these points mean that CSE has the possibility to work better in Guile 2.2 than in Guile 2.0, and indeed it does. The shape of the algorithm is a bit different, though, and I thought some compiler nerds might be interested in the details. I'll follow up in the next section with some things that the new CSE pass can do that the old one couldn't.

So, by way of comparison, the old CSE pass was a once-through depth-first visit of the nested expression tree. As the visit proceeded, the pass built up an "environment" of available expressions -- for example, that (car a) was evaluated and bound to b, and so on. This environment could be consulted to see if an expression was already present in the environment. If so, the environment would be traversed from most-recently-added to the found expression, to see if any intervening expression invalidated the result. Control-flow joins would cause recomputation of the environment, so that it only held valid values.
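As a toy model of that environment strategy (a Python sketch of my own with invented names, not Guile's actual Tree-IL pass):

```python
# Toy model of the Guile 2.0-style environment CSE (invented names):
# remember which pure expressions are already bound, reuse the binding
# on a repeat, and drop remembered reads when a write may clobber them.

def cse_bind(expr, var, env):
    """Record that expr was evaluated and bound to var."""
    env[expr] = var

def cse_visit(expr, env):
    """Replace expr with a reference if an equal one is available."""
    if expr in env:
        return ('ref', env[expr])
    return expr

def invalidate_reads(env):
    """A write may clobber any previously remembered read (coarse model)."""
    for key in list(env):
        if key[0] == 'primcall' and key[1] in ('car', 'cdr'):
            del env[key]

env = {}
cse_bind(('primcall', 'car', 'a'), 'b', env)
hit = cse_visit(('primcall', 'car', 'a'), env)    # reuses b
invalidate_reads(env)                             # e.g. after a set-car!
miss = cse_visit(('primcall', 'car', 'a'), env)   # must recompute
```

The real pass also walks back through the environment checking effects per expression; the sketch collapses that into one coarse invalidation step.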

This simple strategy works for nested expressions without complex control-flow. CPS, on the other hand, can have loops and other control flow that Tree-IL cannot express, so for it to build up a set of "available expressions" requires a full-on flow analysis. So that's what the pass does: a flow analysis over the labelled expressions in a function to compute the set of "available expressions" for each label. A labelled expression a is available at label b if a dominates b, and no intervening expression could have invalidated the results. An expression invalidates a result if it may write to a memory location that the result may have read. The code, such as it is, may be found here.
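The flow analysis here is the classic available-expressions dataflow. A minimal fixed-point sketch (illustrative only; Guile's real pass works over CPS labels and real effect sets):

```python
# Minimal available-expressions dataflow over a CFG (invented
# representation). Each block generates a set of expression keys;
# a block that writes memory kills previously available reads.

def available_expressions(blocks, preds, entry):
    """blocks: name -> (gen set, kill set); preds: name -> predecessors."""
    avail = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            if b == entry:
                inn = set()
            else:
                # available only if available on every path into the block
                sets = [avail[p] for p in preds[b]]
                inn = set.intersection(*sets) if sets else set()
            gen, kill = blocks[b]
            out = (inn - kill) | gen
            if out != avail[b]:
                avail[b] = out
                changed = True
    return avail

# A diamond CFG: entry computes (car a); neither branch clobbers it,
# so the join sees it as available.
blocks = {
    'entry': ({('car', 'a')}, set()),
    'left':  (set(), set()),
    'right': (set(), set()),
    'join':  (set(), set()),
}
preds = {'entry': [], 'left': ['entry'], 'right': ['entry'],
         'join': ['left', 'right']}
avail = available_expressions(blocks, preds, 'entry')
```

The dominance condition in the text is subsumed here by the intersection over predecessors: an expression survives to a label only if every path to it computes the expression first.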

Once you have the set of available expressions for a function, you can proceed to the elimination phase. First, you start by creating an "eliminated variable" map, which initially maps each variable to itself, and an "equivalent expressions" table, which maps "keys" to a set of labels and bound variables. Then you visit each expression in a function, again in topologically sorted order. For each expression, you compute a "key", which is some unique representation of an expression that can be compared by structural equality. Keys that compare as equal are equivalent, and are subject to elimination.

For example, consider a call to the add primitive with variables labelled b and c as arguments. Imagine that b maps to a in the eliminated variable table. The expression as a whole would then have a key representation as the list (primcall add a c). If this key is present in the equivalent expression table, you check to see if any of the equivalent labels is available at the current label. If so, hurrah! You mark the outputs of the current label as being replaced by the outputs of the equivalent label. Otherwise you add the key to the equivalent table, associated with the current label.
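The bookkeeping in that example could be sketched like this (invented representation, not Guile's actual data structures):

```python
# Sketch of the elimination phase: arguments are first canonicalized
# through the eliminated-variable map, then the resulting key is
# looked up in the equivalent-expressions table.

def make_key(op, args, eliminated):
    """Build a structural key with variables replaced by their canonical names."""
    return (op,) + tuple(eliminated.get(a, a) for a in args)

eliminated = {'b': 'a'}          # b was earlier replaced by a
equivalent = {}                  # key -> (label, bound variable)

# first occurrence: (primcall add b c) at label L1, binding d
key = make_key('add', ('b', 'c'), eliminated)
equivalent[key] = ('L1', 'd')

# second occurrence: (primcall add a c) at label L2, binding e;
# assume L1 is available at L2 (it dominates L2, nothing clobbers it)
key2 = make_key('add', ('a', 'c'), eliminated)
if key2 in equivalent:
    _, replacement = equivalent[key2]
    eliminated['e'] = replacement     # e is now an alias for d
```

Note how canonicalizing b to a makes the two syntactically different calls produce the same key, which is what makes the elimination recursive.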

This simple algorithm is enough to recursively eliminate common subexpressions. Sometimes the recursive aspect (i.e. noticing that b should be replaced by a), along with the creation of a common key, causes the technique to be called global value numbering (GVN), but CSE seems a better name to me.

The algorithm as outlined above eliminates expressions that bind values. However not all expressions do that; some are used as control-flow branches. For this reason, Guile also computes a "truthy table" with another flow analysis pass. This table records, for each program point, the set of branches that have been taken to reach it. In the elimination phase, if a branch is reached that is equivalent to a previously taken branch, we consult the truthy table to see which continuation the previous branch may have taken. If it can be proven to have taken just one of the legs, the test is elided and replaced with a direct jump.
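A hedged sketch of that branch-elision idea, again with an invented representation:

```python
# If an equivalent test is available and is known (on all incoming
# paths) to have taken one leg, the second test becomes a direct jump.

def elide_branch(test_key, truthy, kt, kf):
    """truthy maps an available test key to True/False when the outcome
    is known on every path to this point; kt/kf are the continuations."""
    outcome = truthy.get(test_key)
    if outcome is True:
        return ('jump', kt)
    if outcome is False:
        return ('jump', kf)
    return ('branch', test_key, kt, kf)

truthy = {('null?', 'x'): True}   # every path took the true leg
taken = elide_branch(('null?', 'x'), truthy, 'Ktrue', 'Kfalse')
unknown = elide_branch(('pair?', 'y'), truthy, 'Kt', 'Kf')
```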

A few things to note before moving on. First, the "compute an analysis, then transform the function" sequence is quite common in this sort of problem. It leads to some challenges regarding space for the analysis; my last article deals with these in more detail.

Secondly, the rewriting phase assumes that a value that is available may be substituted, and that the result would be a proper CPS term. This isn't always the case; see the discussion at the end of the article on CSE in Guile 2.0 about CPS, SSA, dominators, and scope. In essence, the scope tree doesn't necessarily reflect the dominator tree, so not all transformations you might like to make are syntactically valid. In Guile 2.2's CSE pass, we work around the issue by concurrently rewriting the scope tree to reflect the dominator tree. It's something I am seeing more and more and it gives me some pause as to the suitability of CPS as an intermediate language.

Also, consider the clobbering part of analysis, where e.g. an expression that writes a value to memory has to invalidate previously read values. Currently this is implemented by traversing all available expressions. This is suboptimal and could be quadratic in the end. A better solution is to compute a dependency graph for expressions, which links together operations on the same regions of memory; see LLVM's memory dependency analysis for an idea of how to do this.

Finally, note that this algorithm is global but intraprocedural, meaning that it doesn't propagate values across closure boundaries. It's possible to extend it to be interprocedural, though it's less necessary in the presence of contification.

scalar replacement via fabricated expressions

Let's say you get to an expression at label L, (cons a b). It binds a result c. You determine you haven't seen it before, so you add (primcall cons a b) → L, c to your equivalent expressions set. Cool. We won't be able to replace a future instance of (cons a b) with c, because that doesn't preserve object identity of the newly allocated memory, but it's definitely a cool fact, yo.

What if we add an additional mapping to the table, (car c) → L, a? That way, at any point where L is available, an occurrence of (car c) would be replaced with a, which would be pretty neat. To do so, you would have to add the &read effect to the cons call's effects analysis, but since the cons wasn't really up for elimination anyway it's all good.

Similarly, for (set-car! c d) we can add a mapping of (car c) → d. Again we have to add the &read effect to the set-car, but that's OK too because the write invalidated previous reads anyway.
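In the same toy representation as before, the fabricated mappings for cons, car and set-car! could look like:

```python
# Sketch of the fabricated-expression trick: allocating (cons a b) to c
# also records (car c) -> a and (cdr c) -> b, and (set-car! c d)
# replaces the (car c) entry. Invented tables, not Guile's actual code.

equivalent = {}

def record_cons(result, a, b):
    """Fabricate field-read facts for a fresh pair."""
    equivalent[('car', result)] = a
    equivalent[('cdr', result)] = b

def record_set_car(pair, value):
    """The write invalidates the previous read, then fabricates a new one."""
    equivalent[('car', pair)] = value

record_cons('c', 'a', 'b')
lookup1 = equivalent[('car', 'c')]   # a later (car c) folds to a
record_set_car('c', 'd')
lookup2 = equivalent[('car', 'c')]   # after the set-car!, folds to d
```

This is the store-to-load forwarding described below: reads of known fields never touch memory at all.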

The same sort of transformation holds for other kinds of memory that Guile knows how to allocate and mutate. Taken together, they form a sort of store-to-load forwarding and scalar replacement that can entirely eliminate certain allocations, and many accesses as well. To actually eliminate the allocations requires a bit more work, but that will be the subject of the next article.

future work

So, that's CSE in Guile 2.2. It works pretty well. In the future I think it's probably worth considering an abstract heap-style analysis of effects; in the end, the precision of CSE is limited by how precisely we can model the effects of expressions.

The trick of using CSE to implement scalar replacement is something I haven't seen elsewhere, though I doubt that it is novel. To fully remove the intermediate allocations needs a couple more tricks, which I will write about in my next nargy dispatch. Until then, happy hacking!

August 24, 2014

GSoC final report for libical introspection

My original project for GSoC this summer was to introspect the libecal part of Evolution-Data-Server (EDS). The main reason this part is not introspectable is that the external library libical is not introspectable. So my task was essentially to make every type from libical used in libecal a boxed GObject type, and therefore make it introspectable.

However, as we progressed, we came to realize two major issues with this method:

1. Restriction on manipulation of libical objects. If we limit our scope only to EDS, we can only manipulate the libical objects through EDS APIs, which is far from enough.

2. Terrible memory management. If there is a relation between two libical objects, we cannot control the memory very well just by making each type a boxed GObject.

Based on these two reasons, we decided to introspect the whole libical library. Since this code could naturally be part of libical itself, we wanted to put the newly created library under the libical project. But because we did not get a response from the libical developers, and this project is more likely to be used by GNOME, we decided to create an independent library named libical-glib.

For a standard libical type, take icalcomponent for example, a corresponding GObject will be created: ICalComponent in this case.

The component

When we want to retrieve the icalproperty from the icalcomponent, icalcomponent_get_first_property is called. But besides calling the function, we also create a new object named property, of type ICalProperty, and bind it to the component.

The relation between property and component

Now let’s see the benefits of this strategy.

1. When the icalproperty object will be freed by the icalcomponent object.

Let's first assume the red line is not there. When the property is freed, the native part of the property is freed; but that native part is also owned by the icalcomponent object, which is owned by the component GObject. So when the component is later freed, it frees its own native part, which in turn frees the icalproperty object, by now a piece of trash memory, and that causes an error. The same issue exists if the component is freed first.

If property takes a reference on component, it's a different story. When property is freed, we can tell whether we should free the native part by testing whether its parent is NULL. If it is, we go ahead and free it, since the native part also has no parent. If not, we just set the native pointer to NULL, unref the parent, and free the wrapper object. When the ref count of component reaches 0 in GObject memory management, it is freed, and the free-memory chain is triggered: component -> the icalcomponent object -> the icalproperty object. No errors pop up. The situation where the component is freed first is similar.

2. When the icalproperty object will not be freed by the icalcomponent object.


When the property is unreffed, its ref count won't reach zero, since the component still holds one reference. When the ref count of the component reaches 0, the parent and native part still follow the pattern stated in the first case. Besides that, all the dependers are unreffed, and GObject memory management does the rest of the work itself.

3. For the bare structure.

There are some structs that are created on the stack instead of the heap, like icaltimetype in libical. When dealing with this kind of struct, the corresponding GObject's native part holds a pointer to an object created on the heap. It can then be used like the other common structs, following the rules defined in Case 1 and Case 2.
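The ownership rule from Cases 1 and 2 can be modeled in a few lines of Python (a toy model of my own, not the actual GObject C code; all names invented):

```python
# Toy model of the wrapper ownership rule: a wrapper frees its native
# pointer only when it has no parent; otherwise the parent's native
# object owns that memory, and the child just drops its parent ref.

class Wrapper:
    def __init__(self, native, parent=None):
        self.native = native
        self.parent = parent
        self.refcount = 1
        if parent is not None:
            parent.refcount += 1      # child keeps the parent alive

    def unref(self, freed):
        self.refcount -= 1
        if self.refcount == 0:
            if self.parent is None:
                freed.append(self.native)   # we own the native memory
            else:
                self.native = None          # parent's native part owns it
                self.parent.unref(freed)

freed = []
component = Wrapper('icalcomponent')
prop = Wrapper('icalproperty', parent=component)
prop.unref(freed)
freed_after_prop = list(freed)   # nothing freed yet: component holds a ref
component.unref(freed)           # now the free-memory chain is triggered
```

Whichever side is unreffed first, the native icalcomponent (and, in the real library, everything it owns) is freed exactly once.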

Based on the observation that most of the APIs in libical follow the same pattern, and as recommended by Milan, we adopted a method that generates the whole library from a bunch of XML files, a parser and a generator. The XML files define the structs and their APIs; the parser parses the XML files; and the generator generates the whole library. By using default values and rules, the XML files can be much more lightweight than a hand-coded library.

I finished a demo part of libical-glib during GSoC this year, and you can take a look at it here: https://github.com/william7777/libical-glib.

The to-do list for this project:

1. Use the abstract struct ICalObject and make all structs inherit from it. This will make the code clearer and more object-oriented.

2. Fine tune all the XMLs I have finished.

3. Restructure the parser and generator. I intend to put all the definitions and defaults in one place, and reduce the interconnections between the different parts as much as possible, so that they are more independent and the whole project is easier to maintain.

Finally, I want to give my best thanks to my mentors Milan Crha, Fabiano Fidêncio and Philip Withnall. Whenever I need help, they always step in and patiently give me the best guidance, even when they are busy. Thank you very much!

GSoC has ended, but the project has not, and neither have my passion for and contributions to GNOME. I believe I will finish this project and contribute more and more to this wonderful community.

 


GSoC Final Report

Since last report I have done some fixes and updates in the module and looked for the best way to load the module.

Fixes
I have resolved some memory leaks and updated the build system.

Multiple sessions
I changed the module to support more than one session at a time; it was faking multiple sessions to users while actually using only one.
I settled on a mechanism limiting the number of sessions to 16; this is not a hard restriction at all, and it can easily be changed to a mechanism with unlimited sessions.

Trust
NSS relies on some objects to check for trust in a certificate. These objects are created by NSS when a trust relationship is added. Previously, certificates exposed by the module were not trusted; the user needed to set a trust relation with each certificate manually.
Now the module mimics these objects for NSS, so that all certificates found by the module are trusted to protect e-mail communication.

Loading the module
There are some ways the module can be loaded by an application: using p11-kit, libgck, or NSS. A particularity is that the module will be loaded by Evolution, but Evolution won't actually use it. The real user of the module is NSS, which performs the crypto operations. So Evolution needs to load it and make it available to NSS.
I could not find a way to do it with libgck. To do it via p11-kit you can use pkcs11-proxy.so, and with NSS it can be done directly via SECMOD_LoadUserModule(). To keep things simple for now, I'm going with the NSS method.

Who does the loading?
Evolution, or Evolution Data Server,  needs to load the module at some point. For testing purposes I created a patch to load the module in Evolution (the most appropriate place may be in EDS). And I’m happy to say that the module can do its job.

You finished the module?

The core of the module is finished, it is able to perform the task it was built for.

  • To expose X509 Certificate from Evolution Contacts;
  • To allow sending of encrypted mail using a certificate retrieved from a contact in Evolution.

But it's a PKCS#11 module; lots of applications can use it in different ways, and the module needs to survive whatever situation it comes to face.
For example, it did not work correctly with p11tool: when p11tool tried to list all objects, the module would return no objects at all. Situations like these will be corrected as they are found.

There are some issues I know that may affect the module, like:

  • Certificate Decoding, it is done using SEC_QuickDERDecodeItem(), but I still have doubts about its robustness;
  • NSS trust: although NSS trusts the certificate, it only trusts it for real use when the trust is set through the UI.

GSoC period has come to its end, but work is never over. I hope to keep working on the module and contributing to the community.

Thanks to Google and GNOME organizations for supporting this work. And thank you David Woodhouse for mentoring and helping me on this project.


Open Flight Controllers

In my last multirotor themed entry I gave an insight into the magical world of flying cameras. I also gave a bit of a promise to write about the open source flight controllers that are out there. Here are a few that I had the luck of laying my hands on. We'll start with some acro FCs, with a very different purpose from the proprietary NAZA I started on. These are meant for fast and acrobatic flying, not for flying your expensive cameras on a stabilized gimbal. Keep in mind, I'm still fairly inexperienced, so I don't want to go into specifics and provide my settings just yet.

Blackout: Potsdam from jimmac on Vimeo.

CC3D

The best thing to be said about the CC3D is that while it's aimed at acro pilots, it's relatively newbie friendly. The software is fairly straightforward. Getting the QT app built, setting up the radio, tuning motors and tweaking gains is not going to make your eyes roll in the way APM's ground station would (more on that in a future post, maybe). The defaults are reasonable and help you achieve a maiden flight rather than a maiden crash. Updating to the latest firmware over the air is seamless.

A large number of receivers and connection methods is supported. Not only the classic PWM, or the more reasonable "one cable" CPPM method, but even Futaba's proprietary SBUS can be used with the CC3D. I've flown it with a Futaba 8J, a 14SG and even the Phantom radio (I actually quite like the compact receiver, and the sticks on the TX feel good. Maybe it's just that it's something I started on). As you're gonna be flying proximity mostly, range is not an issue, unless you're dealing with external interference, where a more robust frequency-hopping radio would be safer. Without a GPS "brake" or even a barometer, losing signal for even a second is fatal. It's extremely nasty to get a perfect 5.8 video of your unresponsive quad plummeting to the ground :)

Overall a great board and software, and with so much competition, the board price has come down considerably recently. You can get non-genuine boards for around EUR20-25 on ebay. You can learn more about the CC3D on the OpenPilot website.

Naze32

Sounding very similar to the popular DJI flight controller, this open board is built around a 32-bit STM32 processor. Theoretically it could be used to fly somewhat larger kites with features like GPS hold. You're not limited to the popular quad or hexa setups with it either; you can go really custom by defining your own motor mix. But you'd be stepping into the realm of only a few, and I don't think I'd trust my camera equipment to a platform that hasn't been so extensively tested.

Initially I didn't manage to get the cheap acro variant ideal for the minis, so I got the "bells & whistles" edition, only missing the GPS module. The mag compass and air-pressure barometer are already on the board, even though I found no use for altitude hold (BARO). You're still going to worry about momentum and wind, so reaching for those goggles mid-flight is still not going to be any less difficult than just having it stabilized.

If you don't count some youtube videos, there's not a lot of handholding for the Naze32. People assume you have prior experience with similar FCs. There are multiple choices of configuration tools, but I went for the most straightforward one: the Google Chrome/Chromium Baseflight app. No compiling necessary. It's quite bare-bones, which I liked a lot. A few reasonably styled, aligned boxes and a CLI are way easier to navigate than the non-searchable, bubblegum-styled table that APM provides, for example.

One advanced technique that caught my eye is ESC calibration, as the typical process is super flimsy and tedious. To set the full range of speeds based on your radio, you usually need to make sure to provide power to the RX, and set the top and bottom throttle levels for each ESC individually. With this FC, you can actually set the throttle levels from the CLI, calibrating all ESCs at the same time. Very clever and super useful.

Another great feature is that you can have up to three setting profiles, depending on the load, wind conditions and the style you’re going for. Typically when flying proximity, between trees and under park benches, you want very responsive controls at the expense of fluid movement. On the other hand if you plan on going up and fast and pretend to be a plane (or a bird), you really need to have that fluid non-jittery movement. It’s not a setting you change mid-flight, using up a channel, but rather something you choose before arming.

To do it, you hold the throttle down and yaw to the left, and with the elevator/aileron stick you choose the mode. Left is for preset 1, up is for preset 2 and right is for preset 3. Going down with the pitch will recalibrate the IMU. It's good to solder on a buzzer that will help you find a lost craft when you trigger it with a spare channel (it can beep on low voltage too). The same buzzer will beep when selecting profiles as well.

As for actual flying characteristics, the raw rate mode, which is a little tricky to master (and I still have trouble flying 3rd person with it), is very solid. It feels like a much larger craft, very stable. There's also quite a feature in the form of HORI mode, where you get stabilized flight (the kite levels itself when you don't provide controls) but no limit on the angle, so you're still free to do flips. I can't say I've mastered PID tuning to really get the kind of control over the aircraft I would want. Regardless of tweaking the control characteristics, you won't get a nice fluid video flying HORI or ANGLE mode, as the self-leveling will always do a little jitter to compensate for wind or inaccurate gyro readings, which isn't there when flying rate. Stabilizing the footage in post gets rid of it mostly, but not perfectly:

Minihquad in Deutschland

You can get the plain acro version for about EUR30 which is an incredible value for a solid FC like this. I have a lot of practice ahead to truly get to that fluid fast plane-like flight that drew me into these miniquads. Check some of these masters below:

APM and Sparky next time. Or perhaps you’d be more interested in the video link instead first? Let me know in the comments.

Update: Turns out the Naze32 supports many serial protocols apart from CPPM, such as Futaba SBUS and Graupner SUMD.

August 22, 2014

GNOME documentation video is out

The GNOME Documentation Video has now been released on youtube and as a download (Ogg Theora + Vorbis). This is something I have been waiting for since I finished working on it a few weeks ago. A big thanks to Karen for providing a great voice-over for the second time! Translated subtitles are not online just yet for the video, but should come within the next few days (thanks to pmkovar and claude  for setting this up!).


I’m looking forward to seeing the response from the community. I read every comment made and try to collect and direct them to the right people.

The next upcoming GNOME video from me will be the 3.14 Release Video. Since the release date is approaching, I'm already working on the manuscript, and hopefully I can send it off to Karen soon for voice-over. Alexander Larsson was also kind enough to send over one of the GNOME Chromebook Pixels I had signed up for. Provided that I can get Fedora 21 with 3.13 installed on it, I might be able to record video at a 2560×1700 resolution. This has some technical advantages, like green-screening video, throwing it into Blender's 3D space, and other crazy things (woohoo!).

In the meantime I’m also messing around with some design for Polari and a GNOME flyer I intend to hand out to the new students at my university.  I’m predicting September to be a busy but fun month.

August 21, 2014

Hanging up the hat

Hello. It’s been quite a while. I’ve been meaning to post for a while, but I’ve been too busy trying to get GNOME 3.14 finished up, with Wayland all done for you. I also fixed the last stability issue in GNOME, and now both X11 and Wayland are stable as a rock. If you’ve ever had GNOME freeze up on you when switching windows or Alt-Tabbing, well, that’s fixed, and that was actually the same bug that was crashing Wayland. This was my big hesitation in shipping Wayland, and with that out of the way, I’m really happy. Please try out GNOME 3.13.90 on Wayland and let me know how it goes.

I promise to post a few more Xplain articles before the end of the year, and I have another blog post coming up about GPU rendering that you guys are going to enjoy. Promise. Even though, well…

Changes

I have a new job. Next Tuesday, the 26th, is my final day at Red Hat, and after that I’m going to be starting at Endless Mobile. Working at Red Hat has been a wonderful, life-changing experience, and it was a really hard decision to leave the incredible team that made GNOME what it is today. Thank you, each and every one of you. All of you are incredible, and I hope I keep working with you every single day. It would be an absolute shame if I wasn’t.

Endless Mobile is a fantastic new startup that is focused on shipping GNOME to real end users all across the world, and that’s too exciting of an opportunity to pass by. We, the GNOME community, can really improve the lives of people in developing countries. Let’s make it happen.

I’ll still be around on IRC, mailing lists, reddit, blogging, all the usual places. I’m not planning on leaving the GNOME community. If you have any questions at all, feel free to ask.

Cheers,
  Jasper

Final GSOC Report: Revamping the UI of Gnome-Calculator and Implementing History-View

Hi  Everybody,

The amazing Google Summer of Code has finally come to an end, and I feel very lucky to have had the opportunity to be a part of it. I learned a lot of new things, and had a lot of fun this summer doing what I like most! Let me summarize what I have successfully implemented so far in Gnome-Calculator.

1. History View which displays previous calculations for modification and reuse.

2. Explicit Keyboard Mode with resizing capability.

3. Improved the appearance of the user interface.

Although GSOC has come to an end, my journey as a GNOME Developer has just begun. I plan to continue contributing to Gnome as long as I can and also try to make Gnome-Calculator as advanced as possible.

You can find all the patches related to my GSOC project here.

I would like to sincerely thank my mentor Arth Patel, as well as Garima Joshi and Michael Catanzaro, for patiently answering the many queries I had during the GSoC period and for helping me to complete my GSoC project successfully.

Cheers!


GSoC Final Report - Use GNOME Keysign to sign OpenPGP keys

I haven't blogged about my work progress lately, but I have implemented most of the features that were planned when GSoC started.

Short description of the project
My project was to make a tool that will help people with the OpenPGP keysigning[1] process.
Geysign will take care of all the steps a user must take to get his key signed or to sign another person's key, by automating the process while following OpenPGP best practices.



What Geysign currently does:

  • Displays your personal keys, from which you can choose one at a time to get signed.
  • Encodes the selected key's fingerprint into a barcode that can be scanned. The fingerprint can also be typed if your device has no video camera available.
  • Uses avahi to publish itself on the network and to discover other Geysign services. This allows a "plug and play", straightforward process (otherwise the user would have to get the IP address and port).
  • When requested, starts a local HTTP server that will listen for new connections to download the public key data.
  • Authenticates the received key data by checking whether the scanned/typed fingerprint is the same as the one from the key (which was previously imported into a temporary keyring).
  • If the two fingerprints match, it will proceed to sign the key and export it.
  • Emails the key back to its owner (not yet implemented).

The last point will be done after GSoC, as will giving the app a new GUI. Until now I have cared more about the functionality of the app and less about how it looks.

Here is a short demo that shows the process of signing a key using two Geysign applications on the same machine (unfortunately I didn't have a working network with two computers).
You can check the code on git repository [2]. I am glad to see people are interested in this as I already received a pull request from someone who wants to contribute to Geysign.





On this occasion I also want to thank GNOME for sponsoring my accommodation at GUADEC; I enjoyed being there for the first time.

I also want to mention my mentor Tobias Mueller and thank him for the help he gave me, I learnt a lot from him. 
My work on Geysign will continue and I hope that in the future it will be integrated into Seahorse.

[1] Open PGP Web of Trust : https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust
[2] Git repo: https://github.com/andreimacavei/geysigning

New Human Interface Guidelines for GNOME and GTK+

hig-graphic-940

I’ve recently been hard at work on a new and updated version of the GNOME Human Interface Guidelines, and am pleased to announce that this will be ready for the upcoming 3.14 release.

Over recent years, application design has evolved a huge amount. The web and native applications have become increasingly similar, and new design patterns have become the norm. During that period, those of us in the GNOME Design Team have worked with developers to expand the range of GTK+’s capabilities, and the result is a much more modern toolkit.

It has been a long road, in which we have done a lot of testing and prototyping before incorporating new features into GTK+. As a result of that work, GTK+ provides a contemporary lexicon to draw on when designing and implementing applications, including header bars, popovers, switches, view switchers, linked and specially styled buttons, and much more.

There is a downside to all the experimentation that has been happening in software design in recent years, of course – it can often be a bewildering space to navigate. This is where the HIG comes in. Its goal is to help developers and designers take advantage of the new abilities at their disposal, without losing their way in the process. This is reflected in the structure of the new HIG: the guidelines don’t enforce a single template on which applications have to be based, but presents a series of patterns and elements which can be drawn upon. Each of these is accompanied by advice on when each pattern is appropriate, as well as alternatives that can be considered.

The HIG is also designed so that it can grow and evolve over time. The initial version that I have been working on covers the essentials, and there is a lot more ground to be covered. We want to assist people in finding the design that best fits their needs, and we want to make a whole range of creative solutions available.

In writing the HIG, I’ve made an effort to produce a document that is as useful to as many people as possible. While there is an emphasis on integration with GNOME 3, there should be useful material for anyone using GTK+ to create applications. It includes guidelines on creating more traditional desktop applications as well as newfangled ones, and includes advice for those who are responsible for GNOME 2 style apps. Likewise, the new HIG includes guidance on how to design effective cross-platform applications.

The new HIG wouldn’t have been possible without the help and hard work of many individuals. It incorporates updated material from the previous version, which was written by Seth Nickell, Calum Benson, Bryan Clark, Adam Elman, and Colin Robertson, many of whom recently helped us to relicense the original HIG.

Credit also has to go to those people who designed and developed all the new capabilities that are documented in the new HIG, including Jon McCann and Jakub Steiner on the design side, as well as the developer/designers who helped to test new patterns and add new capabilities to GTK+ – Cosimo Cecchi, Matthias Clasen, Carlos Garnacho, Alexander Larsson, Benjamin Otte, and many others.

I’d also like to thank the GNOME Documentation Team for their advice and assistance with proofreading.

This being the initial release, I’d love to hear feedback, and I’m sure that there’s plenty to be improved. If you’re interested, you can clone gnome-devel-docs and read the development version using Yelp.

August 20, 2014

Three Tricks in Xamarin Studio

I wanted to share three tricks that I use a lot in Xamarin Studio/MonoDevelop.

Trick 1: Navigate APIs

Xamarin Studio's code completion for members of an object defaults to showing all the members sorted by name.

But if you press Control-space, it toggles the rendering and organizes the results. For example, for this object of type UIWindow, it first lists the methods available for UIWindow sorted by name, and then the cluster for its base class UIView:

This is what happens if you scroll to the end of the UIWindow members:

Trick 2: Universal Search

Use the Command-. shortcut to activate the universal search. Once you do this and start typing, it will find matches for both members and types in your solution, as well as IDE commands and the option to perform a full-text search:

Trick 3: Dynamic Abbreviation Completion

This is a feature that we took from Emacs's Dynamic Abbrevs.

If you press Control-/ while typing some text, the editor will try to complete what you are typing based on strings found in your project that start with the same prefix.

Hit Control-/ repeatedly to cycle over the possible completions.
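For the curious, the core idea behind dynamic abbreviation completion is simple enough to sketch in a few lines. This is a rough illustration of the concept only, not Xamarin Studio's actual implementation: collect every word in the buffer sharing the typed prefix, then cycle through them.

```python
import re

def dabbrev_candidates(buffer_text, prefix):
    """Collect unique words in the buffer that start with `prefix`,
    in order of appearance, excluding the bare prefix itself.
    Repeated key presses would cycle through this list."""
    words = re.findall(r"\w+", buffer_text)
    seen, result = set(), []
    for w in words:
        if w.startswith(prefix) and w != prefix and w not in seen:
            seen.add(w)
            result.append(w)
    return result

text = "The renderer calls render_frame and render_scene before rendering."
print(dabbrev_candidates(text, "render"))
```

Each Control-/ press in the editor corresponds to stepping to the next candidate in such a list.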

Bring Your Own Device, But Can't Touch Them

The issue of BYOD (bring your own device) certainly has challenges for IT professionals.  Putting on one hat, you can easily see that it's wonderful to allow users to be productive with their personal tablets.  The other hat comes from years of experience, and knowing that they could be a support nightmare.  In enterprise IT, much of what you do is work toward having a consistent hardware base, to ease upgrades and reduce the difficulties that arise from diverse hardware.  BYOD is exactly the other end of the spectrum: there are thousands of hardware and operating system possibilities, and end users often don't understand why their own personal $200-$500 purchase decision doesn't work.

The IT Director has crafted a new City policy, which includes a description of BYOD in great detail.  The overview is that they are allowed, and that no IT resources will be allocated to making them work or troubleshooting problems.

With all of that said, how then to deploy NX technology to tablets?  Users want to use their own tablets to connect to our GNOME desktops, but we cannot touch the hardware.  Users can download the NoMachine NX client, but they do not have the right key pair, and there are settings and optimizations that would be difficult for them to do on their own.  So we can't touch the devices, and it's not secure to email users the settings and key pair.  We kicked around some ideas and decided the best approach was to allow users to connect their Apple and Android tablets to City workstations via USB and then run a small program that mounts the device and installs the .nxs and .cfg files needed to make it work as expected.  This process is initiated by the user via an icon, and they accept a dialog alerting them that there is no support in the event of problems or failure.

Once this R&D project was approved, I started looking at tablets.  Android tablets mount pretty easily with go-mtpfs and Apple tablets can use ifuse.  I was then able to get to the NX settings folders of both types of devices on the command line and built a platform-specific tarball.  I then created a simple Glade UI that requires users to ACCEPT the notice statement (the UI is seen below).  This software runs on the workstation (not the server), downloads the tarball and performs the install of the settings.  So far so good, and it's working on all devices in my test bed.


It was simple enough to add a tab that displays a list of tablets that are known to work, and this is downloaded at runtime with the most recent additions. 


When the current settings profiles are built on the server prior to download, they are date stamped (YYYYMMDD) so that users can easily tell the date of their files right from the NX connection manager.  In the shot below, the UI has been used to install our three profiles and they are displayed correctly.
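The date stamping described above is just string formatting at build time. Here is a minimal sketch of how such stamping might look; the profile names are hypothetical, invented for illustration:

```python
from datetime import date

def stamp_profile(base_name, build_date=None, ext="nxs"):
    """Append a YYYYMMDD stamp to a profile name so users can read
    the build date straight from the NX connection manager."""
    d = build_date or date.today()
    return "{}-{:%Y%m%d}.{}".format(base_name, d, ext)

# hypothetical profile names, for illustration only
for name in ("city-desktop", "city-apps", "city-kiosk"):
    print(stamp_profile(name, date(2014, 8, 18)))
```

A user seeing "city-desktop-20140818.nxs" in the connection manager immediately knows which build of the profile they have.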



We have a few users now testing NX technology with Apple iPads, and the feedback so far is promising.  In my next blog, I'll describe the user experience of NX and the GNOME desktop via a tablet designed for touch software.


Five Cross Platform Pillars

The last couple of years have been good to C# and .NET, in particular in the mobile space.

While we started just with a runtime and some basic bindings to Android and iOS back in 2009, we have now grown to provide a comprehensive development stack: from the runtime, to complete access to native APIs, to designers and IDEs and to a process to continuously deliver polish to our users.

Our solution is based on a blend of C# and .NET as well as bindings to the native platform, giving users a spectrum of tools they can use to easily target multiple platforms without sacrificing quality or performance.

As the industry matured, our users found themselves solving the same kinds of problems over and over. In particular, many problems related to targeting multiple platforms at once (Android, iOS, Mac, WinPhone, WinRT and Windows).

By the end of last year we had identified five areas where we could provide solutions for our users. We could deliver a common framework for developers, and our users could focus on the problem they are trying to solve.

These are the five themes that we identified.

  • Cross-platform UI programming.
  • 2D gaming/retained graphics.
  • 2D direct rendering graphics.
  • Offline storage, ideally using SQLite.
  • Data synchronization.

Almost a year later, we have now delivered four out of the five pillars.

Each one of those pillars is delivered as a NuGet package for all of the target platforms. Additionally, they are Portable Class Libraries, which allows developers to create their own Portable Class Libraries on top of these frameworks.

Cross Platform UI programming

With Xamarin 3.0 we introduced Xamarin.Forms, which is a cross-platform UI toolkit that allows developers to use a single API to target Android, iOS and WinPhone.

Added bonus: you can host Xamarin.Forms inside an existing native Android, iOS or WinPhone app, or you can extend a Xamarin.Forms app with native Android, iOS or WinPhone APIs.

So you do not have to take sides on the debate over 100% native vs 100% cross-platform.

Many developers also want to use HTML and Javascript for parts of their application, but they do not want to do everything manually. So we also launched support for the Razor view engine in our products.

2D Gaming/Retained Graphics

Gaming and 2D visualizations are an important part of applications that are being built on mobile platforms.

We productized the Cocos2D API for C#. While it is a great library for building 2D games (and many developers build their entire experiences with this API), we have also extended it to allow developers to spice up an existing native application.

We launched it this month: CocosSharp.

Offline Storage

While originally our goal was to bring Mono's System.Data across multiple platforms (and we might still bring this as well), Microsoft released a cross-platform SQLite binding with the same requirements that we had: NuGet and PCL.

While Microsoft was focused on the Windows platforms, they open sourced the effort, and we contributed the Android and iOS ports.

This is what powers Azure's offline/sync APIs for C#.

In the meantime, there are a couple of other efforts that have also gained traction: Eric Sink's SQLite.Raw and Frank Krueger's sqlite-net which provides a higher-level ORM interface.

All three SQLite libraries provide NuGet/PCL interfaces.

Data Synchronization

There is no question that developers love Couchbase. A lightweight NoSQL database that supports data synchronization via Sync gateways and Couchbase servers.

While Couchbase used to offer native Android and iOS APIs and you could use those, the APIs were different, since each API was modeled/designed for each platform.

Instead of writing an abstraction to isolate those APIs (which would have been just too hard), we decided to port the Java implementation entirely to C#.

The result is Couchbase Lite for .NET. We co-announced this development with Couchbase back in May.

Since we did the initial work to bootstrap the effort, Couchbase has taken over the maintenance and future development duties of the library and they are now keeping it up-to-date.

While this is not yet a PCL/NuGet, work is in progress to make this happen.

Work in Progress: 2D Direct Rendering

Developers want access to a rich drawing API: sometimes to build custom controls, sometimes to draw charts, and sometimes to build entire applications on top of a 2D rendering API.

We are working on bringing the System.Drawing API to all of the mobile platforms. We have completed an implementation of System.Drawing for iOS using CoreGraphics, and we are now working on Android and WinPhone implementations.

Once we complete this work, you can expect System.Drawing to be available across the board as a NuGet/PCL library.

If you cannot wait, you can get your hands on the Mac/iOS version today from Mono's repository.

Next Steps

We are now working with our users to improve these APIs. But we won't stop at the API work; we are also adding IDE support to both Xamarin Studio and Visual Studio.

Thinking Fondly of GUADEC

It’s been a really long time since I’ve blogged, but Oliver Propst is here in New York, and since I’ve been telling him about GUADEC I realized I should write it all down!

Getting to GUADEC was very exciting for me as I finished my talk at OSCON and then ran straight to the airport in order to make my flight. Unfortunately this meant that I missed the first day of GUADEC in addition to the all day board meeting the day before. All of the travel was worth it when the bus pulled into the station in Strasbourg to find Rosanna and Sri waiting for me! We walked over to the bar gathering and it was fantastic to see everyone and catch up and I was immersed in GUADEC all over again.

It was really fun to be at GUADEC and definitely a different experience than as Executive Director. There were so many great talks that it was often hard to choose between the two tracks. I loved volunteering to help with sessions and felt pretty privileged to introduce two of the keynotes: Nate Willis and Matthew Garrett. Nate spoke about automotive software with the cool narrative of hacking his own car. I loved that he tied it all back to GNOME with practical recommendations for the community. Matthew gave an incredibly inspirational talk about GNOME and its future. I highly recommend watching the video when it comes out if you didn’t get a chance to see it in person. I think we’ll have a lot to talk about over the next year and a lot of work ahead of us too.

I spoke about what I learned as Executive Director of GNOME. It was nice to reflect over the years I spent in the role and also to provide some recommendations going forward. The GNOME community is exceptional and if we can prioritize attracting newcomers and communicating better about why we do what we do we’ll be unstoppable. I proposed that we have technical evangelists for GNOME so that we have the ability to appoint our most articulate and charismatic community members as representatives. I think the GNOME community needs to go to companies and talk to them about GNOME and help them with their GNOME usage (or potential GNOME usage). Happily two extraordinary people volunteered after my talk so we’ll see!

All of the board meetings were a bit grueling but I think good discussions were had. And the marketing hackfest was fun and productive as usual.

I would be remiss if I didn’t mention all of the hard work of Alexandre and Natalie, who made GUADEC run so smoothly, even in a venue that they had to scramble to arrange when the original venue fell through at the last minute. Happily, Alexandre was the winner of the coveted Pants Award this year, so we had multiple opportunities for our community to express our gratitude.

I also had a blast shining the bright light of truth on the Swedish Conspiracy. And I’m looking forward to GUADEC in Gothenburg too!

Thanks to the GNOME Foundation for sponsoring my travel!

August 19, 2014

Final thoughts on my OPW experience

It’s official! GTG’s calendar plugin is finally ready! Just in time, since today is the last day of my OPW internship.
My pull request is under review, and (hopefully) it will be merged soon into the main branch.

If you are interested, as soon as the merge is complete, you’ll be able to update your GTG version and play around with the plugin. If you do, please let me know what you think: whether you like it or not, if you have any suggestions, or if you find a bug (in which case, please file a bug report).

It’s hard to show all the functionalities in a gif, but here is a small sample of what you can expect from the plugin:

calendar_view_end

I had a great time working as an OPW intern and with GTG, and I hope you can benefit from my work.

Thank you to GNOME and all the OPW organizers for making this all happen! And special thanks to Izidor Matušov for mentoring me during this period! I’ve learned sooo much and had a lot of fun! =]

PS: If you are a woman and are thinking about whether or not you should apply for the next round of OPW, my advice is: TOTALLY DO! I never imagined I could do all that I did when I first thought about applying, and I almost gave up thinking I could never work with open source. I’m glad I didn’t, because it was a great and rewarding experience.

August 18, 2014

OPW final report - Greek Translation

Finally, my first contribution to the GNOME Project within the OPW came to an end!!! I can say it was more difficult than expected and required much learning, reading and research for a simple translator like me who has no programming knowledge. 
The main job of the project was to translate and review all GNOME project files in Greek. The Greek translation of GNOME was at a good level, thanks to the work done in previous years by the members of the Greek community. After discussion with the community and my mentor, we found it useful to first do research on available translation programs and to write a quick guide on how to start using GNOME's translation system.

In order to deliver uniformity, we standardized terms and created a new glossary from scratch, which took some extra time but is an important tool for all future translations. Also, as a new translator, I wanted a tool to help me search for our glossary terms among the files. The handy Open-tran.eu has closed, and "grep"-ing po files wasn't easy or efficient for me.
So along with the glossary, we came up with the idea of setting up a pylyglot instance for our language. That was something new and experimental for the Greek GNOME community, but it proved handy, as it helped us speed up the review and translation process. We could easily check the translated files and the consistency of the whole GNOME environment by simply searching for specific terms from the glossary on our pylyglot website.
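The kind of term lookup described above boils down to matching glossary terms against msgid/msgstr pairs. Here is a toy sketch of the idea (not pylyglot's actual code, and it only handles simple single-line po entries; the sample strings are made up):

```python
import re

def find_term(po_text, term):
    """Return (msgid, msgstr) pairs whose msgid contains `term`.
    A toy parser that only understands single-line po entries."""
    pairs = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', po_text)
    return [(mid, mstr) for mid, mstr in pairs
            if term.lower() in mid.lower()]

sample = '''msgid "Folder"
msgstr "Φάκελος"
msgid "File"
msgstr "Αρχείο"
msgid "Folder name"
msgstr "Όνομα φακέλου"
'''
print(find_term(sample, "folder"))
```

Running such a search across all translated files makes it easy to spot where a glossary term has been rendered inconsistently.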

Along the way, and after communication with my mentor and the other members of the community, we decided to modify the initial time schedule, aiming for precision and consistency, especially regarding the new release files of GNOME 3.14, which I believe was largely achieved.

Many thanks to my mentor Efstathios Iosifidis, Tom Tryfonidis and all the other team members who helped me in this effort!!!
I can say the journey with GNOME so far has been exciting all the way.

Thank you GNOME!

GSoC Final Report

Now that GSoC is officially finished here is my final report on porting GNOME Calculator to MPFR.

MPFR

I successfully ported GNOME Calculator to use the MPFR library. This allows for more precise calculations. The precision is right now set to 1000 bits. This is changeable through an added gsettings key “precision”. Time will tell if 1000 is too low or too high and if we should change it in a later version. Also, you can now have more than 9 digits after the decimal point. The settings dialog now allows the user to set it to higher numbers, for example 20:

Settings dialog of GNOME Calculator

Please excuse the missing icons, I didn’t include gnome-icon-theme in my .jhbuildrc.
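To put the 1000-bit default in perspective: a binary precision of n bits corresponds to roughly n × log10(2) decimal digits. MPFR itself is a C library, so this quick Python back-of-the-envelope only illustrates the bits-to-digits relationship, not the port itself:

```python
import math

def bits_to_decimal_digits(bits):
    """Approximate decimal digits provided by a binary precision:
    digits ≈ bits * log10(2)."""
    return int(bits * math.log10(2))

print(bits_to_decimal_digits(1000))  # the new default: about 301 decimal digits
print(bits_to_decimal_digits(53))    # an IEEE double: about 15 digits
```

So the "precision" gsettings key at 1000 bits gives around 301 significant decimal digits, far beyond what a plain double offers.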

Error reporting

Errors now also include overflow and underflow errors reported by MPFR. I also changed the parser to always check for errors. If an error occurs, the offending part of the equation gets highlighted. Here is a screenshot where the user divides by zero:

Dividing by zero error

Conclusion

I really enjoyed participating in Google Summer of Code. It was a very nice experience, and contributing to GNOME is awesome :). I (hopefully) will continue to contribute to GNOME Calculator and maybe other parts of GNOME.

Final Week of GSOC : Revamping the UI of Gnome-Calculator and adding History-View

Hi Everybody,

We have introduced a new mode into Gnome-Calculator: Keyboard Mode. As its name suggests, it is designed for users who prefer to give input to the calculator only from the keyboard. This mode does not include the button panel; instead, the space occupied by the button panel in other modes is allocated to the History View. The mode also allows resizing of the main window; on resizing, the History View resizes accordingly.

The keyboard mode appears as follows.

keyboard_mode

I have also added documentation for the History View and Keyboard Mode to the user documentation.

 


Navigating the UI by ipython

However brilliant an API is, one never discovers its wonders without really learning how to use it. While some APIs are developer friendly and let their utility be discovered with ease, I found one that has been a rarely used but awesome tool hiding in plain sight.

Behave and Dogtail used in conjunction (mostly from ipython) have little documentation to guide devs who want to write tests. I ventured into that wilderness and came out wise enough to put together the following tutorial.

My sample application is Evince 3.8, which is the latest version available on Fedora 19 (ancient, yet something I am kind of stuck with at the moment). There are some very good outcomes of my ancientness, which will be apparent in the following post.

While there are many ways to navigate the UI of an application, a lot of them with fancy UIs (at-spi browsers like Sniff, Accerciser et al), nothing beats the simplicity and efficiency of a terminal. So I will be covering ipython in this post, which in my experience was the most efficient way to figure out the most complex and non-a11y-friendly UIs.

The modules I found coming in handy are dogtail.tree, dogtail.rawinput and dogtail.utils.

Once I had all the above installed and ready, I fired off the following commands, ready to navigate the evince UI.

$ ipython
from dogtail.tree import *
from dogtail.utils import *
from dogtail.rawinput import *

Once we have this initialized, we need to "run" the application in order for it to be listed for browsing via ipython.

run('evince')
# Initialize the app here, using the "root" from dogtail.tree
app = root.application('evince')

At this point, everything there is to know about the application is contained in "app", helped along by the magic of autocomplete in ipython, "blink", and "dump".
Standard tutorials online (say the one HERE) right away start describing how to access the various "widgets", "children", "text" and "frame" of the given application. However, at this point I am not interested in how to access the node of the "app" called "x"; rather, I want to know WHAT the node called "x" is, in order to access it in the first place. This is where I'd need to execute the following:

app.dump()

which would give you an output on the following lines:

[application | evince]
 [frame | ]
  [filler | ]
    [tool bar | ]
    [panel | ]
     [filler | ]
      [push button | ]
       [action | click |  ]
      [push button | ]
        [action | click |  ]
    [panel | ]
     [filler | ]
      [text | page-label-entry]
       [action | activate |  ]
      [text | ]
       [action | activate |  ]
    [panel | ]
     [filler | ]
      [push button | ]
       [action | click |  ]
      [push button | ]
       [action | click |  ] ...

And it is this output which spawns multiple things hereon..

What I expected out of my magic "dump" was an illustrative, verbose terminal trace telling me all about "roleName" and "name", much like they said it would be HERE. However, owing to some deprecations in dogtail (I guess), I got something like... well, the above.
It took me a while to make out that not all the magic of dump is lost.

  • The dump trace encapsulates the hierarchy of the nodes of the UI of the application (here evince): frame is the immediate "child" of the application, filler is the child of the frame, and so on. This makes it easy to access (say) the frame as frame = app.child(roleName='frame').
  • Which brings me to my second point: the roleName is the left side of the bracket. For instance, in a line [x|y], x is the roleName, while y is the name/description of the node. If y is missing, we need to file a bug (like I did for 3.8 ;)) to request better accessibility for the application (which, by the way, also helps impaired people using Orca etc.). If, however, y is well defined, then you can keep accessing the objects as in the previous point: var = app.child(roleName='x', name='y'). A point to keep in mind while filing such a bug is that a name should only be present on relevant nodes, like menu items, buttons et al, since it is perfectly normal to have only a "roleName" on fillers, panels, frames etc., the 'meta' objects whose only function is to hold the structure of the UI together.

So what did I do to write some tests for Evince 3.8 without the y's? My mentor, Vita, helped me figure out a way around it: accessing the "indices" instead of the unavailable names for the nodes in question. In the case of evince, the buttons were ordered in the tree by their positions from left to right, so it was easy to navigate.

By default with dogtail, we can use the Node.children property, which simply returns a list of all the immediate children of the node. So starting with the 'toolbar' object we can get to a specific button (let's say the last one shown in the above output):
button = toolbar.children[2].children[0].children[1]
which can be simplified to:
button = toolbar[2][0][1]

which reads (indexed from 0): toolbar's third child, a panel > the panel's first child, a filler > the filler's second child, the desired button.
Herein it is important to note that child() searches for children recursively, not just among the immediate ones.
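The difference between indexing and child() is easy to model. This is a toy Python stand-in, not dogtail's actual implementation, written just to make the two access styles concrete:

```python
class Node:
    """A toy stand-in for dogtail's Node: indexing reaches only
    immediate children, while child() searches the subtree recursively."""
    def __init__(self, roleName, name="", children=()):
        self.roleName = roleName
        self.name = name
        self.children = list(children)

    def __getitem__(self, i):  # node[i] is shorthand for node.children[i]
        return self.children[i]

    def child(self, roleName=None, name=None):
        # depth-first recursive search over all descendants
        for c in self.children:
            if ((roleName is None or c.roleName == roleName) and
                    (name is None or c.name == name)):
                return c
            found = c.child(roleName=roleName, name=name)
            if found is not None:
                return found
        return None

# mimic a slice of the dump output: toolbar > panels > filler > buttons
button = Node("push button", "Next")
toolbar = Node("tool bar", children=[
    Node("panel"),
    Node("panel"),
    Node("panel", children=[
        Node("filler", children=[Node("push button", "Prev"), button]),
    ]),
])

assert toolbar[2][0][1] is button            # indexing: immediate children only
assert toolbar.child(name="Next") is button  # child(): recursive search
```

The asserts show why child() is preferable when a name is available: it finds the node wherever it sits in the tree, whereas the index chain breaks as soon as the UI layout changes.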

At this point, when I am happy that I know how to extract stuff, how do I confirm that what I have in my "button" object is right?

button.click()  # Clicks the respective button
button.blink()  # Highlights the button in red in the evince UI

Essentially, when using ipython to investigate, child() is the most common means of exploration, used to obtain the Node we want; once we have the required node, we can execute actions on it (from Node.actions), for instance the ones mentioned above like blink, click, or typing into it (if it is, say, a text field). It is only when we cannot child() our way to a Node that indexing should be used as a backup.

 

 

 


summing up 59

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • everyone i know is brokenhearted, i don't believe anymore that the answer lies in more or better tech, or even awareness. i think the only thing that can save us is us. and i do think rage is a component that's necessary here: a final fundamental fed-up-ness with the bullshit and an unwillingness to give any more ground to the things that are doing us in. to stop being reasonable. to stop being well-behaved. not to hate those who are hurting us with their greed and psychopathic self-interest, but to simply stop letting them do it. the best way to defeat an enemy is not to destroy them, but to make them irrelevant. recommended
  • how the other half works: an adventure in the low status of software engineers, there was a time, perhaps 20 years gone by now, when the valley was different. engineers ran the show. technologists helped each other. programmers worked in r&d environments with high levels of autonomy and encouragement. to paraphrase from one r&d shop's internal slogan, bad ideas were good and good ideas were great. silicon valley was an underdog, a sideshow, an ellis island for misfits and led by "sheepdogs" intent on keeping mainstream mba culture (which would destroy the creative capacity of that industry, for good) away. that period ended. san francisco joined the "paper belt" cities of boston, new york, washington and los angeles. venture capital became hollywood for ugly people. the valley became a victim of its own success. bay area landlords made it big. fail-outs from mba-culture strongholds like mckinsey and goldman sachs found a less competitive arena in which they could boss nerds around with impunity; if you weren't good enough to make md at the bank, you went west to become a vc-funded founder. the one group of people that didn't win out in this new valley order were software engineers. housing costs went up far faster than their salaries, and they were gradually moved from being partners in innovation to being implementors' of well-connected mba-culture fail-outs' shitty ideas. that's where we are now
  • the problem with founders, the secret of silicon valley is that the benefits of working at a startup accrues almost entirely to the founders, and that's why people repeat the advice to just go start a business. there is a reason it is hard to hire in silicon valley today, and it isn't just that there are a lot of startups. it's because engineers and other creators are realizing that the cards are stacked against them unless they are the ones in charge
  • the pitchforks are coming… for us plutocrats, we rich people have been falsely persuaded by our schooling and the affirmation of society, and have convinced ourselves, that we are the main job creators. it's simply not true. there can never be enough super-rich americans to power a great economy. i earn about 1,000 times the median american annually, but i don't buy thousands of times more stuff. my family purchased three cars over the past few years, not 3,000. i buy a few pairs of pants and a few shirts a year, just like most american men. i bought two pairs of the fancy wool pants i am wearing as i write, what my partner mike calls my "manager pants." i guess i could have bought 1,000 pairs. but why would i? instead, i sock my extra money away in savings, where it doesn't do the country much good. so forget all that rhetoric about how america is great because of people like you and me and steve jobs. you know the truth even if you won't admit it: if any of us had been born in somalia or the congo, all we'd be is some guy standing barefoot next to a dirt road selling fruit. it's not that somalia and congo don't have good entrepreneurs. it's just that the best ones are selling their wares off crates by the side of the road because that's all their customers can afford

FlocktoPune 2014 Day-2-3



I attended several talks and workshops at Flock on the third and last days and found all of them interesting. However, I would like to share a few glimpses of them.

Jens Petersen who is the manager of Red Hat's i18n team (and my manager too) presented a talk with the title 'Fedora i18n past, present, and future'.


A few bullet points from his talk:
(a) Goal of the Fedora i18n project: "Freedom to read and write in one's own language"
(b) Differences between l10n and i18n
(c) Evolution of Anaconda in terms of i18n
(d) Evolution of input method frameworks and the next-generation input method architecture
(e) Fedora i18n projects

After his talk, we met Carlos O'Donell, who is one of the maintainers of glibc, and we had a good discussion about the following issues:
(a) Translation process for man pages
(b) Collation problems in glibc for the Japanese hiragana and katakana scripts
(c) CLDR and Unicode data

We are looking forward to seeing more collaboration with the glibc team in the future.

I attended the "Make tools with fedmsg" workshop by Ralph Bean; the slides for the presentation can be found on his page.

I found this workshop really useful because we use statusapp to keep track of our team's weekly and monthly status reports, and with the help of fedmsg I can automate at least 30-35% of the team status reports.

Later that day I attended Haïkel's and Parag's package review hackfest.


Marina and Owen presented the "GNOME newcomers workshop", during which Owen described the GNOME apps and desktop. During the workshop, I had a wonderful chat with Kalev regarding library API issues.

I also attended the security talks, the kernel testing talk, and the RPM/DNF talk.
To sum it up in one line: "Flock is awesome, kudos to the organizing team!!"
I am really glad to be part of such a passionate community, and I would like to thank Fedora/Flock once again for my travel arrangements.

Want to join the Red Hat Graphics team?

We have an opening on our Graphics Team to work on improving the state of open source GPU drivers. Your tasks would include working with various types of hardware, making sure they work great under Linux, and improving the general state of the Linux graphics stack. Since the work involves specific pieces of hardware, the candidate would need to relocate to our Westford office, just north of Boston.

We are open to candidates with a range of backgrounds, but of course previous history with the Linux kernel codebase, the X.org codebase or Wayland is an advantage.

Please contact me at cschalle-at-redhat-com if you are interested.

FlocktoPune 2014 Day-0-1

This is my first Flock and Fedora conference, though I have been contributing to Fedora for the past few years, and it finally happened in a rush. My plan was to attend GUADEC and Flock together, but because of visa issues I could not attend GUADEC this year. This year's Flock was very special to us because, for the very first time, the Fedora i18n team met each other in person. It was great to meet everyone, and we exchanged thoughts on how we can provide i18n support to all Fedora products.



The first and second days were packed with great talks running in parallel. It was really difficult to choose which to attend and which to skip.

On the first day Pravin gave a talk on fonts, in which he spoke at length about OSFW, the open source font world, from which one can selectively choose and install fonts. Pravin's session was followed by Mike and myself presenting on "Text prediction on desktops".

Later in the afternoon of the first day, there were three excellent talks scheduled at the same time. The topics for the afternoon session were
 (a) Fedora Workstation - Goals, Philosophy, and Future
 (b) Python 3 as Default
 (c) Wayland Input Status
I attended "Fedora Workstation - Goals, Philosophy, and Future" by Christian Schaller. This session was excellent, as it highlighted that desktops are still alive; who says they are dying? It was attended by a large number of people; the hall was jam-packed, with people standing, interested in Christian's talk. In his talk, Christian mentioned that for the Fedora desktop they are also considering use cases for developers. This is very interesting to me and feels quite promising.

I must say Christoph Wickert gave a nice talk explaining the concepts behind the different Fedora working groups and Fedora products. This talk was mainly intended for ambassadors, so that they can explain these products back in the community.

The second day started with a talk by Stephen Gallagher on Fedora Server Role-ing. I was really impressed by the Cockpit project and would like to contribute i18n work to it in the future.

I attended “UEFI: The Great Satan and You” by Adam. I was curious about secure boot options and was quite happy to attend, as Adam explained it really well. There were lots of interesting questions from the audience, especially about Microsoft and the FSF's strategy on it.

The talk “NoSql in Fedora Infra” was delivered in Spanish and also translated into English. I personally would like to see more NoSql in Fedora. Yohan mentioned various NoSql databases and their use cases, and how we can use them in Fedora infra. Ralph highlighted that we can use fedmsg's messages to test the performance of the NoSql databases.

At the end of the second day I attended the "Introduction to Docker" and "Fedora.next.next: Planning for Fedora 22" talks. It's really exciting to see the cool features coming in Fedora products.

I would like to thank Flock (Fedora) for my travel sponsorship and arrangements. Those were perfect! I wish to extend my gratitude to Ruth Suehle.
