GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

July 29, 2016

One year of Mesa

Changes

For the last year and some, my work at Igalia has focused on the Intel i965 driver for Mesa, the open source OpenGL implementation. Iago and Samuel had been working on it for some time; then Edu and Antía joined, and then I joined myself, as the fifth Igalian.

One of the reasons I will always remember working on this project is that I have also been a father for a year and some, so it will always be easy to relate the one to the other. Changes! A project change plus a total life change.

In fact, parenthood affected a little how I joined the project. We (as Igalia) decided around April that I would be a good candidate to join the project. But as my leave of absence was estimated to start around the last weeks of May, and it would be strange to start a project and suddenly disappear, we decided to postpone the joining until just after the parental leave. So I spent some time closing and transferring the stuff I was doing at Igalia at that moment, and starting to get into the specifics of the Intel driver, and then at the end of May my parental leave started. Pedro is here!


Pedro’s day zero
Parenthood

Mesa can be a challenging project, but nothing compared to parenting. A lot of changes. In Spain the usual parental leave for the father is 15 days, where the only flexibility is that you need to take 2 of them just after the child is born, and choose when to take the other 13; but those 13 need to be taken in a row. So I decided to take the 15 days in a row. Igalia gives you 15 extra days. In fact, the equivalent of 15 days, as it gives you flexibility in how to take them. So instead of taking them as 15 full days, I decided to take 30 part-time days. Additionally Pedro’s grandmother (maternal) came to live with us for a month. That allowed us to go back to work step by step. So it was something like:

  • Pedro was born; I am on full parental leave, plus grandmother living at our house and helping.
  • I start to work part-time.
  • Grandmother goes back to Brazil.
  • I start to work full-time.
  • Pedro’s mother goes back to work part-time.

And probably the most complicated moment was when Pedro’s mother went back to work. More or less at that time we switched the usual grandparents’ (paternal) weekend visits to in-week visits. So what is my usual timetable? Three days per week I work 8:00-14:30 at home, and I take care of Pedro in the afternoon, when his mother goes to work. Two days per week it is 8:00-12:30 at home, and 15:00-20:00 at my parents’ home. And that is more or less it, as no day is the same as the other 😉 And now I go to parks a lot more!

Pedro playing in a park

There were other changes too. For example, I switched from being one of the usual suspects at the Igalia office to working mostly from home. In any case the experience has been totally worth it, and it is incredible how fast the time has passed. Pedro is one year old already!

Pedro destroying his birthday cake
Mesa tasks

I almost forgot that this blog post started out talking about my work on Mesa. So many photos! During this year, I have participated (although not alone, especially on spec implementations) in the following tasks (except for the last one, in chronological order):

As for which task I liked the most, I think it was the work done on optimizing the NIR to vec4 pass. And let’s not forget that the work done by Igalia to implement internalformat_query2 and vertex_attrib_64bit, plus the work done by Iago and Samuel on fp64, helped to get Mesa 12.0 exposing OpenGL 4.2 support on Broadwell and later.

What happens with accessibility?

Working full time on the same project for a full year, plus learning how to parent, means that I didn’t have much spare time for accessibility development. Having said that, I have been reviewing ATK patches, doing the ATK releases regularly, and keeping an eye on the GNOME accessibility mailing lists, so I’m still around. And we have Joanmarie Diggs, also a fellow Igalian, still rocking at improving Orca, WebKitGTK and WAI-ARIA.

Thanks

Finally, I’d like to thank both Intel and Igalia for supporting my work on Mesa and i965 all this time, especially for allowing me such a flexible timetable, where what matters is what you deliver and not when you do the work, letting me enjoy parenthood. I also want to thank the Igalian colleagues I have been working with during this year, Iago, Samuel, Antía, Edu, Andrés, and Juan, and all those at Intel who have been helping and reviewing my work during all this year, like Jason Ekstrand, Kenneth Graunke, Matt Turner and Francisco Jerez.


Timing your movie…

A big question when you write a scenario is: how do you time your movie?

CIMA museum’s clock, by Rama (CC by-sa 2.0).

From the scenario

You can already do so from your written script. It is generally accepted that 1 page is roughly equivalent to 1 minute of movie. Of course, to reach such a standard you have to format your file appropriately. I searched the web to find out what these format rules were. Here is what I gathered:

Format

  • Pages are A4.
  • Font is 12-point Courier.
  • Margins are 2.5 cm on every side but the left margin which is 3.5 cm.
  • Add 5.5 cm of margin before speaker names in dialogues.
  • Add 2.5 cm of margin before actual dialogue.
  • No justification (left-align).
  • No line indentation at start of paragraphs.

I won’t list more because there are dozens of resources out there which cover this in detail, sometimes even with examples. For instance, this page was helpful, and for French-speaking readers, this one too (and it uses the international metric system rather than imperial units), or even Wikipedia.
It would seem that the whole point of all these rules is to have a script with as little randomness as possible. A movie script is not meant to be beautiful as an object, but to be as square as possible. Out goes any kind of justification (which stretches or compresses spaces), as well as any line indentation (which does not happen on every line), because they don’t have a behavior set in stone. They were made only so that your document “looks nice”, which a script-writer cares less about, in the end, than being able to say how long the movie will last just by counting pages.

Free Fonts

Some people may have noted that 12-point Courier is a Microsoft font. GNU/Linux users can get it with a package called msttcorefonts. On Debian or Ubuntu, the actual package is “ttf-mscorefonts-installer”, and it does not look like it is in the Fedora repositories. That’s OK, because I really don’t care: I personally use Liberation Mono (Liberation is a font family created by Red Hat in 2007, under a free license). FreeMono is another alternative, but the Liberation fonts work well for me.

You may have noticed that these are all monospace fonts, which means that every character occupies the same horizontal space; ‘i’ and ‘W’, for instance, use up the same width (adding spaces around the ‘i’), as opposed to proportional fonts (more common on the web). Once again, proportional fonts are meant to be pretty whereas monospace fonts are meant to be consistent. It all comes back to a consistent text-to-timing conversion.
I am not sure why Courier ever became a standard in script-writing, but I don’t think any other font would be much of a problem. Just use any metrically-compatible monospace font.

Side note: I read 3 scenarios in the last year (other than mine) and none of them used Courier, nor indeed followed most of the rules here. So really I am not sure how strictly this rule is enforced, at least in France. Maybe in other countries it is more of a hard and fast rule?

Writing with LibreOffice

Right now, I simply write with LibreOffice. I am not going to make a tutorial about using LibreOffice, because that would diverge too much, but my one piece of advice is: use styles! Do not “hardcode” text formatting: don’t increase indents manually, don’t use bold, nor underline your titles…
Instead create styles for “Text body” (default text), “Dialogue speaker”, “Dialogue”, “Scene title”… Then save a template and reuse it every time you write a new scenario.

While writing this post and looking for references, I read weird stuff like “use dedicated software because you don’t want scene titles ending a page”. Seriously? Of course, if you make scene titles by just making your text bold, that happens. But if you use styles, it won’t (the “Keep with next paragraph” option in the “Text flow” tab, which is a default for any Header style). So once again: use styles.

Note: dedicated software does much more than fix this basic issue, and it has a lot more features making a scenarist’s life easier. I was also planning on developing such software myself, so clearly I’m not telling you not to use one! I’m just saying that for now, if you can’t afford dedicated software, LibreOffice is just fine, and styling issues like “scene titles should not end a page” just reflect a lack of knowledge of how to properly use a word processor.

So that’s it? I just follow these rules and I get my timing?

Of course, real life hits back. First of all, languages are more or less verbose. For instance, German and French are more verbose than English, which in turn is more verbose than Japanese. So with the same formatting, a page in French would be less than a minute on screen whereas a page in Japanese would be longer than a minute.

There is also the writer’s style. Not everyone writes as concisely, and you may write the same scenario with a different timing than your colleague would.

As a consequence, writers evaluate their scripts. You can try to act them out, for instance, to see how long your text really lasts. Then you can either create a custom text-to-length conversion, or adapt the text formatting to get back to the “1 page = 1 minute” approximation. If your scripts usually run faster than that, you need more text on one page: smaller margins, or maybe a smaller font?
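
For example (with made-up numbers): if acting out ten pages of your script consistently takes about eight minutes, your personal rate is 0.8 minutes per page, so a 100-page script should yield roughly an 80-minute movie, unless you adjust the formatting until a page comes back to one minute.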

Of course, it may also be that your style is much too verbose. A scenario is not a novel: you should not try to make a beautiful text with carefully crafted metaphors and imagery. You are writing a text for actors to read and understand (and in our case, for painters and animators to draw).

ZeMarmot’s case

Moreover the 1 min = 1 page rule is not consistent within the same script either: a page with no dialogue could last several minutes (descriptions and actions are much more condensed than dialogue) whereas a page with only dialogue could be worth a few seconds of screen time. But that’s OK, since this is all about averages. The timing from the scenario is not meant to be perfect; it gives us an approximation.

Yet ZeMarmot is a particular case since we have no dialogue at all. So are we going to have only 5-minute pages? That was a big question, especially since this is my first scenario. Aryeom helped a lot with her animation experience, and we tried to time several scenes by imagining them or acting them out. This is a good example showing that no rule is ever universal. In our case, it took longer to accurately calibrate our own page-to-time rule.

Animatics

This is more animation-specific: the next step after storyboarding (or before more accurate storyboarding starts) is creating an animatic, which is basically compiling all the storyboard’s images into a single video. From there, we have a full video, and we can try to time each “image”. Should this action be faster or last longer? This requires some imagination, since we may end up with some images lasting a few seconds and we have to imagine all the in-between images to get the full idea. But in the end, this is the ultimate timing. We are able to tell quite accurately how long the movie will last once we agree on an animatic.

Should timing lead the writer?

The big question: should the timing lead us? You can get a different timing than you expect, and there are 2 cases: longer or shorter.

The shorter case is easy. Unless you are really, really too short (and no longer qualify as a feature-length film, for instance), I don’t think it is a problem to have a shorter-than-average movie. I’d take a short but well-timed and interesting movie over a boring long one a hundred times.

Longer is more difficult, because the trend nowadays seems to be longer and longer movies. Now 2h30, sometimes up to 3h, seems to be the standard for big movies (and they manage to lengthen them further in the “director’s cut” edition!). I have seen several movies these last years which were long and boring. I am not even talking about contemplative art movies, but about hard action-packed movies. No, superheroes battling for 3 hours is just too much.
So my advice, if your movie is longer than expected, is to ask yourself: is it really necessary? Won’t it be boring? Of course, I am not the one to make the rules. If you work in Hollywood, well, first you probably don’t read me, and second you don’t care what I say. You will make a 2h30 movie and people will go and watch it anyway. Why not. I’m just saying this as a viewer. And since I think this is really not enjoyable, I don’t want our own viewers’ experience to be boring (well, at least not because of the movie’s length!).

And that’s it for my small insight into timing a movie. Of course, as I already said, I am mostly a beginner on the topic. Everything here is a mix of my research these last months, my own experiments, and Aryeom’s experience… So don’t take my word as is, and don’t hesitate to react in the comments if you have better knowledge or just ideas on the topic.

By the way: ZeMarmot‘s pilot (not the final movie) has been timed to be about 8 minutes long. 🙂

Reminder: if you want to support our animation film, made with Free Software, for which we also contribute back a lot of code, and released under Creative Commons by-sa 4.0 international, you can support it in USD on Patreon or in EUR on Tipeee.

July 28, 2016

Rygel/Shotwell/GUADEC

  • Rygel is currently mainly receiving maintenance work, because reasons. This is hopefully changing soonish.
  • I picked up Shotwell as maintainer and things are coming along nicely, though its architecture sometimes makes changes that sound really easy very hard (e.g. certain import performance improvements). The most annoying part, though, is that the merging of the awesome map feature is somewhat affected by the recent woes regarding MapQuest’s tile server.
  • I’m going to be at GUADEC during its core days.

That’s all for now.

Asset Previewer

Mobile developers work with all kinds of graphics assets, and until now we would use an external tool to browse and preview them.

We have developed a plug-in for both Visual Studio and Xamarin Studio that will provide live previews of various assets right into the IDE. It works particularly well for UrhoSharp projects.

The previewer can display previews of the following asset types right in the IDE:

  • Static Models (*.mdl files)
  • Materials, Skyboxes, Textures (*.dds)
  • Animations (*.ani)
  • 2D and 3D Particles (*.pex)
  • Urho Prefabs (*.xml if the XML is an Urho Prefab)
  • Scenes (*.xml if the XML is an Urho Scene)
  • SDF Fonts (*.sdf)
  • Post-process effects (*.xml if the XML is an Urho RenderPath)
  • Urho UI layouts (*.xml if the XML is an Urho UI).

For Visual Studio, just download the previewer from the Visual Studio Gallery.

For Xamarin Studio, go to Xamarin Studio > Add-ins, open the "Gallery" tab, search for "Graphics asset previewer", and install it.

Rebasing on the way

As the title says, the User Tracker and the Contextual Popovers are merged and work fine 😀. All that remains is to come up with a logical history of the commits so that they make sense chronologically.

Basically, we’ll use the magic that git provides in order to combine, modify, split (and so on…) patches so that they look nice (way better than they look now) when they land :).
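
The main tool for this is an interactive rebase. The branch name and commit subjects below are made up for illustration, but the mechanics are the same:

$ git rebase -i master
pick 1a2b3c4 Add user tracker skeleton
squash 5d6e7f8 Fix typo in user tracker
reword 9a0b1c2 Add contextual popovers

Here "squash" melds the second commit into the first one, "reword" stops to let you edit the third commit’s message, and "edit" would let you stop at a commit and split it apart.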

I admit, the way I committed things doesn’t quite make this task easy, as I would often begin working on something, commit that, then fix some other thing in some other part and commit that, then come back to the first thing that was committed and maybe modify it or delete it completely, or whatnot.

The reason for that is the fact that you cannot always predict what the next step will bring, and that’s totally fine. You don’t even have to. You just make the necessary changes, end up in a place where you realize that you need to rethink some bits, and then go back and do that. As I said before, our magical friend Git is here to save the day (if only I were a guru at that, which I am not, but still, it’s fun).

More on this soon :)


FINAL REMINDER! systemd.conf 2016 CfP Ends on Monday!

Please note that the systemd.conf 2016 Call for Participation ends on Monday, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are very interested in yours, too!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions that may cover any systemd-related topic you'd like. We are interested in submissions both from the developer community and from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

ALSO: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

Automatic metadata download in Music

It’s been quite some time since I last blogged; my semester exams prevented me from working on the project. I’ve had a month since then to get automatic metadata fetching working for the tag editor.

My original plan was to use metadata already exposed by the solid AcoustID + chromaprint plugin combination through grilo, only to realise later that the AcoustID webservice does not provide all the metadata that I would need for Music’s tag editor. It does, however, provide the Musicbrainz recording ID for a music piece, which can be used to fetch a lot of metadata related to that recording. Here’s how we’re doing it…

Chromaprint plugin generates a fingerprint

The chromaprint plugin in grilo uses gstreamer-chromaprint to generate a unique fingerprint for each music file. The resolve function, like any other resolve function in grilo, takes in a list of metadata keys and a GrlMedia. In this case the GrlMedia must be of audio type and contain a URL for the song’s physical location. The supported metadata keys are ‘duration’ and ‘chromaprint’.

def get_chromaprint(self, track_media):
    # Look up the chromaprint source registered by grilo-plugins.
    source = grilo.registry.lookup_source('grl-chromaprint')
    assert source is not None
    fingerprint_key = grilo.registry.lookup_metadata_key('chromaprint')
    assert fingerprint_key != Grl.METADATA_KEY_INVALID

    options = grilo.options
    # Ask for the fingerprint and the duration; both are needed
    # later for the AcoustID lookup.
    keys = [fingerprint_key, Grl.METADATA_KEY_DURATION]

    # Resolves asynchronously and continues in self.resolve_acoustid.
    source.resolve(track_media, keys, options, self.resolve_acoustid, None)

The generated fingerprint is stored in the GrlMedia that is passed on to the callback function self.resolve_acoustid. Read here about how the chromaprint is actually generated.

AcoustID plugin fetches Musicbrainz IDs for the fingerprint

Grilo’s AcoustID plugin again takes in an audio-type GrlMedia with chromaprint and duration data. This combination (fingerprint + duration) is used to look up a particular music piece and, if available, return very accurate data identified with the song. The AcoustID webservice returns a list of recordings that may be possible matches, but currently only the first result is returned as the most probable match. The supported keys for this plugin are ‘title’, ‘mb-recording-ID’, ‘album’, ‘mb-release-ID’, ‘artist’ and ‘mb-artist-ID’.

def resolve_acoustid(self, source, op_id, media, data=None, error=None):
    # Build a fresh audio GrlMedia carrying only the fingerprint and
    # duration, which is all the AcoustID source needs for its lookup.
    audio = Grl.Media.audio_new()
    fingerprint_key = grilo.registry.lookup_metadata_key('chromaprint')
    assert fingerprint_key != Grl.METADATA_KEY_INVALID

    chromaprint = media.get_string(fingerprint_key)
    assert chromaprint is not None
    audio.set_string(fingerprint_key, chromaprint)
    audio.set_duration(media.get_duration())
    # Metadata we want AcoustID to fill in, including the Musicbrainz IDs.
    keys = [
        Grl.METADATA_KEY_MB_ARTIST_ID,
        Grl.METADATA_KEY_ARTIST,
        Grl.METADATA_KEY_MB_ALBUM_ID,
        Grl.METADATA_KEY_ALBUM,
        Grl.METADATA_KEY_MB_RECORDING_ID,
        Grl.METADATA_KEY_TITLE
    ]
    options = Grl.OperationOptions.new()
    options.set_resolution_flags(Grl.ResolutionFlags.NORMAL)

    # Continue in self.resolve_mb once AcoustID has answered.
    plugin_source = grilo.registry.lookup_source('grl-acoustid')
    plugin_source.resolve(audio, keys, options, self.resolve_mb, None)

The ‘audio’ GrlMedia is passed on to another callback, self.resolve_mb, which receives it with the fetched metadata stored in the media itself. Read in detail about the AcoustID webservice here.

Musicbrainz plugin gets song tags for the mb recording ID

The Musicbrainz plugin is finally able to fetch the desired data for a song (recording) with the help of its mb-recording-ID provided through AcoustID. I’ll write about this plugin and the Musicbrainz webservice in another post when it gets reviewed and pushed upstream to grilo-plugins. For now, it takes in a GrlMedia with a valid Musicbrainz recording ID and stores the data fetched from the Musicbrainz database in the media.

def resolve_mb(self, plugin_source, op_id, audio, data=None, error=None):
    mb_source = grilo.registry.lookup_source('grl-musicbrainz')
    # Work on a copy so the instance's key list is not modified, and
    # swap creation date for publication date, which is what the
    # Musicbrainz source exposes.
    keys = list(self.metadata_keys)
    keys.remove(Grl.METADATA_KEY_CREATION_DATE)
    keys.append(Grl.METADATA_KEY_PUBLICATION_DATE)
    options = Grl.OperationOptions.new()
    options.set_resolution_flags(Grl.ResolutionFlags.NORMAL)

    def _mb_callback(mb_source, operation, audio, data=None, error=None):
        if error:
            logger.error(error)
        else:
            # Fill each tag editor entry only if Musicbrainz
            # returned a value for it.
            if audio.get_title():
                self._title_entry.set_text(audio.get_title())
            if audio.get_album():
                self._album_entry.set_text(audio.get_album())
            if audio.get_artist():
                self._artist_entry.set_text(audio.get_artist())
            if audio.get_composer():
                self._composer_entry.set_text(audio.get_composer())
            if audio.get_genre():
                self._genre_entry.set_text(audio.get_genre())
            if audio.get_track_number():
                self._track_entry.set_text(str(audio.get_track_number()))
            if audio.get_album_disc_number():  # note: call it, not the bare method
                self._disc_entry.set_text(
                    str(audio.get_album_disc_number()))
            if audio.get_publication_date():
                self._year_entry.set_text(
                    str(audio.get_publication_date().get_year()))

    mb_source.resolve(audio, keys, options, _mb_callback, None)

Finally the modified media is passed on to another callback responsible for filling in the proper entries in the tag editor GUI. I’m still undecided about how to let the user selectively write only the tags they want. But more on this later.

Thanks to Victor for his work on all these grilo plugins and for helping me understand them. Any helpful comments and suggestions are welcome.


Machine-specific Git config changes

I store my .gitconfig in Git, naturally. It contains this block:

[user]
        email = will@willthompson.co.uk
        name = Will Thompson

which is fine until I want to use a different email address for all commits on my work machine, without needing to run git config user.email in every working copy. In the past I’ve just made a local branch of the config, merging and cherry-picking as needed to keep in sync with the master version, but I noticed that Git reads four different config files, in this order, with later entries overriding earlier ones:

  1. /etc/gitconfig – system-wide stuff, doesn’t help on multi-user machines
  2. $XDG_CONFIG_HOME/git/config (aka ~/.config/git/config) – news to me!
  3. ~/.gitconfig
  4. $GIT_DIR/config – per-repo, irrelevant here

So here’s the trick: put the standard config file at ~/.config/git/config, and then override the email address in ~/.gitconfig:

[user]
        email = wjt@endlessm.com

Ta-dah! Machine-specific Git config overrides. The spanner in the works is that git config --global always updates ~/.gitconfig if it exists, but it’s a start.
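
To set this up, you can move the shared config into place and keep only the override in ~/.gitconfig. The shell session below is just an illustration of the idea:

$ mkdir -p ~/.config/git
$ mv ~/.gitconfig ~/.config/git/config
$ printf '[user]\n\temail = wjt@endlessm.com\n' > ~/.gitconfig
$ git config user.email
wjt@endlessm.com

Since the files are read in the order listed above, the value in ~/.gitconfig wins over the one in ~/.config/git/config.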

On the killing of intltool

If you have a project that uses intltool, you should be trying to get rid of it in favor of using AM_GNU_GETTEXT instead. Matthias wrote a nice post about this recently. Fortunately, it’s very easy to do. I decided to port gnome-chess during some downtime today, and ran into only one tough problem:

make[1]: Entering directory '/home/mcatanzaro/.cache/jhbuild/build/gnome-chess/po'
Makefile:253: *** target pattern contains no '%'. Stop.
make[1]: Leaving directory '/home/mcatanzaro/.cache/jhbuild/build/gnome-chess/po'

This was pretty inscrutable, but I eventually discovered the cause: I had forgotten to remove [encoding: UTF-8] from POTFILES.in. This line is an intltool thing and you have to remove it when porting, just as you need to remove the type hints from the file, or it will break the Makefile that gets generated. This is just a heads-up, as it seems like an easy thing to forget, and the error message provided by make is fairly useless.
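
For reference, the change amounts to deleting the encoding line and the type hints. With a hypothetical POTFILES.in (the file names here are made up), before:

[encoding: UTF-8]
[type: gettext/glade]data/gnome-chess.ui
src/gnome-chess.vala

and after:

data/gnome-chess.ui
src/gnome-chess.vala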

A couple unrelated notes:

  • If your project uses git.mk, as any Autotools project really should, you’ll have to modify that too.
  • Don’t forget to remove any workarounds added to POTFILES.skip to account for intltool’s incompatibility with modern Automake distcheck.
  • For some reason, msgfmt merges translations into XML files in reverse alphabetical order, the opposite of intltool, which seems strange and might be a bug, but is harmless.

Say thanks to Daiki Ueno for his work maintaining gettext and enhancing it to make this change practical, and to Javier Jardon for pushing this within GNOME and working to remove intltool from important GNOME modules.

July 27, 2016

On discoverability

Daniel G. Siegel posted an item on his blog recently that is very closely tied to usability testing: On discoverability.

I've discussed elsewhere that usability is about real people doing real tasks in a reasonable amount of time. Some researchers also refer to "learnability" and "memorability" to define usability—this is very similar to discoverability. Can you discover the features of the system just by poking at it? Is the user interface obvious enough that you can figure it out on your own?

Daniel's post includes a video of a usability test that explores the Google search box on a smartphone. It's a short but very interesting video to watch, as the tester clearly becomes more and more frustrated with how to initiate a search.

I want to close by highlighting some of Daniel's comments:
if there is no way to discover what operations are possible just by looking at the screen and the interaction is numbed with no feedback by the devices, what's left? the interaction gets reduced to experience and familiarity where we only rely on readily transferred, existing skills.

And that's why usability is so important, especially for open source software where users often must learn the software with no tutorials or other instructions to guide them.

gnome-boxes: Coder’s log

June 27th - July 9th

So another two weeks have passed and it’s time to sum things up and reflect a little on the struggles and accomplishments that have marked this period, which was quite a bumpy ride compared to the others, but definitely more exciting.

Good news first: I have managed to make gnome-boxes successfully advertise SPICE connections on the local network and browse already existing connections, although the UI is not yet implemented and there is still work to do.

Hopefully very soon, GNOME Boxes will be able to create new “Boxes” out of advertised SPICE connections in a very straightforward manner. Although there is still a decent amount of work to do, I have already developed strong feelings for this awesome feature, so the future sounds very bright for it :).

Now, to talk about the shared folders feature: I managed to make GNOME Boxes automatically add the WebDAV channel to every virtual machine, and I am very close to making Boxes install spice-webdavd after any express installation. In other words, shared folders are going to work out of the box, in most cases requiring no configuration from the user.

As for the bad news: well, now that I think about it, there isn’t any bad news per se, but adding a visual indication when someone hovers over a machine with a file is proving to be a little more difficult than expected, because the new UI component ends up on top of our machine’s display and replaces it as the drag destination, so it will most likely require some workarounds. Here is an example of the intended UI:

All in all, the light at the end of the tunnel is a lot more visible now for some features, some tiny bugs have been fixed, and the future of Boxes looks very interesting and exciting!

Avahi browsing example in Vala

After recently posting an example of publishing services in Vala using Avahi, let’s try to paint the whole picture by posting an example of browsing those services. Enjoy!

using Avahi;

public class AvahiBrowser {
    private const string service_type = "_demo._tcp";

    private Client client;
    private MainLoop main_loop;
    private List<ServiceResolver> resolvers;
    private ServiceBrowser service_browser;

    public AvahiBrowser () {
        main_loop = new MainLoop ();
        try {
            service_browser = new ServiceBrowser (service_type);
            service_browser.new_service.connect (on_new_service);
            service_browser.removed_service.connect (on_removed_service);
            client = new Client ();
            client.start ();
            service_browser.attach (client);
            resolvers = new List<ServiceResolver> ();
            main_loop.run ();
        } catch (Avahi.Error e) {
            warning (e.message);
        }
    }

    public void on_found (Interface @interface, Protocol protocol, string name, string type, string domain, string hostname, Address? address, uint16 port, StringList? txt) {
        print ("Found name %s, type %s, port %u, address %s\n", name, type, port, address.to_string ());
    }

    public void on_new_service (Interface @interface, Protocol protocol, string name, string type, string domain, LookupResultFlags flags) {
        ServiceResolver service_resolver = new ServiceResolver (Interface.UNSPEC,
                                                                Protocol.UNSPEC,
                                                                name,
                                                                type,
                                                                domain,
                                                                Protocol.UNSPEC);
        service_resolver.found.connect (on_found);
        service_resolver.failure.connect ((error) => {
            warning (error.message);
        });

        try {
            service_resolver.attach (client);
        } catch (Avahi.Error e) {
            warning (e.message);
        }

        // Keep a reference so the resolver is not freed before it answers.
        resolvers.append (service_resolver);
    }

    public void on_removed_service (Interface @interface, Protocol protocol, string name, string type, string domain, LookupResultFlags flags) {
        print ("Removed service %s, type %s, domain %s\n", name, type, domain);
    }

    static int main (string[] args) {
        AvahiBrowser browser = new AvahiBrowser ();

        return 0;
    }
}
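
To try it, save the class to a file and compile it against the Avahi GObject bindings. The vapi/package name below is an assumption and may differ per distribution:

$ valac --pkg avahi-gobject browser.vala
$ ./browser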

For any questions or suggestions on improving this example, feel free to contact me at viorel.visarion@gmail.com.


GNOME Keysign 0.6

It’s been a while since I reported on GNOME Keysign. The last few releases have been exciting, because they introduced nice features which I had long been waiting to get around to implementing.

So GNOME Keysign is an application to help you in the OpenPGP Keysigning process. That process will eventually require you to get hold of an authentic copy of the OpenPGP Key. In GNOME Keysign this is done by establishing a TCP connection between two machines and by exchanging the data via that channel. You may very well ask how we ensure that the key is authentic. The answer for now has been that we transmit the OpenPGP fingerprint via a secure channel and that we use the fingerprint to authenticate the key in question. This achieves at least the same security as when doing conventional key signing, because you get hold of the key either via a keyserver or a third party who organised the “key signing party”. Although, admittedly, in very rare cases you transfer data directly via a USB pendrive or so. Of course, this creates a whole new massive attack surface. I’m curious to see technologies like wormhole deployed for this use case.

The security of going to the Internet to download the key is questionable, because not only do you leak that you intend to communicate with a certain person, but you also expose yourself to attacks like someone dropping revocation certificates or UIDs onto the key of your interest. While the former issue is tackled by not going to the Internet in the first place, the latter had not been dealt with. But those days are over now.

As of 0.5, GNOME Keysign also generates an HMAC of the data to be transferred and encodes that in the QR code. The receiving end can then verify whether the downloaded data matches the expected value. I am confident that a new-generation hash function would serve the same purpose, but I’m not entirely sure how easy it is to get Keccak or SipHash into users’ hands. The HMAC, while being cryptographic overkill, should be fine, though. But the construction leaves a bad taste, especially because a known key is currently used to generate the HMAC. It is, however, a mechanism built into Python, and I expect to replace it with something more sensible.
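
The scheme is roughly the following. This is a minimal sketch only; the fixed key, hash choice and encoding are illustrative, not necessarily what GNOME Keysign actually ships:

import hmac
import hashlib

# Illustrative fixed key: a known key gives integrity protection against
# accidental corruption, not against an attacker who knows it.
FIXED_KEY = b'gnome-keysign'

def mac_for_keydata(keydata):
    # The sender computes this tag and embeds it in the QR code.
    return hmac.new(FIXED_KEY, keydata, hashlib.sha256).hexdigest()

def verify(keydata, tag_from_qr):
    # The receiver recomputes the tag over the downloaded data and
    # compares in constant time.
    return hmac.compare_digest(mac_for_keydata(keydata), tag_from_qr)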

In security, we had better imagine a strong attacker who is capable of executing attacks which we think are not necessarily easy or even possible to mount. If we can defend against such a strong attacker, then we may trust the system to resist weaker attacks, too. One such difficult attack, I think, is to inject just one frame while, at the same time, controlling the network. The attacker could then make the victim scan a rogue barcode which delivers a rogue MAC which in turn validates the wrong data. Such an attack should not go unnoticed and, as of 0.5, GNOME Keysign will display the frame that contained the barcode.

This is what it looked like before:

2016-keysign-before

And now you can see the frame that got decoded. This is mainly because the GStreamer zbar element also provides the frame.

2016-keysign-after

Another interesting feature is the availability of a separate tool for producing signatures for a given key in a file. The scenario is that you may have received a key from your friend via a (trusted, haha) pendrive, a secure network connection (like wormhole), or any other means you consider sufficiently integrity-preserving. In order to sign that key you can now execute something like python -m keysign.gnome-keysign-sign-key to run all the signing logic but without the whole key transfer stuff. This is a bit experimental though, and I am not yet happy about the state the program is in, so it’s not directly exposed to users by being installed as an executable.

GNOME Keysign is available in OpenSuSE now. I don’t know the exact details of how to make it work, but rumour has it that you can just do a zypper install gnome-keysign. While getting there we identified a few issues along the way. For example, the GStreamer zbar element needs to be present. But that was a problem, because the zbar element was not built since the zbar library was not available, so that needed to get in first. Then we realised that the most modern OpenSuSE uses a very recent GnuPG which the currently used GnuPG library does not handle so nicely. That caused a few headaches. Also, the firewall seems to be an issue which needs to be dealt with. So much to code, so little time! ;-)

Building and Developing GStreamer using Visual Studio

Two months ago, I talked about how we at Centricular have been working on a Meson port of GStreamer and its basic dependencies (glib, libffi, and orc) for various reasons — faster builds, better cross-platform support (particularly Windows), better toolchain support, ease of use, and for a better build system future in general.

Meson also has built-in support for things like gtk-doc, gobject-introspection, translations, etc. It can even generate Visual Studio project files at build time so projects don't have to expend resources maintaining those separately.

Today I'm here to share instructions on how to use Cerbero (our “aggregating” build system) to build all of GStreamer on Windows using MSVC 2015 (wherever possible). Note that this means you won't see any Meson invocations at all because Cerbero does all that work for you.

Note that this is still all unofficial and has not been proposed for inclusion upstream. We still have a few issues that need to be ironed out before we can do that¹.

First, you need to setup the environment on Windows by installing a bunch of external tools: Python 2, Python3, Git, etc. You can find the instructions for that here:

https://github.com/centricular/cerbero#windows

This is very similar to the old Cerbero instructions, but some new tools are needed. Once you've done everything there (Visual Studio especially takes a while to fetch and install itself), the next step is fetching Cerbero:

$ git clone https://github.com/centricular/cerbero.git

This will clone and check out the meson-1.8 branch that will build GStreamer 1.8.x. Next, we bootstrap it:

https://github.com/centricular/cerbero#bootstrap

Now we're (finally) ready to build GStreamer. Just invoke the package command:

python2 cerbero-uninstalled -c config/win32-mixed-msvc.cbc package gstreamer-1.0

This will build all the `recipes` that constitute GStreamer, including the core libraries and all the plugins including their external dependencies. This comes to about 76 recipes. Out of all these recipes, only the following are ported to Meson and are built with MSVC:

bzip2.recipe
orc.recipe
libffi.recipe (only 32-bit)
glib.recipe
gstreamer-1.0.recipe
gst-plugins-base-1.0.recipe
gst-plugins-good-1.0.recipe
gst-plugins-bad-1.0.recipe
gst-plugins-ugly-1.0.recipe

The rest still mostly use Autotools, plain GNU make, or CMake. Almost all of them are still built with MinGW. The only exception is libvpx, which uses its custom make-based build system but is built with MSVC.

Eventually we want to build everything including all external dependencies with MSVC by porting everything to Meson, but as you can imagine it's not an easy task. :-)

However, even with just these recipes, there is a large improvement in how quickly you can build all of GStreamer inside Cerbero on Windows. For instance, the time required for building gstreamer-1.0.recipe which builds gstreamer.git went from 10 minutes to 45 seconds. It is now easier to do GStreamer development on Windows since rebuilding doesn't take an inordinate amount of time!

As a further improvement for doing GStreamer development on Windows, for all these recipes (except libffi because of complicated reasons), you can also generate Visual Studio 2015 project files and use them from within Visual Studio for editing, building, and so on.

Go ahead, try it out and tell me if it works for you!

As an aside, I've also been working on some proper in-depth documentation of Cerbero that explains how the tool works, the recipe format, supported configurations, and so on. You can see the work-in-progress if you wish to.

1. Most importantly, the tests cannot be built yet because GStreamer bundles a very old version of libcheck. I'm currently working on fixing that.

July 26, 2016

Testing for Usability

I recently came across a copy of Web Redesign 2.0: Workflow That Works (book, 2005) by Goto and Cotler. The book includes a chapter on "Testing for Usability" which is brief but informative. The authors comment that many websites are redesigned because customers want to add new features or want to drive more traffic to the website. But they rarely ask the important questions: "How easy is it to use our website?" "How easily can visitors get to the information they want and need?" and "How easily does the website 'lead' visitors to do what you want them to do?" (That last question is interesting for certain markets, for example.)

The authors highlight this important attribute of usability: (p. 212)
Ease of use continues to be a top reason why customers repeatedly return to a site. It usually only takes one bad experience for a site to lose a customer. Guesswork has no place here; besides, you are probably too close to your site to be an impartial judge.

This is highly insightful, and underscores why I continue to claim that open source developers need to engage with usability. As an open source developer, you are very close to the project you are working on. You know where to find all the menu items, you know what actions each menu item represents, and you know how to get the program to do what you need it to do. This is obvious to you because you wrote the program. It probably made a lot of sense to you at the time to label a button or menu item the way you did. Will it make the same sense to someone who hasn't used the program before?

Goto and Cotler advise to test your current, soon-to-be-redesigned product. Testing the current system for usability will help you understand how users actually use it, and will indicate rough spots to focus on first.

The authors also provide this useful advice, which I quite like: (p. 215)
Testing new looks and navigation on your current site's regular users will almost always yield skewed results. As a rule, people dislike change. If a site is changed, many regular users will have something negative to say, even if the redesign is easier to use. Don't test solely on your existing audience.

Emphasis is mine. People dislike change. They will respond negatively to any change. So be cautious and include new users in your testing.

But how do you test usability? I've discussed several methods here before, including Heuristic Review, Prototype Test, Formal Usability Test, and Questionnaires. Similarly, Goto and Cotler recommend traditional usability testing, and suggest three general categories of testing: (p. 218)

Each type differs in setting, testers, and style:

  • Informal testing: may take place in the tester's work environment or another setting; testers are co-workers or friends; the style is a simple task list, observed and noted by a moderator.
  • Semiformal testing: may or may not take place in a formal test environment; testers are pre-screened and selected; the moderator is usually a member of the team.
  • Formal testing: takes place in a formal facility; testers are pre-screened and selected; the style is scenario tasks moderated by a human factors specialist, possibly with a one-way mirror and video monitoring.


The authors also recommend building a task list that represents what real people actually would do, and writing a usability test plan (essentially, a brief document that describes your overall goals, typical users, and methodology). Goto and Cotler follow this with a discussion about screening test candidates, then conducting the session. I didn't see a discussion about how to write scenario tasks.

The role of the moderator is to remain neutral. When you introduce yourself as moderator, remind the tester that you will be a silent observer, and that you are testing the system, not them. Encourage the testers to "think aloud"—if they are looking for a "Print" button, they should say "I am looking for a 'Print' button." Don't describe the tasks in advance, and don't set expectations (such as "This is an easy task").

It can be hard to moderate a usability test, especially if you haven't done it before. You need to remain an observer; you cannot "rescue" a tester who seems stuck. Let them work it out for themselves. At the same time, if the tester has given up, you should move on to the next task.

Goto and Cotler recommend you compile and summarize data as you go; don't leave it all for the end. Think about your results while they are still fresh in your mind. The authors prefer a written report to summarize the usability test, showing the results of each test, problem areas, comments, and feedback. As a general outline, they suggest: (p. 231)
  1. Executive summary
  2. Methodology
  3. Results
  4. Findings and recommendations
  5. Appendices

This is fine, but I prefer to summarize usability test results via a heat map. This is a simple visual device that concisely displays test results in a colored grid. Scenario tasks are on rows, and testers are on columns. For each cell, use green to represent a task that was easy for the tester to accomplish, yellow to represent a more difficult task, orange for somewhat hard, red for very difficult, and black for tasks the tester could not figure out.
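
For example, a test with three scenario tasks and four testers might produce a heat map like this (an invented illustration, with G, Y, O, R and B standing for green, yellow, orange, red and black):

         Tester 1  Tester 2  Tester 3  Tester 4
Task 1      G         G         Y         G
Task 2      Y         O         R         Y
Task 3      G         B         O         R

A mostly green row means the task is easy for everyone; a row dominated by orange, red or black flags a problem area to fix first.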

Whatever your method, at the end of your test, you should identify those items that need immediate attention. Work on those, make improvements as suggested by the usability test, then test it again. The value in usability testing is to make it an iterative part of the design process: create a design, test it, update the design, test it again, repeat.

Save. Load. Reset. Shortcuts get new features.

In recent days, I have been working on implementing the functionality behind saving and loading custom shortcuts. Pitivi's save logic stores the shortcuts in an external file which holds the user's preferred settings. This file's contents can then be loaded and used to fill the Shortcuts window, if the file exists. If it does not exist, the shortcuts manager will simply use the default accelerator settings.

Secondly, I made a step forward in implementing the resetting functionality for the shortcuts. Whenever a user has customised shortcuts, she'll be able to reset them to Pitivi's factory settings with a single click of a button. The important point here is that the back-end functionality for this button has just been implemented, so we are ready to put it all together into some nice UI. It was important to define the logic of storing the default shortcuts, so that we have them accessible at any time, even if the user decides to change all of them. Thanks to this, we are ready to reset the shortcuts at any point in time.
Users will be able to reset either a particular action's accelerators, or reset them all with a single click.
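
The overall shape of that save/load/reset logic is roughly the following. This is only a sketch: the file location, format and names below are invented for illustration and are not Pitivi's actual implementation.

import configparser
import os

CONFIG = os.path.expanduser("~/.config/pitivi/shortcuts.conf")

def save(accels):
    # accels maps action names to accelerator lists,
    # e.g. {"editor.save": ["<Control>s"]}.
    parser = configparser.ConfigParser()
    parser["accels"] = {action: ",".join(a) for action, a in accels.items()}
    os.makedirs(os.path.dirname(CONFIG), exist_ok=True)
    with open(CONFIG, "w") as f:
        parser.write(f)

def load(defaults):
    # Fall back to the factory settings when no custom file exists.
    if not os.path.exists(CONFIG):
        return dict(defaults)
    parser = configparser.ConfigParser()
    parser.read(CONFIG)
    return {action: accels.split(",")
            for action, accels in parser["accels"].items()}

def reset(defaults):
    # Dropping the custom file makes the next load use the defaults.
    if os.path.exists(CONFIG):
        os.remove(CONFIG)
    return dict(defaults)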

Furthermore, I was able to practice unit testing a lot. For all the pieces of work - the save, load and reset functionality - I provided unit tests with fairly extensive use of the mock library. I learnt a lot through this; I have to admit that at the very beginning, the tricks mock actually does behind the scenes were hard for me to understand. By now, however, Pitivi has good-quality code for all the work I have done (Alex, thanks for the excellent reviews), supported by relevant tests.
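
As a taste of what mock makes easy, here is a standalone toy test that checks a save function writes the expected data without ever touching the disk. The save function here is a made-up stand-in, not Pitivi's real code:

from unittest import mock

def save(accels, path="shortcuts.conf"):
    # Toy stand-in for the real save logic.
    with open(path, "w") as f:
        f.write("\n".join(accels))

def test_save_writes_accels():
    # Patch the built-in open so no file is ever created.
    with mock.patch("builtins.open", mock.mock_open()) as m:
        save(["<Control>s", "<Control>o"])
    m.assert_called_once_with("shortcuts.conf", "w")
    m().write.assert_called_once_with("<Control>s\n<Control>o")

test_save_writes_accels()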

So what is next?
Over the course of this week and the next, I will concentrate primarily on the UI and on bringing together all the little pieces of work I have done. Hopefully by the end of this week, I will be able to present an implemented and working UI for the shortcuts section of the Preferences dialog, and also add the buttons to reset one or all shortcuts.

Bringing your kids to GUADEC 2016

If you’re coming to GUADEC 2016 and bringing your kids along, there’s a handy wiki page you can look at for tips on what to do while in Karlsruhe:

https://wiki.gnome.org/GUADEC/2016/Kids

If you’re coming with small kids, I’ll bring along some toys, colors, and other essentials to the conference. To make that happen, put the age of your child in the wiki at the link above.

Going to GUADEC

See you there.

July 25, 2016

GNOME Keysign - Report #2 GSoC 2016

More than a week ago I blogged about the new GUI made with GtkBuilder and Glade [1]. Now I will talk about what has changed with the GUI since then, and also about the new functionality that has been added to it.

I will start with the new "transition" page which I've added for the key download phase. Before going more in depth, I have to say that the app now knows at each moment what state it is in, which really helps in adding more functionality.

The transition page gives the user more feedback about the download status of a key, because in the old gnome-keysign GUI, when the download was interrupted, the GUI didn't show anything. Now the GUI is more explicit about the process:




If the download fails, the spinner widget stops and an error message is shown. If the download is successful, the app auto-advances to the confirmation page and the user is presented with details of the key they are about to sign:


A few people noticed that I am displaying short key IDs in the GUI. I want to say that the entire key fingerprint is used when authenticating a downloaded key; the other info shown in the GUI is just key details that I get from GnuPG and display in the interface. Still, I will stop displaying the 8-character ID, because the user may be influenced somehow by it.

Other changes that have been done since the last post were:

  • added a "Confirm" button to sign a key
  • added a transition phase for the key signing as well
  • implemented the "Refresh" keylist button
  • made minor GUI adjustments
  • switched from print to logging
  • improved code quality

Apart from these, one major change is the GPG functionality added to the new GUI. The gpgmh.py file written by Tobias acts as a common interface for whatever gpg libraries we'll use in the future. For now, you can test the new GUI with your own keys on the gpgmh branch [2]. This requires having the 'monkeysign' package installed [3].
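
If you want to try it, the branch can be checked out like this (the clone URL is inferred from link [2] below):

$ git clone https://github.com/andreimacavei/gnome-keysign-glade-ui.git
$ cd gnome-keysign-glade-ui
$ git checkout gpgmh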

In the following week I'll be adding the widgets for the QR code and the QR scanner, as well as writing a simple script that will create a Flatpak app.



[1] http://andreimacavei.blogspot.ro/2016/07/gnome-keysign-new-gui-and-updates.html
[2] https://github.com/andreimacavei/gnome-keysign-glade-ui/commits/gpgmh
[3] http://monkeysphere.info/monkeysign/

on discoverability

i recently stumbled on a bunch of videos by clubinternet, exposing people who have never used a smartphone to google. their task was to search for photos of their favorite actress. you'd guess there are not many products out there which are easier to use than a google search box. well, watch this:

while i can't deny a slightly humorous touch, this video has troubled me. touch interfaces have improved drastically in recent years, and even allow non-tech-savvy people to successfully interact with digital devices. nevertheless i always felt that they are not the goose that lays golden eggs. you see, we are actually just moving objects below a screen made of glass. what other object in the world behaves like this? i am of the opinion that there has to be a better way to interact with devices. in the words of bret victor:

I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.

Is that so bad, to dump the tactile for the visual? Try this: close your eyes and tie your shoelaces. No problem at all, right? Now, how well do you think you could tie your shoes if your arm was asleep? Or even if your fingers were numb? When working with our hands, touch does the driving, and vision helps out from the back seat.

Pictures Under Glass is an interaction paradigm of permanent numbness. It denies our hands what they do best. And yet, it's the star player in every Vision Of The Future.

To me, claiming that Pictures Under Glass is the future of interaction is like claiming that black-and-white is the future of photography. It's obviously a transitional technology. And the sooner we transition, the better.

but this is not the only problem touch interfaces have. maybe it is because of the way we move objects below a screen of glass, maybe it is because a screen does not give us tactile feedback, and maybe we need something completely different. but touch interfaces lack discoverability, like almost all digital products of today's time and age. interaction elements are concealed in the user interface, buttons are disguised as text, input fields are not obviously marked as such, and interaction elements don't give feedback. we can probably tell what elements we can interact with based on our experience, but there is no way to tell just by looking at the screen. this issue is amazingly well summarized by don norman and bruce tognazzini:

Today’s devices lack discoverability: There is no way to discover what operations are possible just by looking at the screen. Do you swipe left or right, up or down, with one finger, two, or even as many as five? Do you swipe or tap, and if you tap is it a single tap or double? Is that text on the screen really text or is it a critically important button disguised as text? So often, the user has to try touching everything on the screen just to find out what are actually touchable objects.

the truth is this: if there is no way to discover what operations are possible just by looking at the screen and the interaction is numbed with no feedback by the devices, what's left? the interaction gets reduced to experience and familiarity where we only rely on readily transferred, existing skills.

with our digital products we are building environments, not just tools. yet we often think only about the different tasks inside our product. we have to view our products in the context of how and where they are being used. our products are more than just static visual traits; let's start to see them as such.

July 23, 2016

Need for PC - The plan


I realized quite some time ago that my PC is struggling to keep up with the pace, so I have decided that it is time for an upgrade (after almost 6 years with my Dell Inspiron 560 minitower with a C2D Q8300 quad-core).

I "upgraded" the video card a couple of months ago because the old one did not support the OpenGL 3.2 needed by GtkGLArea. First I went with an ATI Radeon HD6770 I received from my gamer brother, but it was loud, I did not use it as much as it was worth using, and it came at a high cost (108W TDP, which bumped the idle PC's consumption by 30-40W, from 70-80W to 110-120W). So I traded it for another one: a low-consumption (passively cooled, 25W TDP) ATI Radeon HD4550, which works well with Linux and all my Steam games whenever I am gaming (I'm a casual gamer). Consumption went back to 90-100W.

After that came the power supply: I replaced the Dell-provided 300W supply with a more efficient one, a 330W Seasonic SS330HB. This resulted in another 20W drop in power consumption, idling below 70W.

The processor is fairly old, with a 95W TDP but performance way below today's i7 processors with the same TDP, so it might be worth upgrading. That means a motherboard + CPU + cooler + memory upgrade; as I have the rest of the components, I will reuse them, and add a new (old) case to the equation: a PowerMac G5 from around 2004.

So here's the basic plan:
Case - PowerMac G5 modded for mATX compatibility, and repainted - metallic silver the outer case, matt black the inner case - inspired by Mike 7060's G5 Mod
CPU - Intel core i7 6700T - 35W TDP
Cooler - Arctic Alpine 11 Plus - silent, bigger brother of the fanless Arctic Alpine 11 Passive (rated for up to 35 W TDP; with the i7 6700T right at the edge, I did not want to risk it)
Motherboard - 1151 socket, DDR4, USB3, 4-pin CPU and case fan controller sockets, HDMI and DVI video outs as the requirements - I chose the MSI B150M Mortar because of guaranteed Linux compatibility (thanks, Phoronix), plus 2 onboard PWM case fan controllers and a PWM-controlled CPU fan header
Memory - 2x8GB DDR4 Kit - Kingston Hyperx Fury
PSU - Seasonic SS-330HB mounted inside the G5 PSU case, original G5 PSU fans replaced with 2x 60mm Scythe Mini Kaze for silent operation
Case Cooling - Front 2x 92mm - Arctic F9 PWM PST in the original mounts

Video card - Onboard Intel or optional ATI Radeon HD4550 if (probably will not happen) the onboard will not be enough
Optical drive (not sure if it is required) - start with existing DVD-RW drive
Storage - 120 GB Kingston V300 + 1TB HDD - existing

Plans for later
(later/optional) update optical drive to a Blu-Ray drive
(later/optional) Arctic F9 PWM PST in the original G5 intake mounts, or 120 mm Arctic F12 PWM PST in new intake mounts.

I'll soon be back with details on preparing the case, probably the hardest part of the whole build. The new parts are already ordered (the CPU was pretty hard to find in stock, and will be delivered in a week or so instead of the usual 1-2 days).

July 22, 2016

2016-07-22 Friday.

  • Up early, chat with Eloy, Tor, ownCloud Webinar practice with Lenny, Snezana & Cornelius.

July 21, 2016

The new Keyboard panel

After implementing the new redesigned Shell of GNOME Control Center, it’s now time to move the panels to a bright new future. And the Keyboard panel just walked this step.

After Allan gave his usual show with new mockups (didn’t you see? Check it here), the Keyboard panel was updated to follow the new design. Check this out:

The new Keyboard panel

Working on this panel led me to a few conclusions:

  • The new programming tools and facilities that Gtk+ and GLib landed make a huge difference in code legibility and code quality.
  • GObject is awesome. Really.
  • Since GObject is awesome, let's use all the functionality it gives us for free:)
  • I tend to overdocument my code.

And our beloved set of sequential pictures and music:

 

Excited? This is still under heavy development, and we just started the reviews. You can check the current state here, or test the wip/gbsneto/new-keyboard-panel branch. As always, share your comments, ideas and suggestions!

2016-07-21 Thursday.

  • J's birthday - presents at breakfast; dug through the mountainous admin-queue, synched with Andras & Kendy.

The state of gamepad support in Games

Gamepad support has now been merged into GNOME Games v3.21.4!!! This means that you can play your favorite retro games using a gamepad!!!

Which gamepads are supported?

But you may be wondering which gamepads are supported out of the box. The answer is: a lot of them! We use the SDL mapping format to map your gamepad to a standard gamepad (by this I mean a seventh-generation/XBox-360 kind of gamepad). And we use a huge community-maintained database of mappings, so your device will most likely be there. We use a slightly modified version of this database. See #94 and #95 for more details.

Custom mappings?

Well, I just realized while writing this post that we had forgotten about this :sweat_smile:. But I have made a PR for it, so it should get merged soon. As of now there is no GUI for it. Currently you can use Steam or the SDL test/controllermap tool to generate a custom mapping string as described here. Then you should paste it in a file in the user’s config directory. As per this PR, this file is <config_dir>/gnome-games/gamecontrollerdb.txt (<config_dir> is mostly ~/.config).
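
For illustration, each entry in that file follows the SDL mapping format: a device GUID, a human-readable name, and then a list of control assignments. The line below is a made-up example (not a real device), shaped like the entries in the community database:

    03000000de280000ff11000001000000,Example Pad,a:b0,b:b1,x:b2,y:b3,back:b6,start:b7,leftshoulder:b4,rightshoulder:b5,dpup:h0.1,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,leftx:a0,lefty:a1,rightx:a3,righty:a4,platform:Linux,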

Multiplayer support

Multiplayer games are quite well supported. As of now there is no GUI for reassigning gamepads to other players, but the default behaviour is quite predictable. Just plug in the gamepads in the order of the players and all will be well.

The exact behaviour is this:

  • the first player with no gamepad is assigned the keyboard
  • if there are N initially plugged-in gamepads, they are assigned to the first N players and the keyboard is assigned to player N + 1
  • when a gamepad is plugged in, it is assigned to the first player with no gamepad (which may not be the last one); it can replace the keyboard
  • when a gamepad is unplugged, its player is left without a gamepad, but the players to which other gamepads are assigned don't change
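
For example (my reading of the rules above, with hypothetical gamepads): with gamepads A and B plugged in at startup and three players, A goes to player 1, B to player 2 and the keyboard to player 3. Unplug A and player 1 is left without input while player 2 keeps B; plug in a new gamepad and it is assigned to player 1 again, the first player without one.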

Next steps

The next steps involve adding a UI to remap the gamepads assigned to the players, and then maybe a UI for remapping the controls if time permits.

Happy gaming!

GSoC progress part #3

My last week has been quite busy, but it all paid off in the end, as I’ve managed to overcome the issue that I had with the login phase. Thankfully, I was able to take a look at how the postMessage() API is used to do the login in Firefox for iOS and implement it myself in Epiphany.

To summarize it, this is how it’s done:

  1. Load the FxA iframe with the service=sync parameter in a WebKitWebView.
  2. Inject a few JavaScript lines to listen to FirefoxAccountsCommand events (sent by the FxA Server). This is done with a WebKitUserContentManager and a WebKitUserScript.
  3. In the event listener, use postMessage() to send back to WebKit the data received from the server.
  4. In the C code, register a script message handler with a callback that gets called whenever something is sent through the postMessage() channel. This is done with webkit_user_content_manager_register_script_message_handler().
  5. In the callback you now hold the server’s response to your request. This includes all the tokens you need to retrieve the sync keys.
  6. Profit!

Basically, postMessage() acts like a forwarder between JavaScript and WebKit. Cool!
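
For the curious, here is a minimal sketch of steps 2–4 in WebKitGTK+ terms. This is not Epiphany's actual code: the "toChrome" channel name and the injected script are made up for illustration, and error handling is omitted:

    #include <webkit2/webkit2.h>
    #include <JavaScriptCore/JavaScript.h>

    /* Step 4: called whenever the injected script posts a message. */
    static void
    message_received_cb (WebKitUserContentManager *manager,
                         WebKitJavascriptResult   *result,
                         gpointer                  user_data)
    {
      JSGlobalContextRef ctx = webkit_javascript_result_get_global_context (result);
      JSValueRef value = webkit_javascript_result_get_value (result);
      JSStringRef js_str = JSValueToStringCopy (ctx, value, NULL);
      size_t size = JSStringGetMaximumUTF8CStringSize (js_str);
      char *str = g_malloc (size);
      JSStringGetUTF8CString (js_str, str, size);
      g_print ("FxA data: %s\n", str); /* step 5: the tokens live in here */
      g_free (str);
      JSStringRelease (js_str);
    }

    static WebKitWebView *
    create_fxa_view (void)
    {
      WebKitUserContentManager *manager = webkit_user_content_manager_new ();

      /* Step 4: C-side handler for window.webkit.messageHandlers.toChrome */
      g_signal_connect (manager, "script-message-received::toChrome",
                        G_CALLBACK (message_received_cb), NULL);
      webkit_user_content_manager_register_script_message_handler (manager, "toChrome");

      /* Steps 2-3: forward FirefoxAccountsCommand events back to WebKit */
      WebKitUserScript *script = webkit_user_script_new (
          "window.addEventListener('FirefoxAccountsCommand', function (e) {"
          "  window.webkit.messageHandlers.toChrome.postMessage(JSON.stringify(e.detail));"
          "});",
          WEBKIT_USER_CONTENT_INJECT_TOP_FRAME,
          WEBKIT_USER_SCRIPT_INJECT_AT_DOCUMENT_END,
          NULL, NULL);
      webkit_user_content_manager_add_script (manager, script);

      /* Step 1 happens by loading the FxA iframe URL in this view. */
      return WEBKIT_WEB_VIEW (webkit_web_view_new_with_user_content_manager (manager));
    }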

With this new sign-in method, users can also benefit from the possibility to create new Firefox accounts. The iframe contains a “Create an account” link that shows a form by which users can create a new account. The user will have to verify the account before signing in.


Using modern gettext

gettext has seen quite some enhancements in recent years, after Daiki Ueno started maintaining it. It can now extract (and merge back) strings from diverse file formats, including many of the formats that are important for desktop applications. With gettext 0.19.8, there is really no need anymore to use intltool or GLib’s dated gettext glue (AM_GLIB_GNU_GETTEXT and glib-gettextize).

Since intltool still sticks around in quite a few projects, I thought that I should perhaps explain some of the dos and don’ts for how to get by with plain gettext. Javier Jardon has been tirelessly fighting a battle for using upstream gettext; maybe this will help him reach the finish line with that project.

Extracting strings

xgettext is the tool used to extract strings from sources into .pot files.

In addition to programming languages such as C, C++, Java, Scheme, etc, it recognizes the following files by their typical file extensions (and it is smart enough to disregard a .in extension):

    • Desktop files: .desktop
    • GSettings schemas: .gschema.xml
    • GtkBuilder ui files: .ui
    • Appdata files: .appdata.xml and .metainfo.xml

You can just add these files to POTFILES.in, without the extra type hints that intltool requires.
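
For example, a POTFILES.in can then be as plain as a list of paths (these names are hypothetical):

    data/org.example.App.desktop.in
    data/org.example.App.appdata.xml.in
    data/org.example.App.gschema.xml
    src/window.ui
    src/main.c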

One important advantage of xgettext’s xml support, compared to intltool, is that you can install .in files that are valid; no more tag mutilation like <_description> required.

Merging translations

The trickier part is merging translations back into the various file formats. Sometimes that is not necessary, since the file has a reference to the gettext domain, and consumers know to use gettext at runtime: that is the case for GSettings schemas and GtkBuilder ui files, for example.

But in other cases, the translations need to be merged back into the original file before installing it. In these cases, the original file from which the strings are extracted often has an extra .in extension. The tool doing this task is msgfmt.

Intltool installs autotools glue which can define make rules for many of these cases, such as @INTLTOOL_DESKTOP_RULE@. Gettext does not provide this kind of glue, but the msgfmt tool is versatile enough that you can write your own rules fairly easily, for example:

%.desktop: %.desktop.in
        msgfmt --desktop -d $(top_srcdir)/po \
               --template $< -o $@
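
The xml-based formats work the same way, using msgfmt’s --xml mode; for example, a corresponding rule for appdata files might look like this:

%.appdata.xml: %.appdata.xml.in
        msgfmt --xml -d $(top_srcdir)/po \
               --template $< -o $@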

Extending gettext

Gettext can be extended to understand new xml formats. To do so, you install .its and .loc files. The syntax for these files is explained in detail in the gettext docs. Libraries are expected to install these files for ‘their’ formats (GLib and GTK+ already do, and PolicyKit will do the same soon).

If you don’t want to wait for your favorite format to come with built-in its support, you can also include its files with your application; gettext will look for such files in $XDG_DATA_DIRS/gettext/its/.
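
As a rough sketch of what these files look like for a hypothetical “myformat” (check the gettext docs for the exact schema), the .loc file tells gettext which documents an ITS rules file applies to, and the .its file marks which elements are translatable:

    <?xml version="1.0"?>
    <locatingRules>
      <locatingRule name="My Format" pattern="*.myformat.xml">
        <documentRule localName="myformat" target="myformat.its"/>
      </locatingRule>
    </locatingRules>

    <?xml version="1.0"?>
    <its:rules xmlns:its="http://www.w3.org/2005/11/its" version="2.0">
      <its:translateRule selector="/myformat" translate="no"/>
      <its:translateRule selector="//name | //description" translate="yes"/>
    </its:rules>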

 

July 20, 2016

BOF session of Nautilus – GUADEC

Hello,

As the title says, we will have a discussion/BOF session about Nautilus at GUADEC.

As you may know, discussing on the internet is rarely a great experience, with the big disadvantage that influencing people over it doesn’t work well. From a developer's point of view, I don’t know who I am talking with and why, so in projects where discussions are a daily occurrence it is difficult to know what we should put at the top of the priority list.

The small hackfest is going to be focused more on the philosophical side, with specific actionable items for the future.

We will discuss what we have done wrong in the past; what users are missing, like dual panel and type-ahead search, why they are missing it, and how we can improve those use cases; and we will also talk about the transition from Nautilus 2 to Nautilus 3 and what we can learn from it in order to make changes a smoother experience.

The program is here. I will gladly add anything people would like to talk about.

If you ever wanted to influence Nautilus, this is your opportunity, come to GUADEC.

Do not use the comment section to discuss these topics:) just grab your backpack and come to GUADEC.


libinput is done

Don't panic. Of course it isn't. Stop typing that angry letter to the editor and read on. I just picked that title because it's clickbait and these days that's all that matters, right?

With the release of libinput 1.4 and the newest feature to add tablet pad mode switching, we've now finished the TODO list we had when libinput was first conceived. Let's see what we have in libinput right now:

  • keyboard support (actually quite boring)
  • touchscreen support (actually quite boring too)
  • support for mice, including middle button emulation where needed
  • support for trackballs including the ability to use them rotated and to use button-based scrolling
  • touchpad support, most notably:
    • proper multitouch support on touchpads [1]
    • two-finger scrolling and edge scrolling
    • tapping, tap-to-drag and drag-lock (all configurable)
    • pinch and swipe gestures
    • built-in palm and thumb detection
    • smart disable-while-typing without the need for an external process like syndaemon
    • more predictable touchpad behaviours because everything is based on physical units [2]
    • a proper API to allow for kinetic scrolling on a per-widget basis
  • tracksticks work with middle button scrolling and communicate with the touchpad where needed
  • tablet support, most notably:
    • each tool is a separate entity with its own capabilities
    • the pad itself is a separate entity with its own capabilities and events
    • mode switching is exported by the libinput API and should work consistently across callers
  • a way to identify if multiple kernel devices belong to the same physical device (libinput device groups)
  • a reliable test suite
  • Documentation!

A side effect of libinput is that we are also trying to fix the rest of the stack where appropriate. So far this has mostly meant pushing stuff into systemd/udev, with the odd kernel fix as well. Specifically, the udev bits mean we
  • know the DPI density of a mouse
  • know whether a touchpad is internal or external
  • fix up incorrect axis ranges on absolute devices (mostly touchpads)
  • try to set the trackstick sensitivity to something sensible
  • know when the wheel click is less/more than the default 15 degrees

And of course, the whole point of libinput is that it can be used from any Wayland compositor and take away most of the effort of implementing an input stack. GNOME, KDE and Enlightenment already use libinput, and so does Canonical's Mir. And some distributions use libinput as the default driver in X through xf86-input-libinput (Fedora 22 was the first to do this). So overall libinput is already quite a success.

The hard work doesn't stop of course, there are still plenty of areas where we need to be better. And of course, new features come as HW manufacturers bring out new hardware. I already have touch arbitration on my todo list. But it's nice to wave at this big milestone as we pass it into the way to the glorious future of perfect, bug-free input. At this point, I'd like to extend my thanks to all our contributors: Andreas Pokorny, Benjamin Tissoires, Caibin Chen, Carlos Garnacho, Carlos Olmedo Escobar, David Herrmann, Derek Foreman, Eric Engestrom, Friedrich Schöller, Gilles Dartiguelongue, Hans de Goede, Jackie Huang, Jan Alexander Steffens (heftig), Jan Engelhardt, Jason Gerecke, Jasper St. Pierre, Jon A. Cruz, Jonas Ådahl, JoonCheol Park, Kristian Høgsberg, Krzysztof A. Sobiecki, Marek Chalupa, Olivier Blin, Olivier Fourdan, Peter Frühberger, Peter Hutterer, Peter Korsgaard, Stephen Chandler Paul, Thomas Hindoe Paaboel Andersen, Tomi Leppänen, U. Artie Eoff, Velimir Lisec.

Finally: libinput was started by Jonas Ådahl in late 2013, so it's already over 2.5 years old. And the git log shows we're approaching 2000 commits and a simple LOCC says over 60000 lines of code. I would also like to point out that the vast majority of commits were done by Red Hat employees, I've been working on it pretty much full-time since 2014 [3]. libinput is another example of Red Hat putting money, time and effort into the less press-worthy plumbing layers that keep our systems running. [4]

[1] Ironically, that's also the biggest cause of bugs because touchpads are terrible. synaptics still only does single-finger with a bit of icing and on bad touchpads that often papers over hardware issues. We now do that in libinput for affected hardware too.
[2] The synaptics driver uses absolute numbers, mostly based on the axis ranges for Synaptics touchpads making them unpredictable or at least different on other touchpads.
[3] Coincidentally, if you see someone suggesting that input is easy and you can "just do $foo", their assumptions may not match reality.
[4] No, Red Hat did not require me to add this. I can pretty much write what I want in this blog and these opinions are my own anyway and don't necessarily reflect Red Hat yadi yadi ya. The fact that I felt I had to add this footnote to counteract whatever wild conspiracy comes up next is depressing enough.

July 19, 2016

Cosimo in BJGUG

Last month Cosimo came to Beijing, and we had a meetup with the Beijing GNOME User Group and the Beijing Linux User Group at the SUSE office. Cosimo introduced ‘Looking ahead to GNOME 3.22 and beyond’; Flatpak drew lots of attention. Here I just share some photos. Thanks for coming, Cosimo!

BIN_6490ss

BIN_6497ss
Martin's child (Martin is from BLUG)
BIN_6500ss

Niclas Hedhman, from the Apache Software Foundation
BIN_6499ss

GUADEC Flatpak contest

I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.

Contest

To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed up to now (application, runtime or extension), hosted in a public repository.

You will have to send me an email about the location of that repository.

I will choose a winner amongst the participants on the eve of the lightning talks, based on, but not limited to, the difficulty of packaging, the popularity of the software packaged, and its redistributability potential.

You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.

Prize

A piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

Good luck to one and all!

Automatic decompression of archives

With extraction support in Nautilus, the next feature that I’ve implemented as part of my project is automatic decompression. While the name is a bit fancy, this feature is just about extracting archives instead of opening them in an archive manager. From the UI perspective, this only means a little change in the context menu:

previous_menu
automatic_decompression_menu

Notice that only the “Open With <default_application>” menu item gets replaced. Archives can still be opened in other applications.

Now why would you want this? The reason behind it is to reduce the need to work with files in a compressed format. Instead of opening the archive in a different application to take out some files, you can extract it all from the start and then interact with the files straight from the file manager. Once the files are on your system, you can do anything that you could have done from an archive manager and more. For example, if you only wanted to extract a few files, you can simply remove the rest.

One could argue that extracting an archive can take much longer than just opening it in an archive manager. While this can be true for very large compressed files, most of the time the process takes only a few seconds – about the same time it takes to open a different application. Moreover, if you just want to open a file from inside the archive, the archive manager will first extract it to your disk anyway.

This might be a minor change in terms of code and added functionality, but it is quite important for how we interact with compressed files. For users who are not fond of it, we decided to add a preference for disabling automatic decompression.

preferences

That’s pretty much it for extraction in Nautilus. Right now I’m polishing compression, so I’ll see you in my next post where I talk about it! As always, feedback, suggestions and ideas are much appreciated:)


REMINDER! systemd.conf 2016 CfP Ends in Two Weeks!

Please note that the systemd.conf 2016 Call for Participation ends in less than two weeks, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are interested in yours even more!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions that may cover any systemd-related topic you'd like. We are interested both in submissions from the developer community and in submissions from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

And keep in mind:

REMINDER: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

July 18, 2016

Builder Happenings

Over the last couple of weeks I’ve started implementing Run support for Builder. This is somewhat tricky business since we care about complicated matters: everything from autotools support to profiler/debugger integration to flatpak and jhbuild runtime support. Each of these complicates and contorts the problem in various directions.

Discovering the “target binary” in your project via autotools is no easy task. We have a few clever tricks but more are needed. One thing we don’t support yet is discovering bin_SCRIPTS, so launching targets like python or gjs applications is not ready yet. Once we discover .desktop files, that should start working. Pretty much any other build system would make this easier to implement.

So much work behind the scenes for a cute little button.

Screenshot from 2016-07-17 22-09-21

While running, you can click stop to kill the spawned application’s process group.

Screenshot from 2016-07-17 22-10-38

But that is not all that is going on. Matthew Leeds has been working on search and replace recently (as well as a whole bunch of paper cuts). That work has landed and it is looking really good!

Screenshot from 2016-07-17 22-12-45

Also exciting is that, thanks to Sebastien Lafargue, we have a fancy new color picker that integrates with Builder’s source editor. You can visualize colors in your source files and change them using the dropper or numerous sliders, based on how you’d like to tweak the color.

Screenshot from 2016-07-17 22-11-38

I still have a bunch of things I want to land before GUADEC, so back to the hacker den for me.

July 17, 2016

Extraction support in Nautilus

The first feature added to Nautilus as part of my project is support for extracting compressed files. They can be extracted to the current directory or to any other location. The actions are available in the context menu:

extract_context_menu

Now you might be wondering: why add these to Nautilus if they look exactly the same as file-roller’s extension? Well, handling extraction internally comes with a few changes:

  • improved progress feedback, integrated into the system used by the rest of the operations
  • fine-grained control over the operation, including conflict situations which are now handled using Nautilus’ dialogs
  • and, probably the most important change, extracting files in a way that avoids cluttering the user’s workspace. No matter what the archive’s contents are, they will always be placed in a single file or top-level folder – I’ll elaborate on that in a moment.

extract_progress

extract_conflict

 

As I mentioned in my first post, the goal of this project is to simplify working with archives, and creating just one top-level item as a result of an extraction really reduces complexity. It is done in a pretty simple way:

  • if the archive has a single root element whose base name matches the archive’s (like image.zip containing image.png or a folder named image/), the root element is extracted as it is
  • if the root element has a different name or the archive has multiple elements, the contents are extracted into a folder with the same name as the archive, without its extension
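
A few hypothetical examples of how those rules play out:

    image.zip containing image/            →  image/
    image.zip containing image.png         →  image.png
    photos.zip containing a.png and b.png  →  photos/, with a.png and b.png inside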

As a result, the output will always have the name of the source archive, making it easy to find after an extraction. Also, the maximum number of conflicts an extraction can have is just one, the output itself. Hurray, no more need to go through a thousand dialogs!

If you have any suggestion or idea on how to improve this operation, feel free to drop a comment with it! Feedback is also much appreciated:) See you in the next one!


July 16, 2016

Getting a network trace from a single application

I recently wanted a way to get a network packet trace from a specific application. My googling showed me an old askubuntu thread that solved this by using Linux network namespaces.

You create a new network namespace, that will be isolated from your regular network, you use a virtual network interface and iptables to make the traffic from it reach your regular network. Then you start an application and wireshark in that namespace and then you have a trace of that application.

I took that idea and made it into a small program, hosted on github, nsntrace.

> nsntrace
usage: nsntrace [-o file] [-d device] [-u username] PROG [ARGS]
Perform network trace of a single process by using network namespaces.

-o file send trace output to file (default nsntrace.pcap)
-d device the network device to trace
-u username run PROG as username 

It does pretty much the same as the askubuntu thread above describes, but in just one step.

> sudo nsntrace -d eth1 wget www.google.com
Starting network trace of 'wget' on interface eth1.
Your IP address in this trace is 172.16.42.255.
Use ctrl-c to end at any time.

--2016-07-15 12:12:17-- http://www.google.com/
Location: http://www.google.se/?gfe_rd=cr&ei=AbeIV5zZHcaq8wfTlrjgCA [following]
--2016-07-15 12:12:17-- http://www.google.se/?gfe_rd=cr&ei=AbeIV5zZHcaq8wfTlrjgCA
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html [ <=> ] 10.72K --.-KB/s in 0.001s

2016-07-15 12:12:17 (15.3 MB/s) - ‘index.html’ saved [10980]

Finished capturing 42 packets.

> tshark -r nsntrace.pcap -Y 'http.response or http.request'
16 0.998839 172.16.42.255 -> 195.249.146.104 HTTP 229 GET http://www.google.com/ HTTP/1.1
20 1.010671 195.249.146.104 -> 172.16.42.255 HTTP 324 HTTP/1.1 302 Moved Temporarily (text/html)
22 1.010898 172.16.42.255 -> 195.249.146.104 HTTP 263 GET http://www.google.se/?gfe_rd=cr&ei=AbeIV5zZHcaq8wfTlrjgCA HTTP/1.1
31 1.051006 195.249.146.104 -> 172.16.42.255 HTTP 71 HTTP/1.1 200 OK (text/html)

If it is something you might have use for or find interesting, please check it out, and help out with patches. It turns out I have a lot to learn about networking and networking code.

All the best!

Maps and tiles

Hello all!

Right now we are having infrastructural problems with Maps. We can no longer use the MapQuest tiles; see the mail thread from the maps list archive here for more information.

We are working on getting past this and getting a working Maps application released soon. But this also shows us more clearly that we need to get a better grip on the tiles infrastructure if we want to have a maps application and/or a maps infrastructure in GNOME. We are having good discussions, and I think we will get through this with a nice plan to prevent things like this from happening again, and also to do better in the future and do cooler stuff with tiles.

All the best!

Generic C++ GObject signals wrapper

Recently I've discovered that connecting to signals in gstreamermm can be really inconvenient. The problem doesn't exist in the other mm libraries, because most of the classes and their signals are wrapped.
But GStreamer allows users to define their own elements, so it's actually impossible to wrap everything in gstreamermm (for now, the library supports wrappers for the GStreamer core and base plugins).

Currently, if you want to connect to a signal in gstreamermm, you have two options:

  1. Using the pure C API:

    auto typefind = Gst::ElementFactory::create_element("typefind");

    g_signal_connect (typefind->gobj(),
                      "have-type",
                      G_CALLBACK(cb_typefind_havetype),
                      (gpointer *)typefind->gobj());

    static void cb_typefind_havetype (GstTypeFindElement *typefind,
                                      guint probability,
                                      GstCaps *caps,
                                      gpointer user_data)
    {
      // callback implementation
    }

    Well, it's not very bad. But... you have to use C structures in the callback instead of the C++ wrappers.

  2. Using the gstreamermm API. As I mentioned, gstreamermm provides wrappers for the core and base plugins, so some of the elements (and their signals) are already wrapped in the library:

    auto typefind = Gst::TypeFindElement::create();
    typefind->signal_have_type().connect(
        [] (guint probability, const Glib::RefPtr<Gst::Caps>& caps)
        {
          // callback implementation
        });

However, many plugins are not wrapped (and never will be), so usually you need to either write a wrapper for the element you want to use (and then maintain that wrapper as well), or use the pure C API.
Moreover, I'm going to remove the plugin API in the next release [1], so users won't be able to use the gstreamermm API even for the base and core plugins. I was wondering if it would be possible to write a generic wrapper for GObject signals. So... there you are! The solution is not perfect yet, and I haven't tested it much, but so far it works fine with a few plugins and signals.

namespace Glib
{
  template <typename T>
  static constexpr T wrap (T v, bool=true)
  {
    return v;
  }

  template <typename T>
  static constexpr T unwrap (T v, bool=true)
  {
    return v;
  }

  template<typename T>
  using unwrapped_t = decltype(unwrap(*((typename std::remove_reference<T>::type*)nullptr)));

  template<typename T>
  constexpr T return_helper()
  {
    typedef unwrapped_t<T> Ret;
    return Ret();
  }

  template<>
  constexpr void return_helper()
  {
    return void();
  }
}


template<typename>
class signal_callback;

template<typename Ret, typename ...T>
class signal_callback<Ret(T...)>
{
  template<typename ...Args>
  static auto callback(void* self, Args ...v)
  {
    using Glib::wrap;
    typedef sigc::slot< void, decltype(wrap(v))... > SlotType;

    void* data = std::get<sizeof...(Args)-1>(std::tuple<Args...>(v...));

    // Do not try to call a signal on a disassociated wrapper.
    if (dynamic_cast<Glib::Object*>(Glib::ObjectBase::_get_current_wrapper((GObject*) self)))
    {
      try
      {
        if (sigc::slot_base *const slot = Glib::SignalProxyNormal::data_to_slot(data))
        {
          (*static_cast<SlotType*>(slot))(wrap(std::forward<Args>(v), true)...);
        }
      }
      catch (...)
      {
        Glib::exception_handlers_invoke();
      }
    }

    return Glib::return_helper<Ret>();
  }

public:
  auto operator()(const std::string& signal_name, const Glib::RefPtr<Glib::Object>& obj)
  {
    using Glib::unwrap;
    static std::map<std::pair<GType, std::string>, Glib::SignalProxyInfo> signal_infos;

    auto key = std::make_pair(G_TYPE_FROM_INSTANCE (obj->gobj()), signal_name);
    if (signal_infos.find(key) == signal_infos.end())
    {
      signal_infos[key] = {
        signal_name.c_str(),
        (GCallback) &callback<Glib::unwrapped_t<T>..., void*>,
        (GCallback) &callback<Glib::unwrapped_t<T>..., void*>
      };
    }

    return Glib::SignalProxy<Ret, T... >(obj.operator->(), &signal_infos[key]);
  }
};


auto typefind = Gst::ElementFactory::create_element("typefind");
signal_callback<void(guint, const Glib::RefPtr<Gst::Caps>&)> signal_wrapper;

signal_wrapper("have-type", typefind).connect(
    [&ready, &cv] (guint probability, const Glib::RefPtr<Gst::Caps>& caps) {
      std::cout << "have-type: probability=" << probability << std::endl;
      Gst::Structure structure = caps->get_structure(0);
      const Glib::ustring mime_type = structure.get_name();
      std::cout << "have-type: mime_type=" << mime_type << std::endl;

      structure.foreach([] (const Glib::QueryQuark& id, const Glib::ValueBase& value) {
        const Glib::ustring str_id = id;
        gchar* str_value = g_strdup_value_contents(value.gobj());
        std::cout << "Structure field: id=" << str_id << ", value=" << str_value << std::endl;
        g_free(str_value);
        return true;
      });
    });
Full source of the code can be found on github [2].
As you can see, you still have to know the type of the callback, but at least you can use the gstreamermm C++ classes.
There are a couple of things left to do in this code, like getting the last parameter from the list in a more efficient way than through the tuple, etc.
I don't feel it is stable enough to integrate into gstreamermm, but probably I'll do that in the future. Also, we could even use it internally in glibmm to reduce the amount of generated code.

Links
[1] https://bugzilla.gnome.org/show_bug.cgi?id=755395
[2] https://gist.github.com/loganek/7833089caff73ff2e8b1f076c8f7910e

I’m going to GUADEC!

Hey people – I will be coming to GUADEC!

I am excited to be a speaker this time. So this will happen:

  • We will have a workshop about contributing to open source – this is for you! For all the newcomers struggling to get started.
  • I will be able to hold a talk about how to grow an open source community – I have spent a lot of time on this one and people started asking me questions about it.
  • We will have the awesome GSoC lightning talks! The admin team is already working with the students to get you an interesting session about the latest news from our GSoC students – and possibly the first time they’re on stage!
  • If nobody keeps me from it, I will attempt explaining coala again and what has happened to it since the last GUADEC. There have been lots of changes: companies and OS projects started using it productively, and I would *love* to work together with you to improve code quality in GNOME.

Many thanks to the GNOME foundation for providing travel as well as accommodation to me! I look forward to meeting you people again!

https://wiki.gnome.org/Travel/Policy?action=AttachFile&do=get&target=sponsored-badge-simple.png

GNOME Logs GSoC Progress Report

Hello everyone,

In this post I will cover the progress made on the GNOME Logs GSoC project over the previous weeks. The search popover is complete, but the patches related to it are yet to be merged; I hope they will get merged in the coming week. Here I will describe the features I implemented in the search popover. If you want a brief recap of the search popover, please see my earlier blog post about it.

The implemented search popover looks like this when the drop-down button beside the search bar is clicked:

popover-main

Clicking on the button related to the what label opens up this treeview, which allows us to select the journal parameters in which to search the user-entered text:

popover-select-parameter       popover-select-parameter-2

If we select any particular field from the treeview, it shows an option to select the search type:

popover-search-type

The search type option is hidden when “All Available Fields” is selected, as an exact search doesn’t make sense in that case; only the substring search type is available by default there.

Clicking on the button related to the when label shows this treeview, from which we select the journal range filters:

popover-rage-filters

If we select the “Set Custom Range” option, we go to another submenu:

custom-range-submenu

It allows us to set a custom range for the journal, including the starting and ending date and time. Clicking on either of the select date buttons shows this calendar, from which we can select the date:

custom-range-calendar

Clicking on either of the select time buttons shows these time selection spinboxes:

custom-range-time (12-hour format)
custom-range-time-24 (24-hour format)

The time spinboxes change depending upon the time format used in the current timezone.

This was all about the search popover as it is currently implemented. From next week, I will be working on writing a search provider for GNOME Logs, which can be used to query journal entries according to the text entered in the GNOME Shell search bar. I will also be working on getting the patches for the search popover merged into the master branch of GNOME Logs. After all the patches related to the search popover are merged, I will make a video on how the search popover works. I would like to thank my mentor David King for guiding me in the right direction and helping me get my patches reviewed and merged.

My next post will be about the search provider for GNOME Logs coming next week.

So, stay tuned and Happy Coding:)


July 15, 2016

Progress so far

As things are beginning to take shape, it's high time for a progress update! If you aren't aware of what my project consists of, be sure to check my introductory post first. In short, the main focus of the project is wrapping the map horizontally in Champlain in order to ensure smoother panning.

So far a few bugs regarding coordinate translation have been fixed, so as to have clear ground to work on. Other minor fixes concerned dragging markers between map clones and the zooming animation behaving weirdly.

The main challenge was making the clones responsive to mouse events: simply cloning the map layer (using ClutterClones) implied having completely static (unresponsive to events) user layers (e.g. markers). The solution involved moving the real map (the source of the clones) in real time based on the cursor location. Always keeping the real map under the cursor creates the seamless illusion that the map is actually responsive throughout the horizontal wrap.

However, the fix came along with some other unexpected issues, some of which were easy to fix, like markers spanning over two clones, and others still requiring some work. 

The upcoming period will mostly consist of fixing bugs and getting closer to a polished horizontal map wrapping. If you are interested in my work, you can take a look at my personal GitHub repository, which contains most of the work I've done so far.



