GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

July 31, 2015

Retro ThinkPad Survey 4: Miscellaneous


Please keep the feedback coming! Click here to take survey 1, here to take survey 2 or here to take survey 3 if you have not responded yet.

You can take survey 4 by clicking here.

“Next week I’ll post a recap of the results and discuss where we are on the project so far. Thanks again for all the passion and time you have put into this effort. We are listening.” © David Hill


Remember to register for GUADEC

I’m on a boat at the moment.


The sails have been set towards Sweden. With me is this large banner.


Looking forward to seeing you all!

Register for GUADEC below:
https://registration.guadec.org/


Fedora 22 on Cubietruck

In a previous post (How-to set up network audio server based on PulseAudio and auto-discovered via Avahi) I wrote about how I set up a network audio server. I'm actually using a Cubietruck for it.

Now I want to write up how I installed F22 and got the audio card working (it's not trivial).


Requirements

  • Fedora 22 minimal image from dl.fedoraproject.org
  • Installed package uboot-images-armv7
  • A few hours (microSD is really slow)

Write Fedora and u-boot images to microSD card:

# xzcat -T 4 Fedora-Minimal-armhfp-22-3-sda.raw.xz | dd of=/dev/mmcblk0; sync
# dd if=/usr/share/uboot/Cubietruck/u-boot-sunxi-with-spl.bin of=/dev/mmcblk0 bs=1024 seek=8 conv=fsync,notrunc

Read the partition table, mount the root filesystem and remove the root password:

# partprobe /dev/mmcblk0
# mount /dev/mmcblk0p3 /mnt
# vim /mnt/etc/passwd

And just remove the x from root's line.
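
For instance, the edited root line would end up looking something like this (a sketch; the shell may differ on your image):

root::0:0:root:/root:/bin/bash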

Now we can insert the card into the Cubietruck and power it on. It will not boot fully, because initial-setup-text.service will run on the serial console. So just press Ctrl+Alt+F2 and log in as root. Now we need to disable initial-setup and set the root password.

# systemctl disable initial-setup-text.service
# passwd

Okay, Fedora works and we can use it. Now for the second part: sound. The upstream kernel, as of 4.2-rc4, doesn't support the sunxi codec driver. Maxime Ripard pointed me to his fork of the Linux kernel, which carries some patches that add support for the sunxi codec:

* f968132 Add cubieboard2 audio codec
* 8c1b463 codec: changes for Mele A1000
* 42ac277 ARM: sun7i: dt: enable audio codec on Cubietruck
* 4da9e56 ARM: sun7i: dt: Add sunxi codec device node
* 68d66ec ASoC: sunxi: add support for the on-chip codec on early Allwinner SoCs
* 93f01ab dma: sun4i: Add support for the DMA engine on sun[457]i SoCs
* 47c8977 ARM: sun7i: Add mod1 clock nodes
* ff52fcc ARM: sunxi: Add codec clock support
* 3233580 ARM: sunxi: Add PLL2 support
* 080077c clk: sunxi: mod1 clock support
* 048d967 clk: sunxi: codec clock support
* 7017705 clk: sunxi: Add a driver for the PLL2
* 883efa6 clk: Add a basic factor clock

So I just took Fedora's kernel git tree, applied the patches, enabled some options in the kernel config, built it in Koji for F22 and installed it on the Cubietruck. Unfortunately, the network driver doesn't work in 4.2-rc4 due to a bug. It's not fixed upstream yet; the only way to have network for now is to use a USB network card. I'm not sure whether those patches are going into 4.3, but I'm sure we will get them upstream at some point.

diff --git a/config-arm-generic b/config-arm-generic
index 8246024..bd1dee4 100644
--- a/config-arm-generic
+++ b/config-arm-generic
@@ -347,3 +347,6 @@ CONFIG_VFIO_AMBA=m
 # CONFIG_BMP085_SPI is not set
 # CONFIG_TI_DAC7512 is not set
 # CONFIG_SPI_ROCKCHIP is not set
+
+CONFIG_DMA_SUN4I=y
+CONFIG_SND_SUNXI_SOC_CODEC=m
diff --git a/kernel.spec b/kernel.spec
index d1c7d97..a5680ee 100644
--- a/kernel.spec
+++ b/kernel.spec
@@ -22,7 +22,7 @@ Summary: The Linux kernel
 %global zipsed -e 's/\.ko$/\.ko.xz/'
 %endif
 
-# % define buildid .local
+%define buildid .a20sound
 
 # baserelease defines which build revision of this kernel version we're
 # building.  We used to call this fedora_build, but the magical name
@@ -584,6 +584,20 @@ Patch503: drm-i915-turn-off-wc-mmaps.patch
 
 Patch904: kdbus.patch
 
+Patch1001: 0001-clk-Add-a-basic-factor-clock.patch
+Patch1002: 0002-clk-sunxi-Add-a-driver-for-the-PLL2.patch
+Patch1003: 0003-clk-sunxi-codec-clock-support.patch
+Patch1004: 0004-clk-sunxi-mod1-clock-support.patch
+Patch1005: 0005-ARM-sunxi-Add-PLL2-support.patch
+Patch1006: 0006-ARM-sunxi-Add-codec-clock-support.patch
+Patch1007: 0007-ARM-sun7i-Add-mod1-clock-nodes.patch
+Patch1008: 0008-dma-sun4i-Add-support-for-the-DMA-engine-on-sun-457-.patch
+Patch1009: 0009-ASoC-sunxi-add-support-for-the-on-chip-codec-on-earl.patch
+Patch1010: 0010-ARM-sun7i-dt-Add-sunxi-codec-device-node.patch
+Patch1011: 0011-ARM-sun7i-dt-enable-audio-codec-on-Cubietruck.patch
+Patch1012: 0012-codec-changes-for-Mele-A1000.patch
+Patch1013: 0013-Add-cubieboard2-audio-codec.patch
+
 # END OF PATCH DEFINITIONS
 
 %endif

Cubietruck

GUADEC 2015 badges

Just one week to GUADEC 2015 in Gothenburg! 7 days!!!

I hope you are all as excited as we are in the local organizing team.

We made a nice badge if you want to blog about the conference. Grab it here and if you are quick, you might be the first person to use it!

And don’t forget to register! https://registration.guadec.org/

See you all in a bit.


gtkmm now uses C++11

Switching to C++11

All the *mm projects now require C++11. Current versions of g++ require you to use the --std=c++11 option for this, but the next version will probably use C++11 by default. We might have done this sooner if it had been clearer that g++ (and libstdc++) really, really supported C++11 fully.

I had expected that switching to C++11 would require an ABI break, but that has not happened, so already-built applications will not be affected. But our API now requires C++11 so this is a minor API change that you will notice when rebuilding your application.

Some distros, such as Fedora, are breaking the libstdc++ ABI slightly and requiring a rebuild of all applications, but that would have happened even without the *mm projects moving to C++11. It looks like Ubuntu might be doing this too, so I am still considering taking advantage of a forced (not gtkmm’s fault) widespread ABI break to make some ABI-breaking changes in gtkmm.

C++11 with autotools

You can use C++11 in your autotools-based project by calling AX_CXX_COMPILE_STDCXX_11() in your configure.ac after copying that m4 macro into your source tree. For instance, I used AX_CXX_COMPILE_STDCXX_11() in glom. The *mm projects use the MM_AX_CXX_COMPILE_STDCXX_11() macro that we added to mm-common, to avoid copying the .m4 file into every project. You may use that in your application instead. For instance, we used MM_AX_CXX_COMPILE_STDCXX_11() in the gtkmm-documentation module.
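
For instance, the relevant lines of a configure.ac could look something like this (a minimal sketch; the [noext] and [mandatory] arguments are one possible choice):

AC_PROG_CXX
AX_CXX_COMPILE_STDCXX_11([noext], [mandatory])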

C++11 features

So far, the use of C++11 in gtkmm doesn’t provide much benefit to application developers and you can already use C++11 in applications that use older versions of gtkmm. But it makes life nicer for the gtkmm developers themselves. I’m enjoying learning about the new C++11 features (particularly move constructors) and enjoying our discussions about how best to use them.

I’m reading and re-reading Scott Meyers’ Effective Modern C++ book. C++11’s rvalue references alone require great care and understanding.

For now, we’ve just made these changes to the *mm projects:

  • Using auto to simplify the code.
    For instance,
    auto builder = Gtk::Builder::create();
  • Using range-based for-loops.
    For instance,
    for(const auto& row : model->children()) { … }
  • Using nullptr instead of 0 or (void*)0.
    For instance,
    Gtk::Widget* widget = nullptr;
  • Using the override keyword when we override a virtual method.
    For instance,
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override;
  • Using noexcept instead of throw().
    For instance,
    virtual ~Exception() noexcept;
  • Using “= delete” instead of private unimplemented copy constructors and operator=(). (See the example just after this list.)
  • Using C++11 lambdas, instead of sigc::slots, for small callback methods.
    See below.
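
For instance, for the “= delete” case, making a class non-copyable now looks something like this (an illustrative sketch, not a verbatim gtkmm header):

class Widget
{
public:
  Widget(const Widget& src) = delete;
  Widget& operator=(const Widget& src) = delete;
};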

libsigc++ with C++11

libsigc++ has also moved to C++11 and we are gradually trying to replace as much as possible of its internals with C++11. Although C++11 has std::function, there’s still no C++11 equivalent for libsigc++ signals and object tracking.

You can use C++11 lambda functions with libsigc++. For instance, with glibmm/gtkmm signals:

button.signal_clicked().connect(
  [] () {
    std::cout << "clicked" << std::endl;
  }
);

And now you don’t need the awkward SIGC_FUNCTORS_DEDUCE_RESULT_TYPE_WITH_DECLTYPE macro if the signal/slot returns a value. For instance:

m_tree_model_filter->set_visible_func(
  [this] (const Gtk::TreeModel::const_iterator& iter) -> bool
  {
    auto row = *iter;
    return row[m_columns.m_col_show];
  }
);

With C++14 that should be even nicer:

m_tree_model_filter->set_visible_func(
  [this] (auto iter) -> decltype(auto)
  {
    auto row = *iter;
    return row[m_columns.m_col_show];
  }
);

These -> return type declarations are necessary in these examples just because of the unusual intermediate type returned by row[].

Some GTK+ sightings

I had a chance to demonstrate some GTK+ file chooser changes that have accumulated in the last year, so I thought I should share some of this material here.

All the screenshots here are of the testfilechooser application in GTK+ master as of today (some bugs were found and fixed in the process).

Search

Search in the file chooser is an area that I have spent a bit of time on myself this cycle. We’ve improved the internals of the search implementation to match the sophistication of nautilus:

  • The current folder is already loaded, so we search through that data without any extra IO.
  • We ask tracker (or the platform’s native indexer) for search results.
  • Locations that are not covered by that, we crawl ourselves, skipping remote locations to avoid excessive network traffic.

The easiest way to start a search is to just type a few characters – the search entry will appear and the search will begin (you can of course also use the Search button in the header, or hit Ctrl-F to reveal the search bar).
If you type a character that looks like the beginning of a path (~, / or .), we bring up the location entry instead to let you enter a location.

Note that we show helpful hints in the subtitle in the header: if you are searching, we tell you where. If you are expected to enter a URL, we tell you that.
For search results, we show a location column that helps to determine quickly where a result comes from – results are sorted so that results from the current folder come first. Recent files also have a location column. The formatting of the modification time column has been improved, and it can optionally show times in addition to dates.

As you can also see here, the context menu in the file list (as well as the one in the sidebar) has been changed to a popover. The main motivation for this is that we can now trigger it with a long press on touch screens, which does not work well with a menu.

If the search takes longer than a few moments, we show a spinner. Hitting Escape will stop the search. Hitting it again will hide the search entry. Hitting it one more time will close the dialog.

If the search comes up empty, we tell you about it.

As I already mentioned, we don’t crawl remote locations (and tracker doesn’t index them either). But we still get results for the current folder. The footer below the list informs you about this fact.

Sidebar

The GtkPlacesSidebar has been shared between nautilus and the file chooser for a few years now. This cycle, it was rewritten to use a GtkListBox instead of a GtkTreeView. This has a number of advantages: we can use real widgets for the rows, and things like the eject button are properly themeable and accessible.

Another aspect that was improved in the sidebar is the drag-and-drop of files to create bookmarks. We now show a clear drop location and gray out all the rest. Some of these changes are a direct result of user testing on nautilus that happened last year.

Places

The sidebar used to list all your removable devices, remote mounts, as well as special items for ‘Enter Location’, etc. To prevent the list from getting too long, we have moved most of these to a new view, and just have a single “Other Locations” item in the sidebar to go there.
As you can see, the places view also has enough room to offer ‘Connect to Server’ functionality.

It has completion for known server locations.

We show progress while connecting to a server.

And after the connection succeeded, the location shows up under ‘Networks’ in the list, from where it can be unmounted again.

The recent server locations are also available in a popover.

If you don’t have any, we tell you so.

Save mode

All of the improvements so far apply to both Open mode and Save mode.

The name entry in Save mode has been moved to the header (if we have one).

For creating directories, we now use a popover instead of an embedded entry in the list.

This lets us handle invalid names in a nicer way.

Credits

All of these changes will appear in GTK+ 3.18 in September. And we are not quite done yet – we may still get a modernized path bar this cycle, if everything works out.

The improvements that I have presented here are not all my work. A lot of credit goes to Allan Day, Carlos Soriano, Georges Basile Stavracas Neto, and Arc Riley. Buy them a drink if you meet them!

July 29, 2015

Caching DNF updates

For the past couple of years, I've been running Fedora. I'm not a distro-crusader, so take this for what it's worth.

I recently moved out of San Francisco and into the forest. (more on that later!) One of the side effects of such a move is that internet service is back onto copper lines and analog repeaters. But given that rent is half the cost of San Francisco and I get to live right near all those fantastic organic farms, surrounded by redwoods, I'll take it.

I have lots of little machines always fighting for DNF header updates, delta rpms, and worse, firefox/libreoffice/kernel updates. So I decided to kick it old school and set up a squid caching proxy.

Basically, somewhere on your home network do the following:

  • Install squid (sudo dnf install squid).
  • Edit /etc/squid/squid.conf and add the following.
# 10 GB of cache. Make sure you have that much space or change it.
cache_dir ufs /var/spool/squid 10000 16 256

# Allow a single object to be cached up to 500MB.
# I actually run this larger so I cache ISOs occasionally.
maximum_object_size 500 MB
  • sudo service squid restart
  • Edit /etc/dnf/dnf.conf on your client machines and add:
proxy=http://proxy-ip:3128/
  • Edit /etc/yum.repos.d/*.repo and replace the metalink= lines with baseurl= ones. This is required so that we always resolve to the same mirror; otherwise the cache simply won't work. Bummer, I know. I use fedora.osuosl.org, but that likely isn't near you. The URLs will be slightly different for any mirror/repo combination you have.
baseurl=http://fedora.osuosl.org/linux/releases/$releasever/Everything/$basearch/os/

The first update for me is still going to be rather slow, but then all the follow-up updates are nice and snappy.
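
If you want to verify that the cache is actually being hit, you can watch squid's access log on the proxy box while a client runs an update; cached objects show up as TCP_HIT entries (assuming the default log location):

sudo tail -f /var/log/squid/access.log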

Personally, this just furthers my desire for a distro that does peer-to-peer updates on local network segments. But that is a story for another day.

Announcing systemd.conf 2015

We are happy to announce the inaugural systemd.conf 2015 conference of the systemd project.

The conference takes place November 5th-7th, 2015 in Berlin, Germany.

Only a limited number of tickets are available, hence make sure to sign up quickly.

For further details consult the conference website.

I'm going to GUADEC 2015!

Hi people! I just wanted to tell you that I'm going to GUADEC 2015 in Gothenburg, thanks to the great help of the GNOME Foundation, which is sponsoring a nice part of my travel expenses!

This is going to be my first GUADEC, my first time in Sweden and my first time on the other side of the Ocean. I know I'm going to have a great time with all the GNOMErs and with the Swedish culture. I'm finally going to meet Jonas, Mattias, Zeeshan and all the people who helped me to enjoy my favorite job (GSoC 2014), that's priceless.

See you in Gothenburg!

coala wombat

Hi everyone!

We proudly announce the new release of your favorite code analysis framework coala: coala 0.2 – wombat.

Here are some of the major new features:

  • We now have a D-Bus interface: `coala-dbus` fires up an API that external applications can use.
  • We have a JSON export for all results and log messages.
  • A sublime plugin (https://github.com/coala-analyzer/coala-sublime).
  • A few new analysis routines: a unique code clone detection for C that can detect more than just simply copied code, PyLint integration, as well as spell and grammar checking via LanguageTool.
  • Windows and Mac support.

With 1001 commits since the last release, there’s of course a lot more to see! Take a look at our release notes for more information.

The release can be found here:
– On GitHub (including our awesome ASCII release logo):
https://github.com/coala-analyzer/coala/releases/tag/0.2.0
– On pypi:
https://pypi.python.org/pypi/coala/0.2.0

Thanks to everyone who helped with this release! Special thanks to our new maintainer Mischa Krüger and our GSoC students Abdeali Kothari (who made the Sublime plugin for fun) as well as Udayan Tandon (who is constantly updating you with tutorials on doing cool GTK stuff at https://udayantandon.wordpress.com/).

A custom searchbar in Gtk and python

I wanted the following features from my searchbar:

  1. It should be in place where the toolbar is in a window.
  2. It should not always be visible, only when the user wants to search something.
  3. It should pop up as soon as the user starts typing in the window.
  4. It should pop up when the user presses Ctrl+F.
  5. It should hide itself, and the search should be invalidated, when the user presses Escape.
  6. It should be toggleable through a button.
  7. It should associate to a window.

Now, the searchbar that I wrote has a few requirements, or a design paradigm if you must:

  1. The window has to implement its own refilter() function, which is called when the search term changes. This should refresh the filter functions on all the views that the search applies to.
  2. The window implements its own filter functions on its views.

Here is the code:

from gi.repository import Gtk, GLib, Gdk


class Searchbar(Gtk.Revealer):
    __gtype_name__ = 'Searchbar'

    def __init__(self, window=None, search_button=None):
        Gtk.Revealer.__init__(self)

        self._searchContainer = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL,
                                        halign=Gtk.Align.CENTER)

        self._search_entry = Gtk.SearchEntry(max_width_chars=45)
        self._search_entry.connect("search-changed", self.on_search_changed)
        self._search_entry.show()
        self._searchContainer.add(self._search_entry)

        self._searchContainer.show_all()
        toolbar = Gtk.Toolbar()
        toolbar.get_style_context().add_class("search-bar")
        toolbar.show()
        self.add(toolbar)

        item = Gtk.ToolItem()
        item.set_expand(True)
        item.show()
        toolbar.insert(item, 0)
        item.add(self._searchContainer)

        self.window = window
        self.search_button = search_button
        self.show = False

    def on_search_changed(self, widget):
        self.window.refilter(self._search_entry.get_text())

    def set_window(self, window, search_button):
        self.window = window
        self.search_button = search_button
        self.window.connect("key-press-event", self._on_key_press)
        self.window.connect_after("key-press-event", self._after_key_press)

    def show_bar(self, show):
        self.show = show
        if not self.show:
            self._search_entry.set_text('')
        self.set_reveal_child(show)

    def toggle_bar(self):
        self.show_bar(not self.get_child_revealed())

    def _on_key_press(self, widget, event):
        keyname = Gdk.keyval_name(event.keyval)

        if keyname == 'Escape' and self.search_button.get_active():
            if self._search_entry.is_focus():
                self.search_button.set_active(False)
            else:
                self._search_entry.grab_focus()
            return True

        if event.state & Gdk.ModifierType.CONTROL_MASK:
            if keyname == 'f':
                self.search_button.set_active(True)
                return True

        return False

    def _after_key_press(self, widget, event):
        if (not self.search_button.get_active() or
                not self._search_entry.is_focus()):
            if self._search_entry.im_context_filter_keypress(event):
                self.search_button.set_active(True)
                self._search_entry.grab_focus()

                # Text in entry is selected, deselect it
                l = self._search_entry.get_text_length()
                self._search_entry.select_region(l, l)

                return True

        return False
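
Here is a minimal usage sketch (hypothetical wiring: window is assumed to be a window class that implements refilter() as described above, and the searchbar is packed where the toolbar would normally go):

search_button = Gtk.ToggleButton()
searchbar = Searchbar()
searchbar.set_window(window, search_button)
search_button.connect(
    "toggled", lambda button: searchbar.show_bar(button.get_active()))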

For more info go to https://github.com/coala-analyzer/coala and look for the wip/udayan/gui2 branch.


Writing custom widgets in Gtk and python.

Many a time the default Gtk widgets do not do the job and you need a custom one with extra features. I could not find a concrete source that covers how to do this, hence here are the steps:

  1. Subclass a GtkWidget.
  2. Call the init method of the GtkWidget in the init method of your custom widget.
  3. Above all the methods, including the init method, define __gtype_name__. This variable will be the name of the custom widget you create and use in .ui files.
  4. Now import the widget object in the file where you define your application so that the name gets registered.
  5. Now you can use the widget in .ui files with the name you defined.

Here is an example. It also contains the definitions of some extra properties.

from gi.repository import Gtk, GObject


class coalaScrolledWindow(Gtk.ScrolledWindow):

    __gtype_name__ = 'coalaScrolledWindow'

    max_contentwidth = GObject.property(type=int,
                                        default=-1,
                                        flags=GObject.PARAM_READWRITE)
    max_contentheight = GObject.property(type=int,
                                         default=-1,
                                         flags=GObject.PARAM_READWRITE)

    def __init__(self, *args, **kwargs):
        Gtk.ScrolledWindow.__init__(self, *args, **kwargs)
        self.max_content_width = -1
        self.max_content_height = -1
        self.connect("notify::max-contentheight",
                     self.on_notify_max_height_changed)
        self.connect("notify::max-contentwidth",
                     self.on_notify_max_width_changed)

    # The notify handlers are not shown in the original post; these are
    # minimal stubs so the snippet runs as-is.
    def on_notify_max_height_changed(self, obj, param):
        self.max_content_height = self.get_property("max-contentheight")

    def on_notify_max_width_changed(self, obj, param):
        self.max_content_width = self.get_property("max-contentwidth")

To register it, you just need to import it somewhere.
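
Once that is done, a .ui file can refer to the widget by its type name, something like this (a minimal sketch; the id and property value are made up):

<object class="coalaScrolledWindow" id="scrolled_window">
  <property name="max-contentwidth">400</property>
</object>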

For further information, drop a comment or go to https://www.github.com/coala-analyzer/coala and check out the wip/udayan/gui2 branch.


See you in Gothenburg!

I'm glad to announce that I'm going to the 2015 edition of GUADEC!

I'm delighted to be able to attend GUADEC for the second time, as my first experience in Strasbourg was extremely enriching. I hope I'll see lots of known and new faces. =) As an intern, I'll give a lightning talk about what I've done so far for GNOME Boxes.

I would like to warmly thank the GNOME Foundation for sponsoring my travel, especially the donors, the GUADEC team and the travel committee.

It will be the first time I go to Sweden; I'm sure the trip will be fun.

Vi ses i Göteborg!

Books - July 2015

This July 2015 I travelled to the Red Hat office in Brno, Czech Republic to spend some time with my teammates there, and I managed to get a lot of reading done between long plane rides and being jet lagged for many nights :) So I finally managed to finish up some of the books that had been lingering on my ToDo list and even managed to finally read a few of the books that together make up the Chronicles of Narnia, since I had never read them as a kid.

Read

Out of all the books I read this month, I feel that All Quiet on the Western Front and The October Country were the ones I enjoyed reading the most, closely followed by Cryptonomicon, which took me a while to get through. The other books, with the exception of The Memoirs of Sherlock Holmes, helped me pass the time when I only wanted to be entertained.

All Quiet on the Western Front takes the prize for being one of the best books I have ever read! I felt that the way WWI was presented through the eyes of the main character was a great way to represent all the pain, angst and suffering that all sides of the conflict went through, without catering to any particular side or having an agenda. Erich Maria Remarque's style had me sometimes breathless, sometimes with a knot in the pit of my stomach as I 'endured' the many life-changing events that took place in the book. Is this an action-packed book about WWI? Will it read like a thriller? In my opinion, even though there are many chapters with gory details about killings and battles, the answer is a very bland 'maybe'. I think that the real 'star' of this book is its philosophical view of the war, and how the main characters, all around 19-20 years of age, learn to deal with its lifelong effects.

Now, I have been a huge fan of Ray Bradbury for a while, and when I got The October Country for my birthday last month, I just knew that reading it would be time well spent. For those of you who are more acquainted with his science fiction works, this book will surprise you, as it shows a bit of his 'darker' side. All of the short stories included in this collection deal with death, mysterious apparitions and inexplicable endings, and they are sure to spook you a little bit.

Cryptonomicon was at times slow, at other times funny and, especially toward the end, a very entertaining book. Weighing in at a hefty 1000 pages (depending on the edition you have, plus/minus 50-odd pages), this book covers two different periods, past (around WWII) and present, in the lives of a number of different characters, with all the different threads eventually leading to a great finale. Alternating between past and present, the story takes us to the early days of how cryptology was 'officially invented' and used during the war, and shows how many of the events that took place back then were affecting the lives of some of the direct descendants of the main characters in the present day. As you go through the back and forth, you start to gather bits and pieces of information that eventually connect all the dots of an interesting puzzle. It definitely requires a long-term commitment to get through it, but it was enjoyable and, as I mentioned before, it made me laugh in many places.

Books - June 2015

Those of you who know me know that I am a huge book reader and spend most of my free time reading several books at the same time. One could say that reading is one of my passions, and having wasted so many years after high school completely ignoring this passion (in exchange for spending most of my time trying to learn about Linux, get an education, a job and, let's be frank, chasing after girls), I decided that something had to be done about it, and starting around 2008 I 'forced' myself to dedicate at least one solid hour of reading for fun every day.

I find it funny to say that I had to force myself, but this statement is very much true. Being so used to spending all of my time sitting in front of a computer and getting flooded with information every single minute of the day (IRC, Twitter, Facebook, commit emails, RSS feeds, etc), I found it difficult to 'unplug' and spend time doing nothing but focusing on only one thing. I was so used to multitasking and being constantly bombarded with lots of information that sitting quietly and reading didn't feel very productive to me... sad but true.

Anyhow, after several 'agonizing' months of getting up from my desk and making a point of turning off my cell phone and finding a quiet place somewhere in the building (or at home during the weekends), I finally got into the habit of reading for pleasure. I actually looked forward to these reading periods (imagine that, huh?) and eventually I realized that if I skipped this 'ritual' even one day, my days felt like they got longer and I felt stressed out and irritable for the remainder of the day. Reading became not only a good habit but my mechanism for relaxing and recharging my energies during the day!

Well, this passion and appetite for reading has only gotten bigger, and with time I have to say that it has become a pretty big part of who I am today! In a way I am happy that it took me this long to get back into the habit of reading... I mean, I feel that getting older was an important part of preparing myself so that I could really appreciate John Steinbeck, Ray Bradbury and the likes of them! Would I have truly appreciated The Grapes of Wrath when I was younger? Perhaps... but it took me around 40 years to get to it, and I'm happy that when I did I was able to appreciate this amazing piece of art!

These last few months I decided that I wanted to start tracking all the books that I read, buy or receive as a gift every month (see my reading progress on GoodReads and add me as a friend), and jot down some of my impressions and motives for reading or buying them. Those familiar with Nick Hornby will probably associate this post (and hopefully others that will surely come) with the work he has done writing for the Believer Magazine ... and this would be correct. My intention is not to copy his style or anything like that, but I thought that the format he chose to report on his own reading 'adventures' would fit in quite nicely with what I wanted to get across to my readers... and I'm sticking with the format as long as it works for me :)

July 28, 2015

Difficult month with F-Spot

I'm having kind of a hard month with F-Spot.

I started by trying to find some way to fix one problem with photo thumbnails —they are regenerated every time F-Spot starts— and I found that the reason is the lack of the Gnome.Thumbnail class. That class is needed to get the correct path for storing thumbnails in a GNOME environment, and I couldn't figure out a way to get Gnome.Thumbnail using gtk-sharp-3.

So I switched to another problem, this time with F-Spot's fullscreen mode, which stopped working after the change from gtk-sharp-3 stable to the development version referenced in my last post. After some failed attempts to understand and find the problem using the debugger in MonoDevelop, I got it with the help of the GTK+ Inspector: playing with FullScreenView, the window used to paint the fullscreen mode interface, I found that changing the window opacity made it work, and then it was easy to find the problem, so I fixed it.

But then, when I restored the visibility of the fullscreen mode, I faced another problem with the scrollbars that I had been seeing before changing Mono from the stable to the development version: the first photo displayed when entering fullscreen mode sometimes has scrollbars, and it shouldn't, as it is displayed adjusted to fit. This time I've had no luck.

I've set that aside and started working on restoring the timeline view, which is what I'm doing right now. A few days ago I installed OpenSUSE 12.3 to test the timeline widget:

F-Spot 0.8.2 in OpenSUSE-12.3

The thing I have is a bit different right now...

F-Spot/gtk3 in Ubuntu 15.04


gtk-doc knows your package version

tl;dr: gtk-doc will now bind autoconf PACKAGE_* variables for use in your documentation.

For various modules which use gtk-doc, it’s a bit of a rite of passage to copy some build machinery from somewhere to generate a version.xml file which contains your package version, so that you can include it in your generated documentation (“Documenting version X of package Y”).

This is a bit of a pain. It doesn’t have to happen any longer.

gtk-doc master (to become 1.25) now generates an xml/gtkdocentities.ent file containing a few PACKAGE_* variables as generated by autoconf. So you can now get rid of your version.xml machinery and change your top-level documentation XML file's DOCTYPE to the following:

<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.3//EN"
   "http://www.oasis-open.org/docbook/xml/4.3/docbookx.dtd"
[
 <!ENTITY % local.common.attrib "xmlns:xi CDATA #FIXED 'http://www.w3.org/2003/XInclude'">
 <!ENTITY % gtkdocentities SYSTEM "xml/gtkdocentities.ent">
 %gtkdocentities;
]>

You can now use &package_string;, &package_version;, &package_url;, etc. in your documentation. See gtk-doc.make for a full list.
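
For instance, the body of your documentation could then contain something like this (a sketch using the entities mentioned above):

<bookinfo>
  <title>My Library Reference Manual</title>
  <releaseinfo>for &package_string;, version &package_version;</releaseinfo>
</bookinfo>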

Thanks to Stefan for getting this feature in!

loop optimizations in guile

Sup peeps. So, after the slog to update Guile's intermediate language, I wanted to land some new optimizations before moving on to the next thing. For years I've been meaning to do some loop optimizations, and I was finally able to land a few of them.

loop peeling

For a long time I have wanted to do "loop peeling". Loop peeling means peeling off the first iteration of a loop. If you have a source program that looks like this:

while foo:
  bar()
  baz()

Loop peeling turns it into this:

if foo:
  bar()
  baz()
  while foo:
    bar()
    baz()

You wouldn't think that this is actually an optimization, would you? Well on its own, it's not. But if you combine it with common subexpression elimination, then it means that the loop body is now dominated by all effects and all loop-invariant expressions that must be evaluated for the expression to loop.

In dynamic languages, this is most useful when one source expression expands to a number of low-level steps. So for example, if your language runtime implements top-level variable references in three parts, one where it gets a reference to a mutable box, a second where it checks if the box has a value, and a third where it unboxes it, then we would have:

if foo:
  bar_location = lookup("bar")
  bar_value = dereference(bar_location)
  if bar_value is null: throw NotFound("bar")
  call(bar_value)

  baz_location = lookup("baz")
  baz_value = dereference(baz_location)
  if baz_value is null: throw NotFound("baz")
  call(baz_value)

  while foo:
    bar_value = dereference(bar_location)
    call(bar_value)

    baz_value = dereference(baz_location)
    call(baz_value)

The result is that we have hoisted the lookups and null checks out of the loop (if a box can never transition from full back to empty). It's a really powerful transformation that can even hoist things that traditional loop-invariant code motion can't, but more on that later.

Now, the problem with loop peeling is that usually values will escape your loop. For example:

while foo:
  x = qux()
  if x then return x
...

In this little example, there is a value x, and the return x statement is actually not in the loop. It's syntactically in the loop, but the underlying representation that the compiler uses looks more like this:

function qux(k):
  label loop_header():
    fetch(foo) -> loop_test
  label loop_test(foo_value):
    if foo_value then -> exit else -> body
  label body():
    fetch(x) -> have_x
  label have_x(x_value):
    if x_value then -> return_x else -> loop_header
  label return_x():
    values(x) -> k
  label exit():
    ...

This is the "CPS soup" I described in my last post. Only the loop_header, loop_test, body and have_x labels are in the loop; notably, the return is outside the loop. Point being, if we peel off the first iteration, then there are two possible values for x that we would return:

if foo:
  x1 = qux()
  if x1 then return x1
  while foo:
    x2 = qux()
    if x2 then return x2
  ...

I have them marked as x1 and x2. But I've also duplicated the return x terms, which is not what we want. We want to peel off the first iteration, which will cause code growth equal to the size of the loop body, but we don't want to have to duplicate everything that's after the loop. What we have to do is re-introduce a join point that defines x:

if foo:
  x1 = qux()
  if x1 then join(x1)
  while foo:
    x2 = qux()
    if x2 then join(x2)
  ...
label join(x)
  return x

Here I'm playing fast and loose with notation because the real terms are too gnarly. What I'm trying to get across is that for each value that flows out of a loop, you need a join point. That's fine, it's a bit more involved, but what if your loop exits to two different points, but one value is live in both of them? A value can only be defined in one place, in CPS or SSA. You could replace a whole tree of phi variables, in SSA parlance, with join blocks and such, but it's just too hard.

However we can still get the benefits of peeling in most cases if we restrict ourselves to loops that exit to only one continuation. In that case the live variable set is the intersection of all variables defined in the loop that are live at the exit points. Easy enough, and that's what we have in Guile now. Peeling causes some code growth but the loops are smaller so it should still be a win. Check out the source, if that's your thing.

loop-invariant code motion

Usually when people are interested in moving code out of loops they talk about loop-invariant code motion, or LICM. Contrary to what you might think, LICM is complementary to peeling: some things that peeling+CSE can hoist are not hoistable by LICM, and vice versa.

Unlike peeling, LICM does not cause code growth. Instead, for each expression in a loop, LICM tries to hoist it out of the loop if it can. An expression can be hoisted if all of these conditions are true:

  1. It doesn't cause the creation of an observably new object. In Scheme, the definition of "observable" is quite subtle, so in practice in Guile we don't hoist expressions that can cause any allocation. We could use alias analysis to improve this.

  2. The expression cannot throw an exception, or the expression is always evaluated for every loop iteration.

  3. The expression makes no writes to memory, or if it writes to memory, other expressions in the loop cannot possibly read from that memory. We use effects analysis for this.

  4. The expression makes no reads from memory, or if it reads from memory, no other expression in the loop can clobber those reads. Again, effects analysis.

  5. The expression uses only loop-invariant variables.

This definition is inductive, so once an expression is hoisted, the values it defines are then considered loop-invariant, so you might be able to hoist a whole chain of values.

Compared to loop peeling, this has the gnarly aspect of having to explicitly reason about loop invariance and manually move code, which is a pain. (Really LICM would be better named "artisanal code motion".) However it causes no code growth, which is a plus, though like peeling it can increase register pressure. But the big difference is that LICM can hoist effect-free expressions that aren't always executed. Consider:

while foo:
  x = qux() ? "hi" : "ho"

Here for some reason it could be faster to cache "hi" or "ho" in registers, which is what LICM allows:

hi, ho = "hi", "ho"
while foo:
  x = qux() ? hi : ho

On the other hand, LICM alone can't hoist the if baz is null checks in this example from above:

while foo:
  bar()
  baz()

The issue is that the call to bar() might not return, so the error that might be thrown if baz is null shouldn't be observed until bar is called. In general we can't hoist anything that might throw an exception past some non-hoisted code that might throw an exception. This specific situation happens in Guile but there are similar ones in any language, I think.

More formally, LICM will hoist effectful but loop-invariant expressions that postdominate the loop header, whereas peeling hoists those expressions that dominate all back-edges. I think? We'll go with that. Again, the source.

loop inversion

Loop inversion is a little hack to improve code generation, and again it's a little counterintuitive. If you have this loop:

while n < x:
  n++

Loop inversion turns it into:

if n < x:
  do
    n++
  while n < x

The goal is that instead of generating code that looks like this:

header:
  test n, x;
  branch-if-greater-than-or-equal done;
  n = n + 1
  goto header
done:

You make something that looks like this:

  test n, x;
  branch-if-greater-than-or-equal done;
header:
  n = n + 1
  test n, x;
  branch-if-less-than header;
done:

The upshot is that the loop body now contains one branch instead of two. It's mostly helpful for tight loops.

It turns out that you can express this transformation on CPS (or SSA, or whatever), but that like loop peeling the extra branch introduces an extra join point in your program. If your loop exits to more than one label, then we have the same problems as loop peeling. For this reason Guile restricts loop inversion (which it calls "loop rotation" at the moment; I should probably fix that) to loops with only one exit continuation.

Loop inversion has some other caveats, but probably the biggest one is that in Guile it doesn't actually guarantee that each back-edge is a conditional branch. The reason is that usually a loop has some associated loop variables, and it could be that you need to reshuffle those variables when you jump back to the top. Mostly Guile's compiler manages to avoid shuffling, allowing inversion to have the right effect, but it's not guaranteed. Fixing this is not straightforward, since the shuffling of values is associated with the predecessor of the loop header and not the loop header itself. If instead we reshuffled before the header, that might work, but each back-edge might have a different shuffling to make... anyway. In practice inversion seems to work out fine; I haven't yet seen a case where it doesn't work. Source code here.

loop identification

One final note: what is a loop anyway? Turns out this is a somewhat hard problem, especially once you start trying to identify nested loops. Guile currently does the simple thing and just computes strongly-connected components in a function's flow-graph, and says that a loop is a non-trivial SCC with a single predecessor. That won't tease apart loop nests but oh wells! I spent a lot of time last year or maybe two years ago with that "Loop identification via D-J graphs" paper but in the end simple is best, at least for making incremental steps.

Okeysmokes, until next time, loop on!

July 27, 2015

cps soup

Hello internets! This blog goes out to my long-time readers who have followed my saga hacking on Guile's compiler. For the rest of you, a little history, then the new thing.

In the olden days, Guile had no compiler, just an interpreter written in C. Around 8 years ago now, we ported Guile to compile to bytecode. That bytecode is what is currently deployed as Guile 2.0. For many reasons we wanted to upgrade our compiler and virtual machine for Guile 2.2, and the result of that was a new continuation-passing-style compiler for Guile. Check that link for all the backstory.

So, I was going to finish documenting this intermediate language about 5 months ago, in preparation for making the first Guile 2.2 prereleases. But something about it made me really unhappy. You can catch some foreshadowing of this in my article from last August on common subexpression elimination; I'll just quote a paragraph here:

In essence, the scope tree doesn't necessarily reflect the dominator tree, so not all transformations you might like to make are syntactically valid. In Guile 2.2's CSE pass, we work around the issue by concurrently rewriting the scope tree to reflect the dominator tree. It's something I am seeing more and more and it gives me some pause as to the suitability of CPS as an intermediate language.

This is exactly the same concern that Matthew Fluet and Stephen Weeks had back in 2003:

Thinking of it another way, both CPS and SSA require that variable definitions dominate uses. The difference is that using CPS as an IL requires that all transformations provide a proof of dominance in the form of the nesting, while SSA doesn't. Now, if a CPS transformation doesn't do too much rewriting, then the partial dominance information that it had from the input tree is sufficient for the output tree. Hence tree splicing works fine. However, sometimes it is not sufficient.

As a concrete example, consider common-subexpression elimination. Suppose we have a common subexpression x = e that dominates an expression y = e in a function. In CPS, if y = e happens to be within the scope of x = e, then we are fine and can rewrite it to y = x. If however, y = e is not within the scope of x, then either we have to do massive tree rewriting (essentially making the syntax tree closer to the dominator tree) or skip the optimization. Another way out is to simply use the syntax tree as an approximation to the dominator tree for common-subexpression elimination, but then you miss some optimization opportunities. On the other hand, with SSA, you simply compute the dominator tree, and can always replace y = e with y = x, without having to worry about providing a proof in the output that x dominates y (i.e. without putting y in the scope of x)

[MLton-devel] CPS vs SSA

To be honest I think all this talk about dominators is distracting. Dominators are but a lightweight flow analysis, and I usually find myself using full-on flow analysis to compute the set of optimizations that I can do on a piece of code. In fact the only use I had for dominators in the nested CPS language was to rewrite scope trees! The salient part of Weeks' observation is that nested scope trees are the problem, not that dominators are the solution.

So, after literally years of hemming and hawing about this, I finally decided to remove nested scope trees from Guile's CPS intermediate language. Instead, a function is now a collection of labelled continuations, with one distinguished entry continuation. There is no more $letk term to nest continuations in each other. A program is now represented as a "soup" -- basically a map from labels to continuation bodies, again with a distinguished entry. As an example, consider this expression:

function(x):
  return add(x, 1)

If we rewrote it in continuation-passing style, we'd give the function a name for its "tail continuation", ktail, and annotate each expression with its continuation:

function(ktail, x):
  add(x, 1) -> ktail

Here the -> ktail means that the add expression passes its values to the continuation ktail.

With nested CPS, it could look like:

function(ktail, x):
  letk have_one(one): add(x, one) -> ktail
    load_constant(1) -> have_one

Here the label have_one is in a scope, as is the value one. With "CPS soup", though, it looks more like this:

function(ktail, x):
  label have_one(one): add(x, one) -> ktail
  label main(x): load_constant(1) -> have_one

It's a subtle change, but it took a few months to make so it's worth pointing out what's going on. The difference is that there is no scope tree for labels or variables any more. A variable can be used at a label if it flows to the label, in a flow analysis sense. Indeed, determining the set of variables that can be used at a label requires flow analysis; that's what Weeks was getting at in his 2003 mail about the advantages of SSA, which are really the advantages of an intermediate language without nested scope trees.

The question arises, though, now that we've decided on CPS soup, how should we represent a program as a value? We've gone from a nested term to a graph term, and we need to find a way to represent it somehow that facilitates looking up labels by name, and facilitates tree rewrites.

In Guile's IR, labels and variables are both integers, so happily enough, we have such a data structure: Clojure-style maps specialized for integer keys.

Friends, if there has been one realization or revolution for me in the last year, it has been Clojure-style data structures. Here's why. In compilers, I often have to build up some kind of analysis, then use that analysis to transform data. Often I need to keep the old term around while I build a new one, but it would be nice to share state between old and new terms. With a nested tree, if a leaf changed you'd have to rebuild all surrounding terms, which is gnarly. But with Clojure-style data structures, more and more I find myself computing in terms of values: build up this value, transform this map to that set, fold over this map -- and yes, you can fold over Guile's intmaps -- and so on. By providing an expressive data structure for which I can control performance characteristics by using transients if needed, these data structures make my programs more about data and less about gnarly machinery.

As a concrete example: with the old contification pass in Guile, I didn't have the mental capacity to understand all the moving parts in such a way that I could compute an optimal contification from the beginning; instead I had to iterate to a fixed point, as Kennedy did in his "Compiling with Continuations, Continued" paper. With the new CPS soup language and with Clojure-style data structures, I could actually fit more of the algorithm into my head, with the result that Guile now contifies optimally while avoiding the fixed-point transformation. Also, the old pass used hash tables to represent the analysis, which I found incredibly confusing to reason about -- I totally buy Rich Hickey's argument that place-oriented programming is the source of many evils in programs, and hash tables are nothing if not a place party. Using functional maps let me solve harder problems because they are easier for me to reason about.

Contification isn't an isolated case, either. For example, we are able to do the complete set of optimizations from the "Optimizing closures in O(0) time" paper, including closure sharing, which I think makes Guile unique besides Chez Scheme. I wasn't capable of doing it on the old representation because it was just too hard for me to think about, because my data structures weren't right.

This new "CPS soup" language is still a first-order CPS language in that each term specifies its continuation, and that variable names appear in the continuation of a definition, not the definition itself. This effectively makes every variable a phi variable, in the sense of SSA, and you have to do some work to get to a variable's definition. It could be that still this isn't the right number of names; consider this function:

function foo(k, x):
  label have_y(y) bar(y) -> k
  label y_is_two() load_constant(2) -> have_y
  label y_is_one() load_constant(1) -> have_y
  label main(x) if x -> y_is_one else -> y_is_two

Here there is no distinguished name for the value load_constant(1) versus load_constant(2): both are possible values for y. If we ended up giving them names, we'd have to reintroduce actual phi variables for the joins, which would basically complete the transformation to SSA. Until now though I haven't wanted those names, so perhaps I can put this off. On the other hand, every term has a label, which simplifies many things compared to having to contain terms in basic blocks, as is usually done in SSA. Yet another chapter in CPS is SSA is CPS is SSA, it seems.

Welp, that's all the nerdery for right now. Talk at yall later!

new release of zanata-python-client - v1.3.22

This blog post is a short note about the new release of zanata-python-client.

Zanata is an open source platform for localisation, used by Fedora and OpenStack. Zanata provides a REST API so that developers can interact with the Zanata server.

I have been maintaining zanata-python-client, and recently some developers requested a few small features. This release contains:
  • Improved error codes: in case of error messages, z-p-c now returns appropriate error codes to the system. This is useful for people who use z-p-c from the command line.
  • Bug fixes.
I have updated z-p-c in Fedora. Please feel free to play with it and report bugs.

Sun 2015/Jul/26

  • An inlaid GNOME logo, part 5

    Esta parte en español

    (Parts 1, 2, 3, 4)

    This is the shield right after it came out of the clamps. I had to pry it a bit from the clamped board with a spatula.

    Unclamped shield

    I cut out the shield shape by first sawing the straight sections, and then using a coping saw on the curved ones.

    Sawing straight edges

    Coping the curves

    All cut out

    I used a spokeshave to smooth the convex curves on the sides.

    Spokeshave for the curves

    The curves on the top are concave, and the spokeshave doesn't fit. I used a drawknife for those.

    Drawknife for the tight curves

    This gives us crisp corners and smooth curves throughout.

    Crisp corner

    On to planing the face flat! I sharpened my plane irons...

    Sharp plane iron

    ... and planed carefully. The cutoff from the top of the shield was useful as a support against the planing stop.

    Starting to plane the shield

    The foot shows through once the paper is planed away...

    Foot shows through the paper

    Check out the dual-color shavings!

    Dual-color shavings

    And we have a flat board once again. That smudge at the top of the sole is from my dirty fingers — dirty with metal dust from the sharpening step — so I washed my hands and planed the dirt away.

    Flat shield

    The mess after planing

    But it is too flat. So, I scribed a line all around the front and edges, and used the spokeshave and drawknife again to get a 45-degree bevel around the shield. The line is a bit hard to see in the first photo, but it's there.

    Scribed lines for bevel

    Beveling with a spokeshave

    Final bevel around the shield

    Here is the first coat of boiled linseed oil after sanding. When it dries I'll add some coats of shellac.

    First coat of linseed oil

July 26, 2015

A brief summary for the second step of GSoC 2015

The second step finished on June 26, but I had several exams in early July, then went to Beijing to apply for the visa for GUADEC and travelled a bit there, so I have been a little busy these past few days. Now I'd like to spend some time writing this blog post and sharing what I learned during the second step of my plan.

The commit for this step can be found here.

The job of the second step was to add to Logs the ability to view logs from different boots. Previously, Logs could only show the user logs from the current boot. What I did was add an option for users to view logs from previous boots. The pictures below show the difference.

Before patch:

After patch:

You can see the icon and the menu list beside the search icon in the second picture. There, users can select among the latest five boots. Yep, right now we chose to give users the option of choosing among the latest five boots; later on, viewing all boots will be available.

There are two things I want to mention here. The first is how to add a GMenu with a GtkMenuButton. The second is how to get all boot IDs in code.


  • How to add a GMenu with a GtkMenuButton
We use GtkMenuButton as the menu button shown in the UI. Then we need to set the GMenuModel from which the popup will be constructed. Here we use GMenu as the model. Suppose we have the following declarations:

GMenu *boot_menu;
GMenuItem *item;

We create a new GMenu using

boot_menu = g_menu_new ();

Then we create items in the menu using

item = g_menu_item_new (...);


And append these items into boot_menu using

g_menu_append_item (boot_menu, item);

After appending all the items into boot_menu, it's time to set GtkMenuButton's GMenuModel using

gtk_menu_button_set_menu_model (GTK_MENU_BUTTON (priv->menu_button),
                                G_MENU_MODEL (boot_menu));

And it will be done.
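
Putting the pieces together, a minimal sketch could look like this (the label and detailed action name are made up; error handling is elided):

GMenu *boot_menu;
GMenuItem *item;

boot_menu = g_menu_new ();

/* One entry per boot; the label and action here are hypothetical. */
item = g_menu_item_new ("Boot 1", "win.select-boot::1");
g_menu_append_item (boot_menu, item);
g_object_unref (item);

gtk_menu_button_set_menu_model (GTK_MENU_BUTTON (priv->menu_button),
                                G_MENU_MODEL (boot_menu));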


  • How to get all boot IDs in code
I found this code in the source of journalctl. It uses the iteration macro shown below:

SD_JOURNAL_FOREACH_UNIQUE (sd_journal *journal, const void *data, size_t length);

In the body of this loop, the boot match is saved into data. You can use the following code to convert it to an sd_id128_t ID.

r = sd_id128_from_string (((const char *)data) + strlen ("_BOOT_ID="), &id);

Because the boot match is in the form "_BOOT_ID=......" and we need to make data point at the start of the actual boot ID, we advance data by the length of "_BOOT_ID=".
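
Putting it all together, the loop could look something like this (a sketch with error handling mostly elided; sd_journal_query_unique() selects the field to iterate over):

const void *data;
size_t length;
sd_id128_t id;
int r;

r = sd_journal_query_unique (journal, "_BOOT_ID");
if (r < 0)
    return r;

SD_JOURNAL_FOREACH_UNIQUE (journal, data, length)
  {
    r = sd_id128_from_string (((const char *) data) + strlen ("_BOOT_ID="), &id);
    if (r < 0)
      continue;
    /* id now holds one boot ID; collect it for the menu. */
  }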

BTW, the visa application result will probably be out tomorrow. I hope everything goes fine and I can go to Sweden to meet you!

July 25, 2015

Useful DuckDuckGo bangs

DuckDuckGo bangs are just shortcuts to redirect your search to another search engine. My personal favorites:

  • !gnomebugs — Runs a search on GNOME Bugzilla. Especially useful followed by a bug number. For example, search for '!gnomebugs 100000' and see what you get.
  • !wkb — Same thing for WebKit Bugzilla.
  • !w — Searches Wikipedia.

There are 6,388 more, but those are the three I can remember. If you work on GNOME or WebKit, these are super convenient.

July 24, 2015

GSoC 2015 Report #3 - New GPG library for Python is ready!

Hi people,

GnomeKeysign now has a new library that can call GnuPG from Python.

This was a key point in making GnomeKeysign 'GNOME ready', because our previous GPG library made the project less distributable, and it also used GTK2.

The library uses pygpgme to interact with GnuPG and, beyond what pygpgme does, it also offers some important functionality for us, such as:

  • set up temporary context objects (like keyrings) which don't mess with user's gnupg home
  • import keys from Unicode data that we receive over the network
  • sign only a specific UID of a key in order to email back the key signed to that UID
  • sign and encrypt keydata that is going to be sent as email attachment back to the key owner
  • helper functions to export keydata, extract the fingerprint of a key, and format a key for display in the GUI

Of course it will take a few weeks to refine this library, but I'm very optimistic that in the end people will be able to use it whenever they want to work with GnuPG from Python.

I had to overcome some limitations in pygpgme but I'm glad that I managed to do this.

I'm happy to come to GUADEC again this year, after attending for the first time in 2014.
Thanks GNOME for sponsoring me!



Fri 2015/Jul/24

  • An inlaid GNOME logo, part 4

    This part in Spanish

    (Parts 1, 2, 3)

    In the last part, I glued the paper templates for the shield and foot onto the wood. Now comes the part that is hardest for me: excavating the foot pieces in the dark wood so the light-colored ones can fit in them. I'm not a woodcarver, just a lousy joiner, and I have a lot to learn!

    The first part is not a problem: use a coping saw to separate the foot pieces.

    Foot pieces, cut out

    Next, for each part of the foot, I started with a V-gouge to make an outline that will work as a stop cut. Inside this shape, I used a curved gouge to excavate the wood. The stop cut prevents the gouge from going past the outline. Finally, I used the curved gouge to get as close as possible to the final line.

    V channel as a stop cut Excavating inside the channel

    Each wall needs squaring up, as the curved gouge leaves a chamfered edge instead of a crisp angle. I used the V-gouge around each shape so that one of the edges of the gouge remains vertical. I cleaned up the bottom with a combination of chisels and a router plane where it fits.

    Square walls

    Then, each piece needs to be adjusted to fit. I sanded the edges to get a nice curve instead of the raw edges from the coping saw. Then I put a back bevel on each piece, using a carving knife, so the back will be narrower than the front. I also had to tweak the walls in the dark wood in some places.

    Unadjusted piece Sanding the curves Beveling the edges

    After a lot of fiddling, the pieces fit — with a little persuasion — and they can be glued. When the glue dries I'll plane them down so that they are level to the dark wood.

    Gluing the pieces Glued pieces

    Finally, I clamped everything against another board to distribute the pressure. Let's hope for the best.

    Clamped

TDD, Unit Tests & Liblarch. Lesson -- learnt!

Yet another week of my GSoC experience has passed by and... I must admit I learnt a lot.

Over the course of the past few days I have been exposed to many excellent programming practices, which I hope I'll remember from now on. Furthermore, I have written (although with serious input and advice from my mentor) my first serious, real unit test, for a refactored feature in GTG's liblarch.

But let's start from the beginning. I began the week with some slight fine-tuning of last week's accomplishment: the open_parent popover. I designed the window so that if a task has no parent, the button becomes disabled. This seemed perfectly natural and logical to me. If, on the other hand, I had opted for removing the button each time it was not necessary, I would have had to cope with alternating positioning and shifting of widgets in my toolbar, and that might have been a painful process. I therefore think this is an elegant way of not leading the user towards a button that bears no action.
For a top-level task with no parents like this one, the parent button is now disabled.
Afterwards, I moved on to finishing the search function PR. A few things remained here, namely refactoring the code to polish the quick-add feature. Instead of having two functions, opening and adding tasks, we decided to reduce this to pure adding (well... quick-ADD, right?). This meant simplifying the code very significantly and getting rid of the rather ugly "options popover" which used to pop up as you typed in the entry, with two options: Add | Open.
This was a fairly simple task. I believe it will lead to code that is easier for future contributors to adopt, and it makes the app faster, too.
Search is now shown after clicking the search button. Quick-add was moved to the bottom of the window.
However, during my work on implementing the new search feature and refactoring quick-add, we made a change which required adjustments in liblarch. The search PR would not be merged without this being fixed, and so another task was clear.

So I started to think about how and what to do. I learnt the basics of test-driven development. Accordingly, the first step before amending the liblarch code was to write a test that would show the flaws in its current version. The problem was that after applying a filter to a tree of tasks using some parameters, rather than purely a filter, the tree would not refilter/refresh properly. It took me some time to understand how unit tests work and what their core parts are. However, after some two days or so I had a test that showed exactly what I wanted it to show, and so the code changes to repair the error could begin. Having come across several other setbacks on my way towards final success, I struggled every now and then and felt demotivated or frustrated too. But this morning I finally finished my work on this feature as well. Since it was a prerequisite for the search feature, I hope it will be merged soon, followed by the search PR and the parent enhancement PR.

As a result, all users of the master version of GTG will encounter some nice new features in their app soon, and I will be thrilled to hear some feedback or reviews about what is good and what, on the other hand, might have been done better.

Speaking of my plans, I will now return to one of my major reworks of this summer in Getting Things GNOME!: the preferences window. I hope to finish the changes to the general preferences window by the middle of next week, and hopefully everything will go smoothly. I might write some further unit tests for some of the features here too, so it will be fun! Afterwards, there are two other keystones for preferences: moving the plugins and synchronisation settings into this one unified window.
The plan is clear, now back to work for a few more hours before the weekend rest.

I'll be glad to read some comments from you, as to how you perceive my activity :)

July 23, 2015

ReadableStream almost ready

Hello dear readers! Long time no see! You might think that I have been lazy, and I was when it comes to blog posting, but I have been coding like mad.

The first remarkable thing is that I attended the WebKit Contributors Meeting that happened in March at the Apple campus in Cupertino, as part of the Igalia gang. There we discussed, of course, the Streams API: its state and the different implementation possibilities. Another very interesting point, one which would make me very happy, would be the movement of the Mac port to CMake.

In a previous post I already introduced the concepts of the Streams API and some of its possible use cases, so I'll save you that part now. The news is that ReadableStream has its basic functionality complete. And what does that mean? It means that you can create a ReadableStream by providing the constructor with the underlying source and the strategy objects, then read from it with its reader, and all the internal mechanisms of backpressure and so on will work according to the spec. Yay!

Nevertheless, there’s still quite some work to do to complete the implementation of the Streams API, like the implementation of byte streams, writable and transform streams, piping operations and built-in strategies (which is what I am on right now). I don’t know when the Streams API will be activated by default in the next builds of Safari, WebKitGTK+ or WebKit for Wayland either, but we’ll make it at some point!

The code has already been through lots of changes, because we were still figuring out which architecture was best, and Youenn did an awesome job refactoring some things and providing support for promises in the bindings to make the implementation of ReadableStream more straightforward and less “custom”.

The implementation could still go through some important changes: as part of my work implementing the strategies, some reviewers raised concerns about having the Streams API implemented inside WebCore in terms of IDL interfaces. I already have a proof of concept of CountQueuingStrategy and ByteLengthQueuingStrategy implemented inside JavaScriptCore, even a case where we use built-in JavaScript functions, which might help us keep closer to the spec if we can just include JavaScript code directly. We’ll see how we end up!

Last but not least, I would like to thank Igalia for sponsoring me to attend the WebKit Contributors Meeting in Cupertino, and also Adenilson for being so nice and taking us out for dinner and drinks at very nice places that we wouldn’t have been able to find ourselves (I owe you; I promise to return the favor at the Web Engines Hackfest). It was also really nice to have the opportunity to quickly visit New York City for some hours, thanks to a long connection there that would usually be a PITA but was very enjoyable this time.

HIG updates

HIG Banner

The GNOME 3 Human Interface Guidelines were released just under a year ago. They incorporated material from the GNOME 2 HIG, but they were also a thorough rework: the GNOME 3 HIG has a radically different structure from the GNOME 2 one, and is largely based on a collection of design patterns. The hope was that this collection could grow and evolve over time, ensuring that the HIG is always up to date with the latest design practice.

I’ve recently been working on the first major update to the GNOME 3 HIG. This has enabled us to put the new structure to good use, and the number of patterns has increased. These new guidelines are the direct result of design work that has happened in the past year. They attempt to distill everything we’ve learned through our own process of trial and error.

There have been some other notable changes to the HIG. Navigation has been improved throughout: the introduction has been thinned down, so you can get straight to the interesting stuff. The front page gives a much better overview now, and the overview pages for design patterns and interface elements have been much improved.

HIG Front Page Design Patterns Interface Elements

Another nice addition is that the HIG now links to the relevant GTK+ API reference documentation for each design component. This is nice for knowing which widget does what, and it makes the design guidelines a more effective accompaniment to the toolkit.

I’m hoping to continue fixing bugs in the HIG and expanding the collection of patterns for a little while, so let me know if there’s anything you’d like to see added.

[Edit: these improvements to the HIG are work in progress, and will be released with GNOME 3.18.]

Wikimania 2015 – random thoughts and observations

Random thoughts from Wikimania, 2015 edition (2013, 2014):

"Wikimania 2015 Reception at Laboratorio Arte Alameda - 02" by Jarek Tuszynski,  under CC BY 4.0
Wikimania 2015 Reception at Laboratorio Arte Alameda – 02” by Jarek Tuszynski, under CC BY 4.0
  • Dancing: After five Wikimedia events (not counting WMF all-hands) I was finally dragged onto the dance floor on the last night. I’ll never be Garfield, but I had fun anyway. The amazing setting did not hurt.
  • Our hosts: The conference was excellently organized and run. I’ve never had Mexico City high on my list of “places I must see” but it moved up many spots after this trip.
  • First timers: I always enjoy talking to people who have never been to Wikimania before. They almost always seem to have enjoyed it, but of course the ones I talk to are typically the ones who are more outgoing and better equipped to enjoy things. I do hope we’re also being welcoming to people who don’t already know folks, or who aren’t as outgoing.
  • Luis von Ahn: Good to chat briefly with my long-ago classmate. I thought the Q&A section of his talk was one of the best I’ve seen in a long time. There were both good questions and interesting answers, which is more rare than it should be.
  • Keynotes: I’d love to have one keynote slot each year for a contributor to talk about their work within the movement. Finding the right person would be a challenge, of course, as could language barriers, but it seems like it should be doable.
  • US English: I was corrected on my Americanisms and the occasional complexity of my sentence structure. It was a good reminder that even for fairly sophisticated speakers of English as a second language, California-English is not terribly clear. This is especially true when spoken. Verbose slides can help, which is a shame, since I usually prefer minimal slides. I will try to work on that in the future, and see how we can help other WMFers do the same.
  • Mobile: Really hope someday we can figure out how to make the schedule legible on a mobile device :) Good reminder we’ve got a long way to go there.
  • Community engagement: I enjoyed my department’s “engage with” session, but I think next year we need to make it more interactive—probably with something like an introduction/overview followed by a World Cafe-style discussion. One thing we did right was to take questions on written cards. This helped indicate what the most important topics were (when questions were repeated), avoided the problem of lecture-by-question, and opened the floor to people who might otherwise be intimidated because of language barriers or personality. Our booth was also excellent and I’m excited to see some of the stories that came out of it.
  • Technology and culture: After talking about how we’d used cards to change the atmosphere of a talk, someone deliberately provoked me: shouldn’t we address on-wiki cultural issues the same way, by changing the “technology” used for discussion? I agree that technology can help improve things, and we should think about it more than we do (e.g.) but ultimately it can only be part of the solution – our most difficult problems will definitely require work on culture as well as interfaces. (Surprisingly, my 2009 post on this topic holds up pretty well.)
  • Who is this for? I’ve always felt there was some tension around whether the conference is for “us” or for the public, but never had language for it. An older gentleman who I spoke with for a while finally gave me the right term: is it an annual meeting or is it a public conference? Nothing I saw here changed my position, which is that it is more annual meeting than public conference, at least until we get much better at turning new users into long-term users.
  • Esino Lario looks like it will be a lot of fun. I strongly support the organizing committee’s decision to focus less on brief talks and more on longer, more interactive conversations. That is clearly the best use of our limited time together. I’m also excited that they’re looking into blind submissions (which I suggested in my Wikimania post from last year).
  • Being an exec: I saw exactly one regular talk that was not by my department, though I did have lots and lots of conversations. I’m still not sure how I feel about this tradeoff, but I know it will become even harder if we truly do transition to a model with more workshops/conversations and fewer lectures, since those will be both more valuable and more time-consuming/less flexible.
  • Some day: I wrote most of this post in the Mexico City airport, and saw that there are flights from there to La Habana. I hope someday we can do a Wikimania there.

July 22, 2015

The history behind the venue for this year's GUADEC

My good friend Jonas wrote a nice little piece on the history behind the Folkets Hus and Folkets Park movement and their relation to the labour movement here. An interesting read in general, I'd say, but especially so since this year's GUADEC is located at the Folkets Hus in Gothenburg.

See you all in Gothenburg!

Summary of work from July 7th to July 20th

I started with the migration from GConf to GSettings/DConf, but I ran into some problems. I've asked desktop-devel-list for help, and Mr. Crha gave me some advice. I'm still working on that.

While I look for the best way to do that, I've done some trivial work to modernize EAS. This work should continue regardless of how this project finishes.

Another digest from Polariland

Polari 3.17.4 is around the corner. For this release, I have worked with Florian to get my work towards a better initial setup experience merged. As can be seen below, the design has changed a bit too.

1-thumb

The primary change has been to move away from the installer (anti-)pattern and instead move fully towards a design inspired by the empty-app-states pattern.

empty-state-pattern

In Polari 3.17.4 Florian also fixed a memory bug and a bug where direct messages would not show up until you pressed the notification in GNOME Shell.

In this digest I also want to briefly mention two more patches I have been working on which likely will land in later releases of Polari:

  • SSL Encryption: When you create a new connection, Polari will now try to determine whether the server supports SSL and use it unless otherwise specified.
  • Server Entry Validation: When adding a new connection, Polari will validate the server name and display a message if you use characters that are not valid in a server address.

Will I meet you at GUADEC this year? (:

on advice

i recently found this superb analogy by mike booth which i want to quote in full:

This guy has gone to the zoo and interviewed all the animals. The tiger says that the secret to success is to live alone, be well disguised, have sharp claws and know how to stalk. The snail says that the secret is to live inside a solid shell, stay small, hide under dead trees and move slowly around at night. The parrot says that success lies in eating fruit, being alert, packing light, moving fast by air when necessary, and always sticking by your friends.

His conclusion: These animals are giving contradictory advice! And that's because they're all "outliers".

But both of these points are subtly misleading. Yes, the advice is contradictory, but that's only a problem if you imagine that the animal kingdom is like a giant arena in which all the world's animals battle for the Animal Best Practices championship, after which all the losing animals will go extinct and the entire world will adopt the winning ways of the One True Best Animal. But, in fact, there are a hell of a lot of different ways to be a successful animal, and they coexist nicely. Indeed, they form an ecosystem in which all animals require other, much different animals to exist.

And it's insane to regard the tiger and the parrot and the snail as "outliers". Sure, they're unique, just as snowflakes are unique. But, in fact, there are a lot of different kinds of cats and birds and mollusks, not just these three. Indeed, there are creatures that employ some cat strategies and some bird strategies (lions: be a sharp-eyed predator with claws, but live in communal packs). The only way to argue that tigers and parrots and snails are "outliers" is to ignore the existence of all the other creatures in the world, the ones that bridge the gaps in animal-design space and that ultimately relate every known animal to every other known animal.

So, yes, it's insane to try to follow all the advice on the Internet simultaneously. But that doesn't mean it's insane to listen to 37signals advice, or Godin's advice, or some other company's advice. You just have to figure out which part of the animal kingdom you're in, and seek out the best practices which apply to creatures like you. If you want to be a stalker, you could do worse than to ask the tiger for some advice.

next to the story of the blind men and the elephant i think i just found my new favourite analogy when giving advice.

July 21, 2015

Roslyn and Mono

Hello Internet! I wanted to share some updates of Roslyn and Mono.

We have been working towards using Roslyn in two scenarios. As the compiler you get when you use Mono, and as the engine that powers code completion and refactoring in the IDE.

This post is a status update on the work that we have been doing here.

Roslyn on MonoDevelop/XamarinStudio

For the past year, we have been working on replacing the IDE's engine that gives us code completion, refactoring capabilities and formatting capabilities with one powered by Roslyn.

The current engine is powered by a combination of NRefactory and the Mono C# compiler. It is not as powerful, comprehensive or reliable as Roslyn.

Feature-wise, we completed the effort, and we now have a Roslyn-powered branch that uses Roslyn for code completion, refactoring, suggestions and code formatting.

In addition, we ported most of the refactoring capabilities from NRefactory to work on top of Roslyn. These were quite significant. Visual Studio users can try them out by installing the Refactoring Essentials for Visual Studio extension.

While our Roslyn branch is working great and is a pleasure to use, it also consumes more memory and by extension, runs a little slower. This is not Roslyn's fault, but the side effects of leaks and limitations in our code.

Our original plan was to release this for our September release (what we internally call "Cycle 6"), but we decided to pull the feature out from the release to give us time to fix the leaks that affected the Roslyn engine and tune the performance of Roslyn running on Mono.

Our revisited plan is to ship an update to our tooling in Cycle 6 (the regular feature update) but without Roslyn. In parallel, we will ship a Roslyn-enabled preview of MonoDevelop/XamarinStudio. This will give us time to collect your feedback on performance and memory usage regressions, and time to fix the issues before we make Roslyn the default.

Roslyn as a Compiler in Mono

One of the major roadblocks for the adoption of Roslyn in Mono was the requirement to generate debugging information that Mono could consume on Unix (the other one is that our C# batch compiler is still faster than Roslyn).

The initial Roslyn release only had support for generating debug information through a proprietary/native library on Windows, which meant that while Roslyn could be used to compile code on Unix, the result would not contain any debug information - this prevented Roslyn from being useful for most compilation uses.

Recently, Roslyn got support for Portable Program Database (PPDB) files. This is a fully documented, open, compact and efficient format for storing debug information.

Mono's master release now contains support for using PPDB files as its debug information. This means that Roslyn can produce debug information that Mono can consume.

That said, we still need more work in the Mono ecosystem to fully support PPDB files. The Cecil library, which is used extensively to manipulate IL images as well as their associated debug information, will need to handle them. Our Reflection.Emit implementation will need a backend to generate PPDBs (for third-party compilers and dynamic code generators), and IKVM will need support for producing PPDB files (it is used by Mono's C# compiler and other third-party compilers).

Additionally, many features in Roslyn surfaced bloat and bugs in Mono's class libraries. We have been fixing those bugs (and in many cases, the bugs have gone away by replacing Mono's implementation with implementations from Microsoft's Reference Source).

Creating a menu with actions

A few days ago I implemented a new feature in gnome-logs for viewing logs from different boots. As the following picture shows, I needed to create a new menu with items showing the boot times of past boots. When I click a specific boot, the logs from that boot should be shown in the app.

I'm not going to create five new actions for these five menu items; that's not the right way to do it. The right way is to use one action with a different parameter for each menu item. But what confused me was how to pass the parameter to the callback function for the action's "activate" signal. That is, when the action is activated, I want to pass the boot ID to that function, so I can use the boot ID to query the logs from that boot. I wasn't that familiar with this stuff, so I just kept reading the API documentation and the source code of other projects until I found this:

g_menu_item_set_action_and_target_value ()

void
g_menu_item_set_action_and_target_value (GMenuItem   *menu_item,
                                         const gchar *action,
                                         GVariant    *target_value);

This is a function from the API page of GMenu. Right here we can see that it gives the menu item an action and a GVariant value.

g_variant_new_string ()

GVariant *
g_variant_new_string (const gchar *string);

Creates a string GVariant with the contents of string, which must be a normal, valid-UTF-8, nul-terminated string. It returns a floating reference to a new string GVariant instance.

By using this function, we can create a new GVariant value from a string very easily. Then we can use the newly created GVariant value as the parameter for the function above, and it's done.
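
Putting the two together, here is a minimal sketch of one shared action that carries the boot ID as its parameter. (The names window, boot_menu and boot_id_string are assumed to exist; this is not the actual Logs code.)

static void
on_select_boot (GSimpleAction *action,
                GVariant      *parameter,
                gpointer       user_data)
{
  /* The GVariant attached to the activated menu item arrives here. */
  const gchar *boot_id = g_variant_get_string (parameter, NULL);

  /* ...query and display the logs from boot_id... */
}

/* One "select-boot" action, registered on the window, shared by all items. */
GSimpleAction *action;
GMenuItem *item;

action = g_simple_action_new ("select-boot", G_VARIANT_TYPE_STRING);
g_signal_connect (action, "activate", G_CALLBACK (on_select_boot), NULL);
g_action_map_add_action (G_ACTION_MAP (window), G_ACTION (action));

/* Each item targets the same action with a different boot ID. */
item = g_menu_item_new ("Boot 1", NULL);
g_menu_item_set_action_and_target_value (item, "win.select-boot",
                                         g_variant_new_string (boot_id_string));
g_menu_append_item (boot_menu, item);
g_object_unref (item);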

GUADEC raffle!

I will be attending GUADEC again this year for the 8th time, in the city of Gothenburg, and I can’t wait!

A few days ago Karen came up with the wonderful idea of a raffle for people attending the conference; I really liked it and when I asked my employer if they could donate some hardware to the cause, I quickly got two Endless computers approved!

The rules are simple:

  • one of the prizes will be raffled exclusively between the volunteers organizing the conference
  • the same person cannot win two prizes

We would love to have more winners! If you work for a company that makes hardware that runs GNOME-based technology, and think you can donate some to the cause, feel free to get in touch with me.

If you want to participate in the raffle, all you have to do is register for GUADEC. If you haven’t done so, do it now! I want to note that neither Endless nor the other companies donating to the raffle will receive any personal information about the registered participants.

guadec-logo-fill

GNOME To Do 3.17.4

Since GNOME To Do was introduced, it has finished a very important development cycle, and we have a great set of fresh features for the 3.17.4 release. Check them out:

Today & Scheduled tasks

The most noticeable feature of this release are the “Today” and “Scheduled” task lists, visible from the main view. They are always updated and very handy!

Captura de tela de 2015-07-20 21-34-51 Captura de tela de 2015-07-20 21-34-41

Resizable editor & search support

The task editor pane received some love, and it now supports being adjusted with drag and drop! Also, we now have rudimentary search support for task lists. Since this cannot be expressed through images, see the video below:

Enjoy!

Announcing GNOME Calendar 3.17.4

During the last period, GNOME Calendar received many improvements and bugfixes.

News

  • Calendar’s Month view received a nice keyboard navigation feature.
  • Many code optimizations, cleanups and fixes
  • Improve Year view’s rendering

Unfortunately, Calendar won’t receive the Week view this cycle. It’ll be postponed to the 3.20 cycle, during which I’ll have much more time to work on Calendar.

July 20, 2015

Your Ubuntu-based container image is probably a copyright violation

Update: A Canonical employee responded here, but doesn't appear to actually contradict anything I say below.

I wrote about Canonical's Ubuntu IP policy here, primarily in terms of its broader impact, but I also mentioned a few specific cases. People seem to have picked up on the case of container images (especially Docker ones), so here's an unambiguous statement:

If you generate a container image that is not a 100% unmodified version of Ubuntu (ie, you have not removed or added anything), Canonical insist that you must ask them for permission to distribute it. The only alternative is to rebuild every binary package you wish to ship[1], removing all trademarks in the process. As I mentioned in my original post, the IP policy does not merely require you to remove trademarks that would cause infringement, it requires you to remove all trademarks - a strict reading would require you to remove every instance of the word "ubuntu" from the packages.

If you want to contact Canonical to request permission, you can do so here. Or you could just derive from Debian instead.

[1] Other than ones whose license explicitly grants permission to redistribute binaries and which do not permit any additional restrictions to be imposed upon the license grants - so any GPLed material is fine


2015-07-20 Monday.

  • Mail chew, 1:1 with Kendy, Niall, lunch with H. team call. Mail chew, booked LibreOffice conference travel - thank God for RyanAir direct Stansted to Arhus; wow.
