GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

March 27, 2015

all my blogs are dead

paul neave in why i create for the web:

But the most amazing thing about the web is simple yet devastatingly powerful, and the whole reason the web exists in the first place. It's the humble hyperlink.

paul is right. however, links randomly disappear, move and change. carter maness writes:

Despite the pervasive assumption that everything online lasts forever, the internet is inherently unstable. We assume everything we publish online will be preserved. But websites are businesses. They get sold, forgotten and broken. Eventually, someone flips the switch and pulls it all down. Hosting charges are eliminated, and domain names slip quietly back into the pool. What’s left behind once the cache clears? For media companies deleting their sites, legacy doesn’t matter; the work carries no intrinsic value if there is no business remaining to capitalize on it. I asked if a backup still existed on a server somewhere. It apparently does; I was invited to purchase it for next to nothing. I could pay for the hosting, flip the switch on, and all my work would return. But I’d never really look at it. Then, eventually, I would stop paying the bills, too.

imagine books randomly disappearing from your bookshelf from time to time. then again, this is a funny thought, as it pretends books were always trivially available to everyone.

i, for my part, started archiving outgoing links in the wayback machine with a zsh snippet like the one below. i know well that this is no real solution to the problem, but i hope it helps. for now.

function ia-archive() { curl -s -I https://web.archive.org/save/$* | grep Content-Location | awk '{print "Archived as: https://web.archive.org"$2}'; }

March 26, 2015

Hands-on usability improvements with GNOME 3.16

I downloaded the GNOME 3.16 live demo image and experimented with what the latest GNOME has to offer. My focus is usability testing, so I wanted to explore the live demo to see how the usability has improved in the latest release.

From my 2014 study of GNOME's usability, usability testing revealed several "hot" problem areas, including:

Changing the default font in gedit or Notes
Testers typically looked for a "font" or "text" action under the gear menu. Many testers referred to the gear menu as the "options" or "settings" menu because they previously associated a "gear" icon with settings or preferences in Mac OS X or Windows. Testers assumed changing the font was a setting, so they looked for it in what they assumed was a "settings" menu: the gear menu.
Bookmarking a location in Nautilus
Most testers preferred to just move a frequently-used folder to the desktop, so it would be easier to find. But GNOME doesn't have a "desktop" per se by default, and expects users to use the "Bookmark this Location" feature in Nautilus. However, this feature was not very discoverable; many testers moved the target folder into another folder, and believed that they had somehow bookmarked the location.
Finding and replacing text in gedit
When asked to replace all instances of a word with another word across a large text file, testers had trouble discovering the "find and replace text" feature in gedit. Instead, testers experimented with "Find," then simply typed over the old text with the new text.
How does the new GNOME 3.16 improve on these problem areas? Let's look at a few screenshots:

gedit

GNOME 3.14 saw several updates to the gedit editor, which continue in GNOME 3.16:

The new gedit has a clean appearance that features prominent "Open" and "Save" buttons—two functions that average users with average knowledge will frequently access.

A new "three lines" icon replaces the gear menu for the drop-down menu. This "three lines" menu icon is more common in other applications, including those on Mac OS X and Windows, so the new menu icon should be easier to find.

The "Open" menu includes a quick-access list, and a button to look for other files via the finder.


The preferences menu doesn't offer significant usability improvements, although the color scheme selector is now updated in GNOME 3.16.


Nautilus

The updated Nautilus features large icons that offer good visibility without becoming too overwhelming. The "three lines" menu is simplified in this release, and offers an easier path to bookmark a location.


Web

I uncovered a few issues with the Epiphany web browser (aka "GNOME Web") but since I don't usually use Epiphany (I use Firefox or Google Chrome) I'm not sure how long these problems have been there.

Epiphany has a clean appearance that reserves most of the screen real estate to display the web page. This is a nice design tradeoff, but I noticed that after I navigated to a web page, I lost the URL bar. I couldn't navigate to a new website until I opened a new tab and entered my URL there. I'm sure there's another way to bring up the URL bar, but it's not obvious to me.

I'll also add that taking screenshots of Epiphany was quite difficult. For other GNOME applications, I simply hit Alt-PrtScr to save a screenshot of my active window. But the Epiphany web browser seems to grab control of that key binding, and Alt-PrtScr does nothing most of the time—especially when the "three lines" menu is open. I took several screenshots of Epiphany, and about half were whole-desktop screenshots (PrtScr) that I later cropped using the GIMP.


EDIT: If you click the little "down" triangle next to the URL, you can enter a new URL. I don't like this feature; it obscures URL entry. Basic functionality like this should not be hidden in a web browser. I encourage the Epiphany team to bring back the URL entry bar in the next release.

Other changes

Notifications got a big update in GNOME 3.16. In previous versions of GNOME 3, notifications appeared at the bottom of the screen. Now, notifications appear at the top of the screen, merged with the calendar. You might consider this a "calendar and events" feature. The notifications are unobtrusive; when I plugged in my USB fob drive, a small white marker appeared next to the date and time to suggest a new notification had arrived. While I haven't reviewed notifications as part of my usability testing, my heuristic evaluation is that the new notifications design will improve the usability around notifications. I believe most users will see the new "calendar and events" feature as making a lot of sense.

However, I do have some reservations about the updated GNOME. For one, I dislike the darker colors seen in these screenshots. Users don't like dark desktop colors. In user interface design, colors also affect the mood of an application. As seen in this comparison, users perceived the darker colors used in Windows and GNOME as moody, while the lighter colors used in Mac OS X suggest an airy, friendly interface. This may be why users at large perceive the GNOME desktop to have poor usability, despite usability testing showing otherwise. The dark, moody colors used in GNOME provoke feelings of tension and insecurity, which influence the user's perception of poor usability.

I'm also not sure about the blue-on-grey effect to highlight running programs or selected items in the GNOME Shell. In addition to being dark, moody colors, the blue-on-grey is just too hard to see clearly. I would like GNOME to update the default theme to use lighter, airier colors. I'll reserve a discussion of colors in GNOME for a future article.


Overall, I'm very pleased with the usability improvements that have gone into the new GNOME release. Good job, everyone!

I look forward to doing more usability testing in this version of GNOME, so we can continue to make GNOME great. With good usability, each version of GNOME gets better and easier to use.

2015-03-26 Thursday

  • Mihai posted a nice blog with a small video of LibreOffice Online in action - hopefully we'll have a higher-resolution version that doesn't feature some bearded idiot next time.
  • Out to the Dentist for some drilling action.

Building a SNES emulator with a Raspberry Pi and a PS3 gamepad

It’s been a while since I did this, but I got some people asking me lately about how exactly I did it and I thought it could be nice to write a post answering that question. Actually, it would be a nice thing for me to have anyway at least as “documentation”, so here it is.

But first of all, the idea: my personal and very particular goal was to have a proper SNES emulator plugged into my TV, based on the Raspberry Pi (simply because I had a spare one), that I could control entirely with a gamepad (no external keyboards, no ssh connection from a laptop, nothing).

Yes, I know there are other emulators I could aim for and even Raspberry specific distros designed for a similar purpose but, honestly, I don’t really care about MAME, NeoGeo, PSX emulators or the like. I simply wanted a SNES emulator, period. And on top of that I was quite keen on playing a bit with the Raspberry, so I took this route, for good or bad.

Anyway, after doing some investigation I realized all the main pieces were already out there for me to build such a thing; all that was needed was to put them together, so I went ahead and did it. These are the HW & SW ingredients involved in this recipe:

Once I got all these things around, this is how I assembled the whole thing:

1. Got the gamepad paired and recognized as a joystick under /dev/input/js0 using the QtSixA project. I followed the instructions here, which explain fairly well how to use sixpair to pair the gamepad and how to get the sixad daemon running at boot time, which was an important requirement for this whole thing to work as I wanted it to.

2. I downloaded the source code of PiSNES, then patched it slightly so that it would recognize the PS3 DualShock gamepad and allow me to define the four directions of the joystick through the configuration file, among other things.

3. I had no idea how to get the PS3 gamepad paired automatically when booting the Raspberry Pi, so I wrote a stupid small script that would basically wait for the gamepad to be detected under /dev/input/js0, and then launch the snes9x.gui GUI to choose a game from the list of ROMs available. I placed it under /usr/local/bin/snes-run-gui, and it looks like this:

#!/bin/bash

BASEDIR=/opt/pisnes

# Wait for the PS3 Game pad to be available
while [ ! -e /dev/input/js0 ]; do sleep 2; done

# The DISPLAY=:0 bit is important for the GUI to work
DISPLAY=:0 $BASEDIR/snes9x.gui

4. Because I wanted that script to be launched on boot, I simply added a line to /etc/xdg/lxsession/LXDE/autostart, so that it looked like this:

@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xscreensaver -no-splash
@/etc/sudoers.d/vsrv.sh
@/usr/local/bin/snes-run-gui

By doing the steps mentioned above, I got the following “User Experience”:

  1. Turn on the RPi by simply plugging it in
  2. Wait for Raspbian to boot and for the desktop to be visible
  3. At this point, both the sixad daemon and the snes-run-gui script should be running, so press the PS button in the gamepad to connect the gamepad
  4. After a few seconds, the lights in the gamepad should stop blinking and the /dev/input/js0 device file should be available, so snes9x.gui is launched
  5. Select the game you want to play and press the ‘X’ button to run it
  6. While in the game, press the PS button to get back to the game selection UI
  7. From the game selection UI, press START+SELECT to shut down the RPi
  8. Profit!

The steps above were enough to get the gamepad paired and working with PiSNES, but unfortunately my TV was a bit tricky, and I needed to make a few more adjustments in the Raspberry Pi's boot configuration, which also took me a while to figure out.

So, here is the contents of my /boot/config.txt file in case it helps somebody else out there, or simply as reference (more info about the contents of this file in RPiConfig):

# NOOBS Auto-generated Settings:
hdmi_force_hotplug=1
config_hdmi_boost=4
overscan_left=24
overscan_right=24
overscan_top=16
overscan_bottom=16
disable_overscan=0
core_freq=250
sdram_freq=500
over_voltage=2

# Set sdtv mode to PAL (as used in Europe)
sdtv_mode=2

# Force sound to be sent over the HDMI cable
hdmi_drive=2

# Set monitor mode to DMT
hdmi_group=2

# Overclock the CPU a bit (700 MHz is the default)
arm_freq=900

# Set monitor resolution to 1280x720p @ 60Hz XGA
hdmi_mode=85

As you can imagine, some of those configuration options are specific to the TV I have it connected to (e.g. hdmi_mode), so YMMV. In my case I actually had to try different HDMI modes before settling on one that would simply work, so if you are ever in the same situation, you might want to apt-get install libraspberrypi-bin and use the following commands as well:

 $ tvservice -m DMT # List all DMT supported modes
 $ tvservice -d edid.dat # Dump detailed info about your screen
 $ edidparser edid.dat | grep mode # List all possible modes

In my case, I settled on hdmi_mode=85 simply because that’s the one that worked best for me; it stands for the 1280x720p@60Hz DMT mode, according to edidparser:

HDMI:EDID DMT mode (85) 1280x720p @ 60 Hz with pixel clock 74 MHz has a score of 80296

And that’s all, I think. Of course there’s a chance I forgot to mention something, because I did this in the random slots of spare time I had back in July, but that should be pretty much it.

Now, simply because this post has been too much text already, here you have a video showing how this actually works (never mind how well or badly I play!):

Video: Raspberry Pi + PS3 Gamepad + PiSNES

I have to say I had great fun doing this and, even if it’s quite a hackish solution, I’m pretty happy with it because it’s been so much fun to play those games again, and also because it’s been working like a charm ever since I set it up, more than half a year ago.

And even better… turns out I got it working just in time for “Father’s Day”, which made me win the “best dad in the world” award, unanimously granted by my two sons, who also enjoy playing those good old games with me now (and beating me on some of them!).

Actually, that has been certainly the most rewarding thing of all this, no doubt about it.

March 25, 2015

Python for remote reconfiguration of server firmware

One project I've worked on at Nebula is a Python module for remote configuration of server hardware. You can find it here, but there are a few caveats:
  1. It's not hugely well tested on a wide range of hardware
  2. The interface is not yet guaranteed to be stable
  3. You'll also need this module if you want to deal with IBM (well, Lenovo now) servers
  4. The IBM support is based on reverse engineering rather than documentation, so who really knows how good it is

There's documentation in the README, and I'm sorry for the API being kind of awful (it suffers rather heavily from me writing Python while knowing basically no Python). Still, it ought to work. I'm interested in hearing from anybody with problems, anybody who's interested in getting it on PyPI, and anybody who's willing to add support for new HP systems.


LibreOffice On-Line & IceWarp

Today we announced a collaboration between IceWarp and Collabora to start the creation of LibreOffice On-Line, a scalable, cloud-hostable, full featured version of LibreOffice. My hope is that this has a huge and positive impact for the Free Software community, the business ecosystem, personal privacy, and more. Indeed, this is really one of the last big missing pieces that needs solving (alongside the Android version which is well underway). But wait - this post is supposed to be technical; let's get back to the code.

A prototype - with promise

At the beginning of the LibreOffice project, I created (for our first Paris Conference) a prototype of LibreOffice On-Line using Alex Larsson's (awesome) GTK+ Broadway - you can still see videos of that around the place. Great as the Broadway approach is (it provides essentially a simple Virtual Desktop model into your browser), the prototype taught us several important things which we plan to get right in LibreOffice On-Line:

  • Performance - the Broadway model has the advantage of presenting the full application UI; however, every time we wanted to do anything in the document - such as selecting, panning, or even blinking the cursor - we had to send new image fragments from the server: not ideal.
  • Memory consumption / Scalability - another side effect of this is that, no matter how inactive the user is (how many tabs are you long-term-not-looking-at in your browser right now?), a full LibreOffice process had to keep running to stay responsive and hold the document. That memory consumption naturally limits, quite significantly, the ability to handle many concurrent clients.
  • Scripting / web-like UI - it would have been possible to extend the GTK+ JavaScript to allow tunnelling bespoke commands through to LibreOffice and so wrap a custom UI, but the work to provide the kind of user interface that is expected on the web would still have been significant.

Having said all this, Broadway was a great basis to prove the feasibility of the concept - and we re-use the underlying concepts, in particular the use of web sockets to provide the low-latency interactions we need. Broadway also worked surprisingly well from e.g. a nearby Amazon cloud datacentre. Similarly, having full-fidelity rendering is a very attractive proposition, independent of the fonts or setup of the client.

An improved approach

Caching document views

One of the key realisations behind LibreOffice On-Line is that much of document editing is not the modification itself; a rather large proportion of time is spent reading, reviewing, and browsing documents. Thus, by exposing the workings of document rendering as pixel squares (tiles) via LibreOfficeKit, we can cache large chunks of the document content both on the server and in the client's browser. As users read through a document, or re-visit it, there is no need to communicate with the server at all, or even (after an initial rendering run) to have a LibreOfficeKit instance around there either.
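To make that concrete, here is a minimal sketch of rendering a single tile through the LibreOfficeKit C API. The install path and document URL are placeholders, and the signatures are recalled from memory of the API as it stood around this release (the tiled-rendering entry points were still behind LOK_USE_UNSTABLE_API), so treat it as an illustration rather than the final LibreOffice On-Line code:

#define LOK_USE_UNSTABLE_API
#include <LibreOfficeKit/LibreOfficeKitInit.h>
#include <LibreOfficeKit/LibreOfficeKit.h>
#include <stdlib.h>

static void
render_one_tile (void)
{
    /* Install path and document URL below are placeholders */
    LibreOfficeKit *kit = lok_init ("/opt/libreoffice/program");
    LibreOfficeKitDocument *doc = kit->pClass->documentLoad (kit, "file:///tmp/example.odt");

    /* One 256x256 RGBA tile */
    unsigned char *tile = malloc (256 * 256 * 4);

    /* Canvas size is in pixels; tile position and size are in document (twip) co-ordinates */
    doc->pClass->paintTile (doc, tile,
                            256, 256,     /* canvas width / height */
                            0, 0,         /* tile position in the document */
                            3840, 3840);  /* tile width / height */

    /* The pixels can now be cached server-side and shipped to the browser */
    doc->pClass->destroy (doc);
    free (tile);
}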

Thus, in this mode, the ability of the browser's JavaScript to understand things about the document itself allows us to move much more of the pan/zoom reading goodness into your client. That means that, after an initial (pre-)fetch, responsiveness is determined more by your local hardware and its ability to pre-cache than by remote server capacity. Interestingly, this same tiled-rendering approach is used by Fennec (Firefox for Android) and LibreOffice for Android to get smooth mobile-device scrolling and rendering, so LibreOfficeKit is already well adapted for this use-case.

Browser showing hidden tile cache ready to be revealed when panning
Editing live documents

In recent times, The Document Foundation has funded, via the generosity of TDF's donors, a chunk of infrastructure work to make it possible to use LibreOfficeKit to create custom document editors. There are several notable pieces of this work that intersect with this; I provide some links to the equivalent work being done for Android by Miklos Vajna:

Cursors & selection

Clearly blinking a cursor is something we can do trivially in the javascript client, rather than on the server; there are however several other interactions that benefit from browser acceleration. Text selection is a big piece of this - re-rendering text on the server simply in order to draw transparent selection rectangles over it makes very little sense - so instead we provide a list of rectangles to render in the browser. Similarly, drawing selection handles and interacting with images is something that can be handled pleasantly in the browser as well.

Keyboard / touch input

Clearly it is necessary to intercept browser keystrokes, gestures and so on, transport these over the websocket and emit them into the LibreOfficeKit core.

Tile invalidation / re-rendering

Clearly when the document changes, it is necessary to re-render and provide new tile data to the client; naturally there is an existing API for this that was put in place right at the start of the Android editing work.
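As a rough sketch of what that looks like from the LibreOfficeKit side (the callback constant, headers and signature are from memory of the Android tiled-editing work and may differ in detail; doc is assumed to be a loaded LibreOfficeKitDocument as in the earlier snippet):

#include <LibreOfficeKit/LibreOfficeKitEnums.h>
#include <stdio.h>

/* Callback invoked by LibreOfficeKit; for LOK_CALLBACK_INVALIDATE_TILES the
   payload is a rectangle ("x, y, width, height" in document co-ordinates)
   describing the area whose cached tiles must be re-rendered and re-sent. */
static void
document_callback (int type, const char *payload, void *data)
{
    if (type == LOK_CALLBACK_INVALIDATE_TILES)
        printf ("invalidate tiles: %s\n", payload);
}

/* ... after loading the document ... */
doc->pClass->registerCallback (doc, document_callback, NULL);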

Command invocation

Another piece that is required, is transporting UNO commands, and state (such as 'make it bold', or 'delete it') from the client javascript through into the LibreOfficeKit core. This is a matter again of proxying the required functionality via Javascript. The plan is to make it easy to create custom, bespoke UIs with a bit of CSS / Javascript magic wrapped around and interacting with the remote LibreOfficeKit core.

Serializing selections

Clearly, as and when we decide that a user has wandered off, we can save their intermediate document, serialize the cursor location and selection, and free up the resources for some other editing process. As and when they return, we can restore that with some small document-load delay, as we transparently back their cached view with a live, editable LibreOfficeKit instance.

What does that look like roughly ?

Of course, lots of pieces are still moving and subject to change; however here is a perhaps helpful drawing. Naturally integrating with existing storage, orchestration, and security frameworks will be important over time, contributions welcome for your pet framework:

Initial architecture sketch

The case for simple collaboration

A final, rather important part of LibreOffice On-Line; which I've left to last is that of collaborative editing.

The problem of generic, asynchronous, multi-instance / multi-device collaborative document editing is essentially horrendous. Solving even the easy problems (ie. re-ordering non-conflicting edits) is non-trivial for any large set of potentially intersecting operations. However, for this case, there are two very significant simplifying factors.

First, there is a single, central instance of LibreOfficeKit rendering and providing document tiles to all clients. This significantly reduces the need to re-order an asynchronous stream of change operations; it also means that editing conflicts are seen as they are created.

Secondly, there is a controlled and reasonably tractable set of extremely high-level operations based on abstract document co-ordinates - initially text selection, editing, deletion, object and shape movement, sizing, etc. - which can be incrementally grown over time to extend to the core set of editing functionality.

These two simplifications, combined with managing and opportunistically strobing between users' cursor & selection contexts should allow us to provide the core of the document editing functionality.

Show me the code

The code is available as of now in gerrit's online repository. Clearly it is the Alpha, not the Omega; the beginning, and not even the end of the beginning - which is a great time to get involved.

Conclusion

LibreOffice On-Line is just beginning; there is a lot that remains to be done, and we appreciate help with that as we execute over the next year for IceWarp. A few words about IceWarp: having spent a rather significant amount of time pitching this work to people, and having listened to many requests for it, it is fantastic to be working with a company that can marry that great strategic sense with the resources and execution to actually start something potentially market-changing here; go IceWarp!

GNOME 3.16 is out!

Did you see?

It will obviously be in Fedora 22 Beta very shortly.

What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (ColorHug ALS amongst others) being the first ones. I hope we have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer is now upstream. Work is ongoing for the Wi-Fi driver.

I've started some work on supporting the much hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, the second got plenty of outdated bugs closed, and priorities re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're up to a state where we know what the problems are, rather than having them buried in Bugzilla.

Foundation building

A couple of projects got started that haven't reached maturity yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read comic books. ePub support is coming!



Grilo saw plenty of activity. The oft-requested "properties" page in Totem is closer than ever, and so is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.



I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on the new app in 3.16, GNOME Calendar.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

Fundraiser campaign for LaTeXila

It has been possible to make a donation to LaTeXila since March 2014. There was just a link on the web site and an entry in the Help menu, but I made almost no advertising for it. Now, for the 3.16 release, I would like to push the accelerator one step further!

LaTeXila is already a mature and stable application; it isn't missing much to become a really awesome LaTeX editor. Some features need to be improved a little, especially the spell checking. And a few features are missing (I’m looking at you, live preview).

So, if you are a LaTeX user and want a great editor for writing your documents, don’t miss the LaTeXila fundraiser campaign!

Note that some of the planned items would be useful for other text editors as well, since the work would be done in an underlying library (GtkSourceView, GtkSpell, …).

Thanks!

glibmm 2.44.0 and gtkmm 3.16.0

I’ve just done the stable glibmm 2.44.0 and gtkmm 3.16.0 releases with the usual bunch of API additions and deprecations to keep track of the glib and gtkmm API. Thanks to Kjell Ahlstedt in particular for his many well thought-out contributions.

I’ve been maintaining gtkmm since at least some time in 2001. That’s 14 years or so. Has any GNOME maintainer maintained one module for so long?


March 24, 2015

Creative Commons for Developer Docs

Over the last few years, we’ve seen more and more open source projects transition to a Creative Commons license for their documentation. Specifically, most projects tend to use some version of CC-BY-SA. There are some projects that use a permissive code license like Apache or MIT for documentation, and certainly still some that use the GFDL. But for the most part, the trend has been toward CC-BY-SA.

This is a good thing. Creative Commons has been at the forefront of the open culture movement, which has had just as profound of an impact on our lives as the free software and open source movements before it. Using a Creative Commons license means that documentation writers have access to a wealth of CC-licensed images and videos and audio files. We can reuse icons and other imagery when creating network diagrams. We can use background music in our video demonstrations. And because so many projects are moving toward Creative Commons, we can all share each other’s work.

Sharing work is a two-way street if we all use the same license. If somebody uses a non-sharealike license, others can reuse their content, but they can’t reuse content from projects that use sharealike. So there’s a lot of network value to having everybody use CC-BY-SA.

But CC-BY-SA shares one serious flaw with the GFDL: any code samples contained in the developer documentation are also licensed under the same license. This is true of any license, even permissive licenses like Apache or MIT, but with a copyleft license like CC-BY-SA or the GFDL, it means the code can only be used in software projects under that same license. Of course, nobody writes code under CC-BY-SA or the GFDL, so this presents a big problem.

We want people to be able to reuse code samples. That’s why we provide them. And we want to place as few barriers as possible to reusing them. Any sufficiently small code sample isn’t worth worrying about, but where’s the cutoff? Are the code samples in the Save Window State Howto sufficiently small? I don’t know. I’m not a lawyer. This is something we struggled with in GNOME, and it’s something other projects have realized is a problem as well. It recently came up on the OpenStack documentation mailing list, for example.

You can always put an exception on your license. You have a few choices. You could explicitly license your code samples under a permissive code license, or even CC0. GNOME has a standard license exception that reads “As a special exception, the copyright holders give you permission to copy, modify, and distribute the example code contained in this documentation under the terms of your choosing, without restriction.” This came from an honest-to-goodness lawyer, so I hope it’s OK.

But this still has a problem. GNOME is no longer using a stock Creative Commons license. Neither is anybody else who provides an exception to put code samples under a permissive code license. This means that two-way sharing is no longer a viable option. Anybody can take GNOME documentation and reuse it, even effectively uplicensing the code samples to CC-BY-SA. And GNOME can take any non-code prose from other CC-BY-SA content. But GNOME cannot reuse code samples from any project that doesn't carry a compatible exception.

I’ve seen this in enough projects that I think it’s something Creative Commons should address directly. If there were a standard CC-BY-SA-CODE license that included a stock permissive exception for code samples, we could all switch to that and recommence sharing our developer documentation. Who can help make this happen?

How to turn the Chromebook Pixel into a proper developer laptop

Recently I spent about a day installing Fedora 22 + jhbuild on a Chromebook and left it unplugged overnight. The next day I turned it on with a flat battery, grabbed the charger, and the coreboot BIOS would not let me do the usual Ctrl+L boot-to-SeaBIOS trick. I had to download the ChromeOS image to an SD card and reflash the ChromeOS image, and that left me without any of the Fedora workstation I’d so lovingly created the day before. This turned a $1500 laptop with a gorgeous screen into a liability that I couldn’t take anywhere for fear of losing all my work, again. The need to do Ctrl+L every time I rebooted was just crazy.

I didn’t give up that easily; I need to test various bits of GNOME on a proper HiDPI screen and having a loan machine sitting in a bag wasn’t going to help anyone. So I reflashed the BIOS, and now have a machine that boots straight into Fedora 22 without any of the other Chrome stuff getting in the way.

Reflashing the BIOS on a Chromebook Pixel isn’t for the faint of heart, but this is the list of materials you’ll need:

  • Set of watchmakers screwdrivers
  • Thin plastic shim (optional)
  • At least a 1GB USB flash drive
  • An original Chromebook Pixel
  • A BIOS from here for the Pixel
  • A great big dollop of courage

This does involve deleting the entire contents of your Pixel, so back up anything you care about before you start, unless it’s hosted online. I’m also not going to help you if you brick your machine; caveat emptor and all that. So, let’s get cracking:

  • Boot the Chromebook into Recovery Mode (Escape+Refresh at startup), then press Ctrl+D, then Enter, and wait ~5 minutes while the Pixel reflashes itself
  • Power down the machine, remove AC power
  • Remove the rubber pads from the underside of the Pixel, remove all 4 screws
  • Gently remove the adhesive from around the edges, and use the smallest shim or screwdriver you have to release the 4 metal catches from the front and sides. You can leave the glue on the rear as this will form a hinge you can use. Hint: The tabs have to be released inwards, although do be aware there are 4 nice lithium batteries that might kinda explode if you slip and stab them hard with a screwdriver.
  • Remove the BIOS write protect screw AND the copper washer that sits between the USB drives and the power connector. Put it somewhere safe.
  • Gently close the bottom panel, but not enough for the clips to pop in. Turn over the machine and boot it.
  • Do enough of the registration so you can log on. Then log out.
  • Do the CTRL+ALT+[->] (really F2) trick to get to a proper shell and log in as the chronos user (no password required). If you try to do it while logged in via the GUI it will not work.
  • On a different computer, format the USB drive as EXT4 and copy the squashfs.img, vmlinuz and initrd.img files there from your nearest Fedora mirror.
  • Also copy the correct firmware file from johnlewis.ie
  • Unmount the USB drive and remove
  • Insert the USB drive in the Pixel and mount it to /mnt
  • Make a backup of the firmware using /usr/sbin/flashrom -r /mnt/backup.rom
  • Flash the new firmware using /usr/sbin/flashrom -w /mnt/the_name_of_firmware.rom
  • IMPORTANT: If there are any warnings or errors you should reflash with the backup; if you reboot now you’ll have a $1500 brick. If you want to go back to the backup copy just use /usr/sbin/flashrom -w /mnt/backup.rom, but lets just assume it went well for now.
  • /sbin/shutdown -h now, then remove power again
  • Re-open the bottom panel, which should be a lot easier this time, and re-insert the BIOS write-protect washer and screw, but don’t over-tighten.
  • Close the bottom panel and insert the clips carefully
  • Insert the 4 screws and tighten carefully, then convince the sticky feet to get back into the holes. You can use a small screwdriver to convince them a little more.
  • Power the machine back on and it will automatically boot to the BIOS. Woo! But not done yet.
  • It will by default boot into JELTKA which is “just enough Linux to kexec another”.
  • When it looks like it has hung, type “root” then press Enter and it will log you into a root prompt.
  • Mount the USB drive into /mnt again
  • Do something like kexec -l /mnt/vmlinuz --initrd=/mnt/initrd.img --append=stage2=hd:/dev/sdb1:/squashfs.img
  • Wait for the Fedora installer to start, then configure a network mirror where you can download packages. You’ll have to set up Wifi before you can download package lists.

This was all done from memory, so feel free to comment if you try it and I’ll fix things up as needed.

project managers, ducks, and dogs marking territory

rachel kroll:

Anyway, about the duck. As the story goes, the artists had created all of these animation cycles for their game, and it had to pass through the review stage of a project manager. One of the artists knew the way these guys tended to want to "leave their mark" on things, and did something a little extra. Supposedly, the PM saw this and said "it's great... just remove the duck". So, the artist went in and removed the duck (which had been carefully placed to make that easy), and that was that. The sacrificial duck kept the meddling manager away from the stuff that was important. It's almost like they want to be able to point at any given part and say "I'm the reason that happened".

compare this to cody powell's article you're not a software development manager, you're a software helper:

I believe it's actually easy to earn trust as a manager, provided you understand a few very important things. It's the team who contributes the key, valuable actions behind great software like writing, reviewing, and designing code, not you. The people on your team are way better at this than you are, and they have far more context. As a result, your team's contributions are much more important than your personal contribution.

also mike hadlow in coconut headphones: why agile has failed:

Because creating good software is so much about technical decisions and so little about management process, I believe that there is very little place for non-technical managers in any software development organisation. If your role is simply asking for estimates and enforcing the agile rituals: stand-ups, fortnightly sprints, retrospectives; then you are an impediment rather than an asset to delivery.

High Contrast Refresh

One of the major visual updates of the 3.16 release is the high contrast accessible theme. Both the shell and the toolkit have received attention in the HC department. One noteworthy aspect of the theme is the icons. To guarantee a decent amount of contrast for an icon against any background, back in the GNOME 2 days we solved this by “double stroking” every shape. The term double stroke comes from a special case: a shape that was open, having only an outline, would get an additional inverted-color outline. Most of the time, though, it was a white outline around a black silhouette.

Fuzzy doublestroke PNGs of the old HC theme

In the new world, we actually treat icons the same way we treat text. We can adjust for the best contrast by controlling the color at runtime. We do this the same way we’ve done it for symbolic icons, using an embedded CSS stylesheet inside SVG icons. And in fact we are using the very same symbolic icons for the HC variant. You would be right to argue that there are specific needs for high contrast, but in reality the majority of the double-stroked icons in HC were already direct conversions of their symbolic counterparts.

Crisp recolorable SVGs of the post 3.16 world

While a centralized theme that overrides all applications never seemed like a good idea, as the application icon is part of its identity and should be distributed and maintained alongside the actual app, the process of creating a high contrast variant of an icon was extremely cumbersome and required quite a bit of effort. With the changes in place for both the toolkit and the shell, it’s far more reasonable to mandate that applications include a symbolic/high contrast variant of their app icon now. I’ll be spending my time transforming the existing double-stroke assets into symbolic ones, but if you are an application author, please look into providing a scalable stencil variant of your app icon as well. Thank you!

March 23, 2015

WebKitGTK+ 2.8.0

We are excited and proud to announce WebKitGTK+ 2.8.0, your favorite web rendering engine, now faster, even more stable and with a bunch of new features and improvements.

Gestures

Touch support is one of the most important features that had been missing since WebKitGTK+ 2.0.0. Thanks to the GTK+ gestures API, it’s now more pleasant to use a WebKitWebView on a touch screen. For now only the basic gestures are implemented: pan (for scrolling by dragging from any point of the WebView), tap (handling clicks with the finger) and zoom (for zooming in/out with two fingers). We plan to add more touch enhancements like kinetic scrolling, overshoot feedback animation, text selections, long press, etc. in future versions.

HTML5 Notifications

notifications

Notifications are transparently supported by WebKitGTK+ now, using libnotify by default. The default implementation can be overridden by applications to use their own notifications system, or simply to disable notifications.

WebView background color

There’s now new API to set the base background color of a WebKitWebView. The given color is used to fill the web view before the actual contents are rendered. This will not have any visible effect if the web page contents set a background color, of course. If the web view's parent window has an RGBA visual, we can even have transparent colors.
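For example, a short sketch of setting the base color from application code (web_view is assumed to be an existing WebKitWebView):

GdkRGBA color;

/* Fill the view with a light grey until the page paints its own background */
gdk_rgba_parse (&color, "#f2f2f2");
webkit_web_view_set_background_color (web_view, &color);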

webkitgtk-2.8-bgcolor

A new WebKitSnapshotOptions flag has also been added to be able to take web view snapshots over a transparent surface, instead of filling the surface with the default background color (opaque white).

User script messages

The communication between the UI process and the Web Extensions is something that we have always left to the users, so that everybody can use their own IPC mechanism. Epiphany and most of the apps use D-Bus for this, and it works perfectly. However, D-Bus is often too much for simple cases where there are only a few messages sent from the Web Extension to the UI process. User script messages make these cases a lot easier to implement and can be used from JavaScript code or using the GObject DOM bindings.

Let’s see how it works with a very simple example:

In the UI process, we register a script message handler using the WebKitUserContentManager and connect to the “script-message-received” signal for the given handler:

webkit_user_content_manager_register_script_message_handler (user_content, 
                                                             "foo");
g_signal_connect (user_content, "script-message-received::foo",
                  G_CALLBACK (foo_message_received_cb), NULL);

Script messages are received in the UI process as a WebKitJavascriptResult:

static void
foo_message_received_cb (WebKitUserContentManager *manager,
                         WebKitJavascriptResult *message,
                         gpointer user_data)
{
        char *message_str;

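        /* get_js_result_as_string() is a helper (not shown here) that extracts
           the string value from the WebKitJavascriptResult using the
           JavaScriptCore API; the caller frees the returned string. */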
        message_str = get_js_result_as_string (message);
        g_print ("Script message received for handler foo: %s\n", message_str);
        g_free (message_str);
}

Sending a message from the web process to the UI process using JavaScript is very easy:

window.webkit.messageHandlers.foo.postMessage("bar");

That will send the message “bar” to the registered foo script message handler. It’s not limited to strings, we can pass any JavaScript value to postMessage() that can be serialized. There’s also a convenient API to send script messages in the GObject DOM bindings API:

webkit_dom_dom_window_webkit_message_handlers_post_message (dom_window, 
                                                            "foo", "bar");

 

Who is playing audio?

WebKitWebView now has a boolean read-only property, is-playing-audio, that is set to TRUE when the web view is playing audio (even if it’s a video) and to FALSE when the audio is stopped. Browsers can use this to provide visual feedback about which tab is playing audio; Epiphany already does that :-)
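A small sketch of how a browser could track the property (web_view is again assumed to be an existing WebKitWebView):

static void
playing_audio_changed_cb (WebKitWebView *web_view,
                          GParamSpec    *pspec,
                          gpointer       user_data)
{
        /* Query the current state and update the tab indicator accordingly */
        if (webkit_web_view_is_playing_audio (web_view))
                g_print ("This view started playing audio\n");
        else
                g_print ("This view went silent\n");
}

g_signal_connect (web_view, "notify::is-playing-audio",
                  G_CALLBACK (playing_audio_changed_cb), NULL);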

ephy-is-playing-audio

HTML5 color input

The color input element is now supported by default, so instead of rendering a text field for manually entering the color as a hexadecimal color code, WebKit now renders a color button that, when clicked, shows a GTK+ color chooser dialog. As usual, the public API allows overriding the default implementation to use your own color chooser; MiniBrowser uses a popover, for example.

mb-color-input-popover

APNG

APNG (Animated PNG) is a PNG extension for creating animated PNGs, similar to GIF but much better, supporting 24-bit images and transparency. Since 2.8, WebKitGTK+ can render APNG files. You can check how it works with the Mozilla demos.

webkitgtk-2.8-apng

SSL

The POODLE vulnerability fix introduced compatibility problems with some websites when establishing the SSL connection. Those problems were actually server-side issues that incorrectly banned SSL 3.0 record packet versions, but they could be worked around in WebKitGTK+.

WebKitGTK+ already provided a WebKitWebView signal to notify about TLS errors when loading, but only for the connection of the main resource in the main frame. However, it’s still possible for subresources to fail due to TLS errors when using a connection different from the main resource's. WebKitGTK+ 2.8 gained the WebKitWebResource::failed-with-tls-errors signal to notify when a subresource load fails because of an invalid certificate.

Ciphersuites based on RC4 are now disallowed when performing TLS negotiation, because it is no longer considered secure.

Performance: bmalloc and concurrent JIT

bmalloc is a new memory allocator added to WebKit to replace TCMalloc. Apple had already used it in the Mac and iOS ports for some time with very good results, but it needed some tweaks to work on Linux. WebKitGTK+ 2.8 now also uses bmalloc which drastically improved the overall performance.

Concurrent JIT had not been enabled in the GTK+ (and EFL) ports for no apparent reason. Enabling it also had an impressive impact on performance.

Both performance improvements were very noticeable in the performance bot:

webkitgtk-2.8-perf

 

The first jump on 11th Feb corresponds to the bmalloc switch, while the other jump on 25th Feb is when concurrent JIT was enabled.

Plans for 2.10

WebKitGTK+ 2.8 is an awesome release, but the plans for 2.10 are quite promising.

  • More security: mixed content for most resource types will be blocked by default. New API will be provided for managing mixed content.
  • Sandboxing: seccomp filters will be used in the different secondary processes.
  • More performance: FTL will be enabled in JavaScriptCore by default.
  • Even more performance: this time in the graphics side, by using the threaded compositor.
  • Blocking plugins API: new API to provide full control over the plugin load process, allowing plugins to be blocked/unblocked individually.
  • Implementation of the Database process: to bring back IndexedDB support.
  • Editing API: full editing API to allow using a WebView in editable mode with all editing capabilities.

March 20, 2015

GStreamer Hackfest 2015

Last weekend I visited my former office in (lovely) Staines-upon-Thames (UK) to attend the GStreamer hackfest 2015, along with ~30 other hackers from all over the world.

This was my very first GStreamer hackfest ever and it was definitely a great experience, although at the beginning I was really not convinced to attend since, after all, why bother attending an event about something I have no clue about?

But the answer turned out to be easy in the end, once I actually thought a bit about it: it would be a good opportunity both to learn more about the project and to meet people in real life (old friends included), making the most of it happening 15min away from my house. So, I went there.

And in the end it was a quite productive and useful weekend: I might not be an expert by now, but at least I broke the barrier of getting started with the project, which is already a good thing.

And even better, I managed to move forward a patch to fix a bug in PulseAudio that I found last December while fixing a downstream issue as part of my job at Endless. Back then, I did not have the time nor the knowledge to write a proper patch that could really go upstream, so I focused on fixing the problem at hand in our platform. But I always felt the need to sit down and cook up a proper patch, and this event proved to be the perfect time and place to do that.

Now, thanks to the hackfest (and to Arun Raghavan in particular, thanks!), I’m quite happy to see that the right patch might be on its way to be applied upstream. Could not be happier about it! :)

Lastly, I’d like to thank Samsung’s OSG, and especially Luis, for doing a cracking job of making sure that everything ran smoothly from beginning to end. Thanks!

"GNOME à 15 ans" aux JdLL de Lyon



Next weekend, I will be giving a short presentation on GNOME's fifteen years at the JdLL.

If the delivery gods are kind, GNOME should also have a presence in the community village.

Code Review: Microsoft's System.Net.Mail Implementation

For those reading my blog for the first time who don't know who I am, allow myself to introduce... myself.

I'm a self-proclaimed expert on the topic of email, specifically MIME, IMAP, SMTP, and POP3. I don't proclaim myself to be an expert on much, but email is something that maybe 1 or 2 dozen people in the world could probably get away with saying they know more than I do and actually back it up. I've got a lot of experience writing email software over the past 15 years and rarely do I come across mail software that does things better than I've done them. I'm also a critic of mail software design and implementation.

My latest endeavors in the email space are MimeKit and MailKit, both of which are open source and available on GitHub for your perusal should you doubt my expertise.

My point is: I think my review carries some weight, or I wouldn't be writing this.

Is that egotistical of me? Maybe a little.

I was actually just fixing a bug in MimeKit earlier and when I went to go examine Mono's System.Net.Mail.MailMessage implementation in order to figure out what the problem was with my System.Net.Mail.MailMessage to MimeKit.MimeMessage conversion, I thought, "hey, wait a minute... didn't Microsoft just recently release their BCL source code?" So I ended up taking a look and pretty quickly confirmed my suspicions and was able to fix the bug.

When I begin looking at the source code for another mail library, I can't help but critique what I find.

MailAddress and MailAddressCollection


Parsing email addresses is probably the hardest thing to get right. It's what I would say makes or breaks a library (literally). To a casual onlooker, parsing email addresses probably seems like a trivial problem. "Just String.Split() on comma and then look for those angle bracket thingies and you're done, right?" Oh God, oh God, make the hurting stop. I need to stop here before I go into a long rant about this...

Okay, I'm back. Blood pressure has subsided.

Looking at MailAddressParser.cs (the internal parser used by MailAddressCollection), I'm actually pleasantly surprised. It actually looks pretty decent and I can tell that a lot of thought and care went into it. They actually use a tokenizer approach. Interestingly, they parse the string in reverse which is a pretty good idea, I must say. This approach probably helps simplify the parser logic a bit because parsing forward makes it difficult to know what the tokens belong to (is it the name token? or is it the local-part of an addr-spec? hard to know until I consume a few more tokens...).

For example, consider the following BNF grammar:

address         =       mailbox / group
mailbox         =       name-addr / addr-spec
name-addr       =       [display-name] angle-addr
angle-addr      =       [CFWS] "<" addr-spec ">" [CFWS] / obs-angle-addr
group           =       display-name ":" [mailbox-list / CFWS] ";"
                        [CFWS]
display-name    =       phrase
word            =       atom / quoted-string
phrase          =       1*word / obs-phrase
addr-spec       =       local-part "@" domain
local-part      =       dot-atom / quoted-string / obs-local-part
domain          =       dot-atom / domain-literal / obs-domain
obs-local-part  =       word *("." word)

Now consider the following email address: "Joe Example" <joe@example.com>

The first token you read will be "Joe Example" and you might think that that token indicates that it is the display name, but it doesn't. All you know is that you've got a 'quoted-string' token. A 'quoted-string' can be part of a 'phrase' or it can be (a part of) the 'local-part' of the address itself. You must read at least 1 more token before you'll be able to figure out what it actually is ('obs-local-part' makes things slightly more difficult). In this case, you'll get a '<' which indicates the start of an 'angle-addr', allowing you to assume that the 'quoted-string' you just got is indeed the 'display-name'.

If, however, you parse the address in reverse, things become a little simpler because you know immediately what to expect the next token to be a part of.

That's pretty cool. Kudos to the Microsoft engineers for thinking up this strategy.

Unfortunately, the parser does not handle the 'group' address type. I'll let this slide, however, partly because I'm still impressed by the approach the address parser took and also because I realize that System.Net.Mail is meant for creating and sending new messages, not parsing existing messages from the wild.

Okay, so how well does it serialize MailAddress?

Ugh. You know that face you make when you just see a guy get kicked in the nuts? Yea, that's the face I made when I saw line #227:

encodedAddress = String.Format(CultureInfo.InvariantCulture, "\"{0}\"", this.displayName);

The problem with the above code (and I'll soon be submitting a bug report about this) is that the displayName string might have embedded double quotes in it. You can't just surround it with quotes and expect it to work. This is the same mistake all those programmers make that allow SQL-injection attacks.

For an example of how this should be done, see MimeKit's MimeUtils.Quote() method.
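The fix is conceptually simple: escape any embedded backslashes and double quotes before wrapping the display name in quotes. Here's a rough sketch of the idea in C (MimeKit's actual MimeUtils.Quote() is C# and handles more cases; this is just to show the shape of the fix):

#include <stdlib.h>
#include <string.h>

/* Quote a display name for use in an address header, escaping embedded
   backslashes and double quotes. The caller must free the result. */
static char *
quote_display_name (const char *name)
{
        size_t len = strlen (name);
        char *quoted = malloc (len * 2 + 3);  /* worst case: every char escaped, plus quotes and NUL */
        char *p = quoted;
        const char *s;

        *p++ = '"';
        for (s = name; *s; s++) {
                if (*s == '"' || *s == '\\')
                        *p++ = '\\';
                *p++ = *s;
        }
        *p++ = '"';
        *p = '\0';

        return quoted;
}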

I had such high hopes... at least this is a fairly simple bug to fix. I'll probably just offer them a patch.

ContentType and ContentDisposition


Their parser is decent but it doesn't handle rfc2231 encoded parameter values, so I'm not overly impressed. It'll get the job done for simple name="value" parameter syntax, though, and it will decode the values encoded with the rfc2047 encoding scheme (which is not the right way to encode values, but it is common enough that any serious parser should handle it). The code is also pretty clean and uses a tokenizer approach, so that's a plus. I guess since this isn't really meant as a full-blown MIME parser, they can get away with this and not have it be a big deal. Fair enough.

Serialization, unsurprisingly, leaves a lot to be desired. Parameter values are, as I expected, encoded using rfc2047 syntax rather than the IETF standard rfc2231 syntax. I suppose that you could argue that this is for compatibility, but really it's just perpetuating bad practices. It also means that it can't properly fold long parameter values because the encoded value just becomes one big long encoded-word token. Yuck.

Base64


Amusingly, Microsoft does not use their Convert.FromBase64() decoder to decode base64 in their System.Net.Mail implementation. I point this out mostly because it is the single most common problem users have with every one of the Open Source .NET mail libraries out there (other than MimeKit, of course) because Convert.FromBase64() relies on the data not having any line breaks, white space, etc in the input stream.

This should serve as a big hint to you guys writing your own .NET email libraries not to use Convert.FromBase64() ;-)
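Since this trips up so many libraries, here is a tiny sketch of what a whitespace-tolerant, streaming base64 decoder looks like. It is only an illustration (not MimeKit's or Microsoft's actual code), but it shows why a small state machine handles line breaks and arbitrary chunking effortlessly:

#include <stddef.h>

typedef struct {
        unsigned int accum;   /* pending input bits */
        int nbits;            /* number of valid bits in accum */
} b64_state;

static int
b64_value (unsigned char c)
{
        if (c >= 'A' && c <= 'Z') return c - 'A';
        if (c >= 'a' && c <= 'z') return c - 'a' + 26;
        if (c >= '0' && c <= '9') return c - '0' + 52;
        if (c == '+') return 62;
        if (c == '/') return 63;
        return -1;            /* '=', CRLF, spaces or anything else: ignore */
}

/* Decode a chunk of base64 input; returns the number of bytes written to out,
   which must have room for at least (inlen * 3) / 4 + 3 bytes. The state is
   carried across calls, so input can be fed in arbitrary pieces. */
static size_t
b64_decode_chunk (b64_state *state, const unsigned char *in, size_t inlen,
                  unsigned char *out)
{
        size_t i, outlen = 0;

        for (i = 0; i < inlen; i++) {
                int v = b64_value (in[i]);

                if (v < 0)
                        continue;      /* skip line breaks, padding, stray spaces */

                state->accum = (state->accum << 6) | (unsigned int) v;
                state->nbits += 6;

                if (state->nbits >= 8) {
                        state->nbits -= 8;
                        out[outlen++] = (state->accum >> state->nbits) & 0xff;
                }
        }

        return outlen;
}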

They use unsafe pointers, just like I do in MimeKit, but I'm not sure how their performance compares to MimeKit's yet. They do use a state machine, though, so rock on.

I approve this base64 encoder/decoder implementation.

SmtpClient


One thing they do which is pretty cool is connection pooling. This is probably a pretty decent win for the types of things developers usually use System.Net.Mail's SmtpClient for (spam, anyone?).

The SASL AUTH mechanisms that they seem to support are NTLM, GSSAPI, LOGIN and WDIGEST (which apparently is some sort of IIS-specific authentication mechanism that I had never heard of until now). For those that were curious which SASL mechanisms SmtpClient supported, well, now you know.

The code is a bit hard to follow for someone not familiar with the codebase (not nearly as easy reading as the address or content-type parsers, I'm afraid), but it seems fairly well designed.

It does not appear to support PIPELINING or BINARYMIME like MailKit does, though. So, yay! Win for MailKit ;-)

They do support SMTPUTF8, so that's good.

It seems that if you set client.EnableSsl to true, it will also try STARTTLS if it isn't able to connect on the SSL port. I wasn't sure if it did that or not before, so this was something I was personally interested in knowing.

Hopefully my SmtpClient implementation review isn't too disappointing. I just don't know what to say about it, really. It's a pretty straight-forward send-command-wait-for-reply implementation and SMTP is pretty dead simple.

Conclusion


Overall the bits I was interested in were better than I expected they'd be. The parsers were pretty good (although incomplete) and the serializers were "good enough" for normal use.

Of course, it's not as good as MimeKit, but let's be honest, MimeKit sets the bar pretty high ;-)

March 19, 2015

I organize, therefore I am! – GNOME PERU FEST 2015

The GNOME PERU FEST 2015 event took place last Friday, March 13th, in Centro Cultural PetroPerú. Special thanks to the GNOME Foundation for sponsoring us all again, as well as to Fedora, Infopucp, La Bouquette, Nexsys, PetroPerú and IBM.

The event was announced on Eventbrite, in the local newspaper La República, on the IBM Peru Twitter account, on IBM's Facebook page, on the GNOME wiki, the Fedora wiki and this website (in a language I do not know).

We started with welcome words from our manager Patricia Di Negro. Then Federico Mena explained to us what GNOME is, and Laura Castro talked about the GNOME community and the OPW and GSoC programs together with Marina, Valentín Barros (student – Spot project), Marcos Chavarría (student – GNOMECAT project), Fabián Orccón (student – Pitivi project) and Patricia Santana Cruz (student – Cheese project), who summed up these talks and what people involved with GSoC and GNOME share: a “GNOME vocation”. We expect to see some applications from Peru to these programs, and we encourage people to read these GSoC – GNOME sites: 1, 2, 3, 4, 5.

Fedora was also present with Alejandro Perez and Jonathan Campos (Fedora LATAM Ambassadors), and I had already received a proposal from a Peruvian entrepreneur to give a Fedora workshop in Lima. We still have more Fedora DVDs to install and stickers to spread the Fedora – GNOME word.

During his talk, Alex Aragón gave a demonstration of Blender, using the GNOME logo and a chain and making them move. After that, everyone took a box lunch, and we shared them after taking our group photo. You can see more photos of the event by clicking here.

The “GNU/Linux in enterprises” afternoon followed:

  • IBM Perú presented SAP on Linux, by Carol Romani and Christian Chancafe.
  • Nexsys Perú presented Power on Linux, by Wilder Mendoza.
  • Watson IBM presented academic achievement, by Sergio Sotelo.
  • Neosecure presented Linux security in business, by Juan Pablo Quiñe.
  • PetroPerú presented The success of Linux in business, by Xavier Sánchez.

We only have a few tweets with the #GNOMEPERUFEST2015 hashtag because we did not share the WiFi password with anybody (this is a dilemma for me: with WiFi you can get more attention from attendees, but then they cannot publish what is going on during the event). A video camera was on during the whole day, and I hope to get an extract of it soon! :)



March 17, 2015

Bye bye Collabora

Seven years ago, immediately after finishing my master’s degree, I visited Cambridge for an “interview” with Collabora. I was hired and, shortly afterwards, I moved to Cambridge.

It has been seven great years since then, even if there were some low points, like when Nokia cancelled some of their projects.
At Collabora I had the opportunity to learn a lot of new things and to work with a lot of incredibly competent and smart people. Despite this, after all this time, I felt like I wanted a little change, but not enough to start looking for another job and risk losing all the good things I had here at Collabora.

Recently, another company got in touch with me and offered me a job. The projects they work on are very interesting and the people there seem great (and, in many ways, similar to the people at Collabora). It was a difficult decision, but I decided to accept.

Today was my last day at Collabora. Thanks to everybody that I’ve met while working there! It was great!
Next week I will start working for Bromium!

(By the way, Collabora is hiring.)

Introducing ColorHug ALS

Ambient light sensors let us change the laptop panel brightness so that you can still see your screen when it’s sunny outside, but we can dim it when the ambient room light level is lower to save power.

colorhug-als1-large

I’ve spent a bit of time over the last few months designing a small OpenHardware USB device that acts as an ambient light sensor. It’s basically an uncalibrated ColorHug1 design with a less powerful processor, but it speaks a subset of the same protocol, so all the firmware update and test tools just work out of the box.

colorhug-als2-large

The sensor itself is a very small (12x22mm) printed circuit board that inserts directly into a spare USB socket. It only sticks out about 9mm from the edge of the laptop as most of the PCB actually gets pushed into the USB slot.

colorhug-als-pcb-large

ColorHugALS can currently control the backlight when running the colorhug-backlight utility. The Up/Down header buttons do the same as the hardware BrightnessUp and BrightnessDown keys. You can still set the backlight manually, so you’re in control of the absolute level right now, with the ALS modifying the level either side of what you just set over the coming minutes. The brightness is modified using an exponential moving average, which makes the brightness changes smooth and unnoticeable on hardware with enough brightness levels.
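
To make the smoothing concrete, here is a minimal Python sketch of an exponential moving average applied to brightness; the mapping from lux to a percentage and the 0.2 smoothing factor are my own illustration, not the actual colorhug-backlight code.

def smooth_backlight(samples, alpha=0.2, start=50.0):
    """Yield smoothed backlight percentages for a stream of sensor readings."""
    level = start                                     # whatever the user set manually
    for lux in samples:
        target = min(100.0, lux / 10.0)               # map the sensor reading to 0-100% (assumed)
        level = alpha * target + (1 - alpha) * level  # exponential moving average
        yield round(level)

print(list(smooth_backlight([300, 320, 800, 820, 100])))

A small alpha means the backlight drifts gently towards the ambient level instead of jumping, which is why the changes go unnoticed on panels with enough brightness steps.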

colorhug-backlight-large

We also take the brightness value at startup to be what you consider “normal”, so the algorithm tries to stay out of the way. Once we’ve got some defaults that work well and have been tested, the aim is to push this into gnome-control-center and gnome-settings-daemon for GNOME 3.18 so that no additional software is required.

I’ve got 42 devices in stock now. Buy one here!

How do you upgrade your distro? A tale of two workarounds

Every classic Linux user knows why it's very handy to dedicate a separate partition to the /home folder of your tree: in theory you can share it between the multiple OSs installed in your box (whichever one you choose to run when you start your computer).

Now, I'm guessing that many people reading and nodding to the above will also know that sharing /home/ is one thing, while sharing $HOME (/home/yourUserName) is a completely different beast.

For example: you have a stable distro installed in your box; you decide to install a new version of that distro along the old one, in the same box. You run the new distro with a new account tied to the old /home/yourUserName folder: KABOOM!!! Weird things start happening. Among these:

  • The newer versions of your desktop or desktop programs don't run properly with the settings saved in your .dotDirectories (they are to blame because they probably didn't have a settings-conversion feature).
  • The newer versions of your desktop or desktop programs have a buggy settings-conversion feature, so your program does not run properly, or at least not as well as it would have if it had been run for the first time with no saved settings at all.
  • The newer versions of your non-buggy desktop or desktop programs convert your settings to a new format. Then, when you go back and run your old distro again, your old-versioned programs stop working because they see settings in a new format which they don't understand. (This is very hard, if not impossible, to fix.) It's very important that this scenario works, because the migration to the new version of your distro may not be immediate; it may take you some days to figure everything out, and until that happens, you want to still be able to run the stable version of your desktop and desktop programs.
  • Etc.

To work around these problems, I have a strategy: I use a different /home/ sub-directory for each distro installed in my system. For example, for distro X version A.B I use /home/knocteXAB/, and for distro Y version C.D I use /home/knocteYCD/. The advantage of this is that you can migrate your settings manually and at your own pace. But then, you may be asking, how do you really take advantage of sharing the /home folder when using this technique?

Easy: I keep non-settings data (mainly the non-dotfiles) in a different /home/ folder with no associated account in any of the distros. For example: /home/knocte/ (no version suffix). Then, from each of the suffixed /home/ subfolders, I set up symlinks to this other folder, setting the appropriate permissions. For instance (a small sketch in code follows the list):

  • /home/knocteXAB/Music -> /home/knocte/Music
  • /home/knocteXAB/Documents -> /home/knocte/Documents
  • /home/knocteYCD/Music -> /home/knocte/Music
  • /home/knocteYCD/Documents -> /home/knocte/Documents
  • Etc.
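
Here is a minimal Python sketch of that layout (the paths are the examples above; this is an illustration, not a tool I actually ship):

import os

shared = "/home/knocte"                          # versionless folder holding the real data
per_distro = ["/home/knocteXAB", "/home/knocteYCD"]

for home in per_distro:
    for folder in ("Music", "Documents"):
        link = os.path.join(home, folder)
        target = os.path.join(shared, folder)
        if not os.path.islink(link):             # skip links that already exist
            os.symlink(target, link)
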
You may think that it's an interesting strategy and that I'm done with the blog post; however, when using this strategy you may start finding buggy applications that don't deal very well with symlinked paths. The one I found which annoyed me the most was my favourite GNOME IDE, because it meant I couldn't develop software without problems. I mean, they were not just cosmetic problems, really.

So I had to use a workaround for my workaround: clone all my projects in $HOME instead of /home/knocte/Documents/Code/OpenSource/ (yah, I'm this organized ;) ).

I've been trying to fix these problems for a while, without much time on my hands.

But in the last few weeks a magical thing happened: I finally decided to sit down and try to fix the last two remaining issues, and my patches were all accepted and merged last week (at least all the ones fixing symlink-related problems), woo!!!

So the lessons to learn here are:

  • Even the slickest workarounds have problems. Try to fix or report settings-conversion bugs!!
  • Don't ever quit trying to fix a problem. Some day you'll have the solution and you will realize it was simpler than you thought.
  • realpath is your friend.
  • MonoDevelop (master branch) is now less buggy and as amazing as (or more than) ever (</PUBLIC_SERVICE_ANNOUNCEMENT>).

March 16, 2015

Preview of GNOME usability results

I have been mentoring Sanskriti Dawle as part of the GNOME Outreach Program for Women. Sanskriti has been working on a usability test of GNOME, an update to my own usability testing, which I shared at GUADEC 2014.

I encourage you to watch Sanskriti's blog for the final results, but I wanted to share a view into her excellent work. You might treat this as a preview of Sanskriti's results.

Sanskriti's usability test included roughly equal numbers of men and women, about equally divided between "high" and "low" mobile OS exposure (but none at "moderate," which is interesting). Most were Windows OS users, and the testers were equally distributed in expertise. Her test participants had a good distribution of age groups. Click to view a larger version of each chart:


This is important, because GNOME wants to be useful to what I call "average users with average experience." And Sanskriti's usability test participants represent that.

These testers required an average of 38 minutes (±11 minutes) to do all scenario tasks in the full usability test. The median test time was 37.5 minutes. Click to see a larger version of the chart:


I asked Sanskriti to use the heat map method to display her usability test results. This is a method I developed during my master's program research, and refined in later usability testing. In a usability heat map, each scenario task in the usability test is displayed as a row, and each tester is shown as a column. Each tester's experience for every scenario task is represented using a colored block: green if the tester completed this task successfully with little or no problems, orange if the tester experienced some problems but was still able to complete the task successfully, red if the tester encountered great difficulty but still completed the task, and black if the tester was completely unable to finish the task.
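
As a sketch of how such a heat map can be rendered (this is my own illustration with made-up data, not Sanskriti's code), a task-by-tester matrix of difficulty codes maps directly onto a colored grid:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# 0 = little or no problems, 1 = some problems, 2 = great difficulty, 3 = unable to finish
results = np.array([
    [0, 0, 1, 0],   # task 1, testers 1-4
    [2, 1, 3, 2],   # task 2
    [1, 0, 2, 1],   # task 3
])
cmap = ListedColormap(["green", "orange", "red", "black"])
plt.imshow(results, cmap=cmap, vmin=0, vmax=3, aspect="auto")
plt.yticks(range(results.shape[0]), ["Task 1", "Task 2", "Task 3"])
plt.xticks(range(results.shape[1]), ["T1", "T2", "T3", "T4"])
plt.xlabel("tester")
plt.savefig("heatmap.png")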

Sanskriti's heat map shows some very useful and interesting results. Click to view a larger version of Sanskriti's data:


I can make a few initial observations from this data. Looks like testers had the most difficulty with tasks Gedit.6 and Photos.3 and Photos.4, with noticeable difficulty in tasks Notes.1 and Photos.2. There's some interesting data around tasks Gedit.1 and Music.1 that might reflect testers 9, 11, and 12.

This is only a preview of Sanskriti's results. I encourage you to watch Sanskriti's blog for the final results, which I hope to see in the next week as she wraps up her work in the internship.
image: Outreach Program for Women

March 14, 2015

LinuxFoundationX: LFS101x.2 Introduction to Linux

LinuxFoundationX: LFS101x.2 Introduction to Linux

A few days back I took this wonderful MOOC on http://www.edx.org. The best thing about the course is that it uses GNOME as its graphical desktop, which makes learning `Introduction to Linux` interesting.

Screenshot from 2015-03-14 20:03:19

Three Linux distribution families are explicitly covered in this course: CentOS (Fedora family), openSUSE (SUSE family) and Ubuntu (Debian family). KDE comes with openSUSE by default, but for illustration purposes the single desktop environment GNOME is used on all three distributions.

For a newbie this course is highly recommended. The course content is exceptionally good. It has online exercises for hands-on and simple questions at the end of every session.

Following is my Honor Code Certificate :) , yay!!:

Screenshot from 2015-03-14 18:55:56

I scored 97% as my final grade ;). Though the test was very easy, I made a mistake answering one question.

Screenshot from 2015-03-14 18:49:45  Screenshot from 2015-03-14 19:54:50


March 12, 2015

Stop using RC4

A follow up of my previous post: in response to my letter, NIST is going to increase the CVSS score of CVE-2013-2566 (RC4) to match CVE-2011-3389 (BEAST). Yay!

In other news, WebKitGTK+ 2.8 has full support for RFC 7465. That’s a fancy way of saying that we will no longer negotiate RC4 connections and you will now be unable to access the small minority of HTTPS sites that offer nothing but RC4. Hopefully other browsers will follow along sooner rather than later. In particular, Firefox nightly has stopped negotiating RC4 except for a few whitelisted sites: I would very much like to see that whitelist removed. Internet Explorer has stopped negotiating RC4 except when it performs voluntary protocol version fallback. It would be great to see a firmer stance from Mozilla and Microsoft, and some action from Google and Apple.

Colorado Mozillians Meetup!

Meet the Colorado Mozillians …

We spent the day co-working at the Boulder Hub.


Tom Tromey (Developer Tools), Justin Crawford (MDN), Teri Charles (Web QA)
Stormy Peters (MDN), Chuck Harmston (Marketplace)

 

 

Audi Quattro

Winter is definitely losing its battle, and last weekend we had some fun filming with my new folding Xu Gong v2 quad.

Audi Quattro from jimmac on Vimeo.

Vendors continue to break things

Getting on for seven years ago, I wrote an article on why the Linux kernel responds "False" to _OSI("Linux"). This week I discovered that vendors were making use of another behavioural difference between Linux and Windows to change the behaviour of their firmware and breaking things in the process.

The ACPI spec defines the _REV object as evaluating "to the revision of the ACPI Specification that the specified \_OS implements as a DWORD. Larger values are newer revisions of the ACPI specification", ie you reference _REV and you get back the version of the spec that the OS implements. Linux returns 5 for this, because Linux (broadly) implements ACPI 5.0, and Windows returns 2 because fuck you that's why[1].

(An aside: To be fair, Windows maybe has kind of an argument here because the spec explicitly says "The revision of the ACPI Specification that the specified \_OS implements" and all modern versions of Windows still claim to be Windows NT in \_OS and eh you can kind of make an argument that NT in the form of 2000 implemented ACPI 2.0 so handwave)

This would all be fine except firmware vendors appear to earnestly believe that they should ensure that their platforms work correctly with RHEL 5 even though there aren't any drivers for anything in their hardware and so are looking for ways to identify that they're on Linux so they can just randomly break various bits of functionality. I've now found two systems (an HP and a Dell) that check the value of _REV. The HP checks whether it's 3 or 5 and, if so, behaves like an old version of Windows and reports fewer backlight values and so on. The Dell checks whether it's 5 and, if so, leaves the sound hardware in a strange partially configured state.

And so, as a result, I've posted this patch which sets _REV to 2 on X86 systems because every single more subtle alternative leaves things in a state where vendors can just find another way to break things.

[1] Verified by hacking qemu's DSDT to make _REV calls at various points and dump the output to the debug console - I haven't found a single scenario where modern Windows returns something other than "2"

comment count unavailable comments

March 11, 2015

Blogified old AI documents

Back in 199something, I wrote some simple notes about some very basic undergraduate-level AI reading I was doing. I’ve just moved those HTML documents into back-dated WordPress entries. They still interest me and I wish I’d taken it further.

March 10, 2015

That's a wrap

I've been trying to figure out what to say to summarize my experience with OPW and GNOME. Speaking and writing about these things is never easy, since they have a tendency to come out as synthetic, cheesy, and runny as the nacho cheese sauce one finds at your local ball park. However, I feel very grateful to have been chosen to participate. In all honesty, I have to wonder what was developed more, the Keysign application or myself. The experience has added so much to my personal programming toolbox, from specific programming tools to a broader knowledge of FOSS.

Similar to many others, I had tried to find an appropriate place to begin contributing to FOSS before starting the OPW program. I still have a myriad of bookmarked posts and websites all devoted to getting involved in open source. Its pervasive nature as a topic suggests that most people encounter similar barriers when trying to find a good starting point. Fortunately, once you gain some momentum, it is so much easier to conserve or transfer that energy to a parallel aspect of a current project or to a completely new undertaking. This, perhaps, is the broader implication of my experience. I now have that momentum. I would like to give a big thank you to Tobias and Marina for answering all of my questions and fostering a supportive environment. I would also like to thank the GNOME community for making OPW possible.





March 09, 2015

OPW Retrospective

Three months later, I’m done with my OPW internship with GNOME Music.

I’ve learned that open-source contributing isn’t as scary and impossible as it once seemed, and that IRC is full of nice people who are happy to help! But I’ve also learned that diving into a new codebase is challenging, at best, and nearly impossible at worst. pdb, for x in dir(foo): print(x), traceback.print_stack(), and inspect.getargspec(myfunc) have become good friends of mine in the past three months. Good documentation, it turns out, is essential—the project I was working on had very little, and the libraries it utilized (at least, the Python wrappers for those libraries) were similarly sketchily documented, and all this made learning the code waaaaay tougher than it needed to be.
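
For anyone who has not leaned on those helpers before, here is a toy illustration (not GNOME Music code) of the introspection tools named above:

import inspect
import traceback

def play(album, shuffle=False):
    traceback.print_stack()        # where was this called from?
    for name in dir(album):        # what attributes and methods does this object expose?
        print(name)

print(inspect.getargspec(play))    # what arguments does it take?
                                   # (removed in Python 3.11; use inspect.signature there)
play("Nevermind")
# ...and `import pdb; pdb.set_trace()` drops you into an interactive debugger at that line.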

I’ve learned how much I benefit from writing things down. I took notes when I was learning SPARQL, and color-coded my questions so I could come back to them later. I kept track of all the weird dependency problems I ran into during my build—which turned out to be way more useful than I’d even imagined when I ended up having to JHBuild my program all over again on a different machine. I made a to-do list at the beginning of every work day, with a macro-goal or two and manageable sub-steps that I could check off as I did them, to keep myself on-task and non-overwhelmed. I kept a list of bugs as I ran into them so I didn’t get side-tracked—I could finish the task at hand, then go back and report all the intervening bugs at once. Similarly, I tried my best to jot down any real-world distractions—like that email I needed to send, that form I needed to mail…—so that they were out of my head and on paper, and I could come back to them whenever I was done with work.

I’ve learned how important task-setting and accountability is to me. If I don’t know what task I’m doing at any given time, I have the potential to just mess around in the source code for a good few hours without really getting anything done. Having “Bug X: Put Y in the Z” written at the top of my notebook, or telling my mentor Vadim, “Okay, I’m gonna work on foo today,” clarified my purpose and kept me from trying to do 5 things at once, or from doing nothing at all.

I’ve learned that working from home is hard! I need to be pretty firm with myself to get out of bed in the morning, shower, eat a good breakfast, and sit down to work, instead of lazing around in bed with a book for ages and messing around on Facebook. But for the most part, I managed to stick to the rules I laid out for myself at the beginning of all of this, to great positive effect. (Having codified rules definitely helps!) (Disabling Facebook on my work machine, and disabling my Facebook newsfeed on my personal machine, have both been very good choices as far as my productivity goes.) Having a space that is my work space (my desk, with an external monitor) helps a lot—if I find myself distracted, I get up, wander away, and take a break, and come back when I can actually think straight.

I’ve learned that the toughest part of any project tends to come right at the very beginning: the system set-up is always stupidly, grossly difficult, and learning a new codebase (as I mentioned above) is hard! For the first month, I felt like I’d hardly accomplished anything—I could barely write patches because I was still trying to figure out how everything worked.

And of course, I learned a bunch of concrete skills. I know a fair bit (though by no means everything) about how GNOME Music works, I know about Tracker and SPARQL, I know a little something now about open-source workflow, bug reporting, and bugzilla, and I’ve acquired some new git-fu by necessity (since I’m working on a multi-contributor project, need to make my commits as self-contained as possible, etc.). I’m so grateful to the GNOME Foundation and to OPW for this fabulous opportunity, and to Vadim Rutkovsky for being the most friendly, helpful, and laid-back mentor I could have hoped for!

What’s next? At the end of the month, I’ll be starting as a software engineer at Spring, a mobile shopping app startup based in NYC. I’m beyond psyched to join their team, and to put to use all of the skills I’ve gotten from OPW. And I’ll certainly stick around GNOME Music/GNOME/the open-source community—you’ll probably see me poking my head into various IRC channels from time to time (nick = maiamcc). If anyone wants to talk about GNOME Music, or OPW, or SPARQL, or anything, really: come say hi!

More about Nuntius

On Friday Paolo Borelli introduced Nuntius, the small project we’ve been working on to display android notifications on a GNOME desktop.

The feedback we got in these days has been great and it is energizing to receive comments, suggestions, bug reports and even patches! Thank you!
It is probably a good idea to clarify some things and reply to some frequent questions we received.

First of all we want to stress again that what is available today is basically a prototype: as soon as we had something able to connect and show the notifications we went ahead and published it, because we think “release early, release often” is the best strategy when it comes to free software. Until yesterday it was not even set up to be translated, though that is now fixed (hint hint!).

  • Scope: we think better integration between GNOME and smartphone is a very interesting area where a lot of features could be added. People suggested being able to send SMS from your desktop application, being able to answer phone calls through a headset connected to the PC, being able to send notifications to the phone when some events occur, etc. These are all things which I would love to see in GNOME, but I am not sure they belong in Nuntius: for now we think we should focus on one thing: showing phone notifications. Should these other features be part of Nuntius? Should they be programs that complement Nuntius? Should they be part of a larger effort that supersedes Nuntius? Just jump in and help us out to shape the answer to these questions!
  • Bluetooth: we selected bluetooth instead of wifi because it seemed a natural fit for the task: it gives us discovery and pairing of devices out of the box, it gives us a meaningful concept of “proximity” and in our understanding it uses way less power (we do not have hard data, but we keep Nuntius running on our phone and we do not see it show up among the top apps using the battery). The fact that wearables like smart watches use bluetooth seemed to validate our choice. With that said, we have not ruled out also using wifi and help in that direction is more than welcome: for instance we also hit a technical snag in our initial test using plain TCP: the communication would be interrupted once the phone suspends… I am pretty sure this is solvable, but we did not investigate.
  • Why not KDE-connect? the very simple answer is that we did not know about it… Nuntius is a fun project born in front of the coffee machine because we wanted to learn something about android and at the same time do something that we would use every day. We did not spend much time researching existing things, we just went ahead, fired up an editor and started a prototype. Now that we do know about it, we will surely check it out… that’s the beauty of free software!
  • Why not a standard protocol? Once again we did not research much, so I would not be surprised if we missed something and if you have any pointers they are very welcome… Our understanding is that there is a Bluetooth profile for notifications, but glancing at the spec, it is much more limited than what we need (e.g. it tells you how many notifications you have, but not the content)
  • What about iOS? Once again the answer is surprisingly simple: we do not have an iOS device, but if someone wants to start a nuntius-ios project, we would love to integrate it

March 06, 2015

Announcing the announcer

You are sitting in front of the PC busy coding the next great GNOME application or more likely watching funny cat pictures and you hear your cat purr… oh wait, you do not have a cat, so you figure out it is your phone vibrating so you start looking for it among all the mess that’s on your desk. When you finally find it you can read the Whatsapp message or SMS you just received and now you would like to reply sending that funny cat picture you had open in the browser on your PC…

 

Despair no more, the lobster is here to help you! Nacho, Kurt and I started a new side project called Nuntius which lets you read notifications from your android phone directly on your beautiful GNOME desktop. This is going to be even better with GNOME 3.16 and its redesigned notification system.

Both the android application and the GNOME application are free software and are available on github, but the simplest way to try it is to install the android application from the Google Play Store, while the linux application is already available in Fedora and packaging for any other distribution is more than welcome.

Nuntius uses bluetooth to communicate, this is not only a technological choice, but also a design one: notifications will be sent to the PC only when you are nearby and your messages stay local and private and will not be sent to “teh cloud”.

In the best tradition of free software, this is a very early release with just the bare minimum functionality (for instance replying directly to a message from a GNOME notification is not implemented yet) and we welcome any feedback and help.

March 04, 2015

The Joys of SPARQL: An RDF Query Language

I’ve been working with SPARQL a bunch for my OPW project, and found it very slow going at first. SPARQL is apparently one of those little-loved languages that doesn’t have much in the way of tutorials or lay-speak explanations online—pretty much all I could find were the language’s official docs, which were super technical and near-impossible for a beginner to slog through. Hell, I didn’t even understand what the language did—how could I read the technical specs?

So, I decided to take a step towards remedying this problem. This post won’t actually teach you how to use SPARQL—others do that better than I, and I provide some links at the bottom of the post—but it’s intended to be a primer on how SPARQL works, and what the data you might use it on looks like. (This is a blog-ified version of a Hacker School Thursday Talk presentation given on 2/5/15.)

What is SPARQL?

It’s like SQL, but with extra unicorns. Sparkly Unicorn

No really, what is SPARQL?

Besides a query language with a really ridiculous name?

SPARQL is a (recursive) acronym standing for: SPARQL Protocol and RDF Query Language.

It’s a query language, like SQL, that you use to poke around in your data and find the bits of it that you want. Unlike SQL, which queries tables, SPARQL queries data stored in a different way: a Resource Description Framework (or RDF).

What is RDF?

SQL expects data to be in tables, like this: SQL Table

But SPARQL works with data organized like this: RDF Web

A single row in the SQL table is a collection of bits of information about that one entity (in this case, a person); the web below is another way of visualizing that information. Each bit of information is contained in a subject/predicate/object triple.

Subject/Predicate/Object Triples

SUBJECT | PREDICATE | OBJECT

This convention plays off of English grammar constructs [fn: and probably those of lots of other languages too, but I don’t know enough linguistics to make any sort of comprehensive claim]. In English, we can make a sentence like this:

The human | throws | the ball.

The human is the subject, throws is the predicate (verb-like thing), and the ball is the object. Likewise, we can express any cell from a SQL table in the same way:

Maia | has favorite color equal to | rainbow.

Where Maia (the thing we’re referring to—the row in the SQL table representing an entity) is the subject, has favorite color equal to is the predicate (think of this as the property name, or put another way, the column header), and rainbow is the object (the value of that property for the given entity). In diagram form, it would sort of look like this:

RDF in color #1

Only, this is not quite accurate. Maia is not its own entity; it’s a human-readable identifier (what we mortals call a first name) for some entity stored in your computer. This entity hasFirstName Maia just like it hasFavoriteColor Rainbow. So in reality, the visual representation would look more like this:

RDF in color #2

<aabbcc>—the alphanumeric string we give to our entity to represent it and so we can track all of its associated properties and values—is called a Uniform Resource Identifier, or URI. (Not to be confused with Uniform Resource Locators, or URLs. A URL tells you the location of the entity in question, whereas the URI is the name the computer has given to our entity; think of a URI as a name and a URL as an address.)
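
To make the triple idea concrete, here is a tiny Python sketch using rdflib (my own choice for illustration; apart from foaf:givenName, the property names are made up):

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")               # made-up namespace for the example
g = Graph()
maia = URIRef("urn:uuid:aabbcc")                    # the URI standing in for our entity

g.add((maia, FOAF.givenName, Literal("Maia")))      # <aabbcc> hasFirstName "Maia"
g.add((maia, EX.favoriteColor, Literal("rainbow"))) # <aabbcc> hasFavoriteColor "rainbow"

for subject, predicate, obj in g:                   # every fact is just a triple
    print(subject, predicate, obj)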

What Does a Query Look Like, Anyway?

The first thing to know is that SPARQL objects and properties aren’t invented at random. When you’re using SPARQL, you work with a predefined set of classes (e.g. contact, email address, etc.) and properties (e.g. hasFirstName, dateAdded, etc.), collectively called an ontology. Generally, systems will use a combination of the standard ontologies floating around the web (GNOME Tracker, for instance, uses this collection of ontologies; someone putting together a contacts list might use foaf). I also assume you can make your own, though I’ve never experimented with this. Ontologies are identified by a prefix (and if you’re writing your own queries from scratch, you’ll have to set the prefixes with a link to the ontology on the interwebs)… The point being: in English, you might get confused between “has first name” and “has name” and “is named” and “has given name”… but in SPARQL, there will be only one name for that property (presumably something like foaf:givenName).

Anyway, what does a query look like? It looks something like this:

SELECT ?a ?b ?c
WHERE {
    ...
}
ORDER BY ?a
LIMIT X

Basically, you select some stuff (SELECT ?a ?b ?c as specified by the conditions in your WHERE clause—possibly including some FILTER statements) which you can then do a handful of operations on: ordering by one or more of the values, capping the number of results you want, etc.
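
To make that skeleton concrete, here is one complete (if tiny) query, executed with rdflib against a toy graph (my own illustrative choice; the foaf prefix is a real ontology, the data is invented):

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

g = Graph()
g.add((URIRef("urn:uuid:aabbcc"), FOAF.givenName, Literal("Maia")))
g.add((URIRef("urn:uuid:ddeeff"), FOAF.givenName, Literal("Ada")))

results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person ?name
    WHERE { ?person foaf:givenName ?name . }
    ORDER BY ?name
    LIMIT 10
""")
for row in results:
    print(row.person, row.name)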

But that was (obviously) an extremely sketchy description, and as I warned you, I’m not going to go into any more detail in this post. Others have tackled this material better than I—I learned most of what I knew about SPARQL at the very beginning from Dr. Noureddin Sadawi’s Simple SPARQL Tutorial, in which he plays around with Bob DuCharme’s sample code. Check out their stuff to learn what queries actually look like, and all the cool stuff you can do with them. I hope this has been at least somewhat enlightening; thanks for tuning in!

Security and Privacy Roadmap for Epiphany and WebKitGTK+

I’ve laid out some informal thoughts on where we should be heading with regards to new security and privacy features in Epiphany. It’s in the form of a list of features we really ought to have. (That is, it’s a wishlist.) Most of these features would be implemented in WebKitGTK+, so other applications using WebKitGTK+ would benefit as well.

There’s certainly no shortage of work to be done, so except for a couple items on the list, this is not a list of things you should expect to be implemented soon. Comments welcome on the wiki or on this blog. Volunteers especially welcome! Most of these tasks on the list would make for great GSoC projects (but I’m not accepting more applicants this year: prospective students should find another mentor who’s interested in one of the tasks).

The list will also be used to help assign one or more bounties using some of the money we raised in our 2013 security and privacy campaign.

March 03, 2015

Tue 2015/Mar/03

  • An inlaid GNOME logo, part 3

    Esta parte en español

    (Parts 1, 2)

    The next step is to make a little rice glue for the template. Thoroughly overcook a little rice, with too much water (I think I used something like 1:8 rice:water), and put it in the blender until it is a soft, even goop.

    Rice glue in the blender

    Spread the glue on the wood surfaces. I used a spatula; one can also use a brush.

    Spreading the glue

    I glued the shield onto the dark wood, and the GNOME foot onto the light wood. I put the toes closer to the sole of the foot so that all the pieces would fit. When they are cut, I'll spread the toes again.

    Shield, glued Foot, glued

March 02, 2015

A Hitchhikers Guide to Git: Genesis

Introduction

What is This?

This post tries to provide a gentle introduction to git. While it is aimed at newcomers, it is also meant to give a rather good overview and understanding of what one can do with git, and thus it is extensive. It follows the premise:

If you know how a thing works, you can make it do what you want it to.

Please don’t be afraid as I’m trying to use light language and many examples so you can read fast through it and understand anyway (if I succeed). I’m also trying to highlight important things at least for long texts to ease reading a bit more. However, at some points this tutorial may require you to think for yourselves a bit – it’s a feature, not a bug.

For Whom is This?

This tutorial is meant for everyone willing to learn git – it does not matter if you are a developer who never really got the hang of it or a student wanting to learn. Even if you are doing non-coding tasks like design or documentation, git can be a gift from heaven. This tutorial is for everyone – don’t be scared by its length and believe me, it pays off! The only requirements are that you have git installed, know cd, ls and mkdir, and have something to commit – and who doesn’t have any digital data!?

What’s in There?

This tutorial is the first out of three tutorials which are meant to free your mind from a traditional view of filesystems.

Genesis

This tutorial, Genesis (i.e. “Creation” or “Beginning”), will cover some very basic things:

  • Configuring git so you can use it how you want. (Basic, aliases)
  • Creating your git repository (play god once!)
  • Creating your first git commits.
  • Learning what the repository is and where commits get stored.
  • Browsing through history.
  • Ignoring files.

Exodus

Exodus (i.e. “going out”) will cover:

  • Store temporary changes without committing.
  • Navigating commits in git.
  • Sharing commits to a remote place.
    • Locally (huh, that’s sharing?)
    • Via email
    • To a server
    • To a client
  • Working with others (i.e. non-linear).
    • Join things.
    • Linearize things.
  • Writing good commits.
  • Editing commits. (Actually not editing but it feels like that.)

Apocalypse

Apocalypse (i.e. “uncovering”) will try to uncover some more advanced features of git, finally freeing your mind from your non-versioned filesystem:

  • Finding more information about code.
  • Finding causes of bugs in git.
  • Reverting commits.
  • Reviewing commits.
  • Travelling though time and changing history (you want me to believe you’ve never wanted to do that?)
  • Getting back lost things.
  • Let git do things automatically.

Some Warnings

A short warning: once you have really got the hang of git, you will not be able to use anything else without symptoms of frustration and disappointment – you’ll end up versioning every document you write as an excuse to use git.

A warning for windows users: you may need to use equivalent commands to some basic UNIX utilities or just install them with git. (Installer provides an option for that.) In general it’s a bit like travelling with Vogons – avoid when possible.

A warning for GUI users: don’t use your GUI. Be it the GitHub App or SourceTree or something else – they usually try to make things more abstract for us, thus they hinder us from understanding git, and then we cannot make git do what we want. Being able to communicate directly with git is a great thing and really bumps productivity!

I wrote this tutorial to the best of my knowledge and experience, if you spot an error or find something important is missing, be sure to drop me a message!

Preparation…

So go now, grab a cup of coffee (or any other drink), a towel, take your best keyboard and open a terminal beneath this window!

What’s Git for Anyway?

Before we really get started it is important to know what git roughly does: git is a program that allows you to manage files. To be more specific git allows you to define changes on files. In the end your repository is just a bunch of changes that may be related to each other.

Setting Up Git

Before we can continue we’ll have to set up a few tiny things for git. For this we will use the git config --global command, which simply stores a key-value pair in your user-global git configuration file (usually stored at ~/.gitconfig).

WHOAMI

Let’s tell git who we are! This is pretty straightforward:

$ git config --global user.name "Ford Prefect"
$ git config --global user.email ford@prefect.bg

This makes git store values for “name” and “email” within the “user” section of the gitconfig.

Editor

For some operations git will give you an editor so you can enter needed data. This editor is vim by default. Some people think vim is great (vim is great!), some do not. If you belong to the latter group or don’t know what vim is and how to operate it, let’s change the editor:

$ # Take an editor of your choice instead of nano
$ git config --global core.editor nano 

Please make sure that the command you give to git always starts as its own process and ends only when you have finished editing the file. (Some editors might detect running processes, pass the filename to them and exit immediately. Use the -s argument for gedit, the --wait argument for sublime.) Please don’t use notepad on windows; this program is a perfect example of a text editor which is too dumb to show text unless the text is written by itself.

Create a Repository

So, lets get started – with nothing. Let’s make an empty directory. You can do that from your usual terminal:

$ mkdir git-tutorial
$ cd git-tutorial
$ ls -a
./  ../

So, lets do the first git command here:

$ git init
Initialized empty Git repository in /home/lasse/prog/git-tutorial/.git/
$ ls -a
./  ../  .git/

So now we’ve got the .git folder. Since we just created a repository with git init, we can deduce that this .git directory must in fact be the repository!

Creating a God Commit

So, let’s create some content we can manage with git:

$ echo 'Hello World!' >> README
$ cat README
Hello World!

Since we know that the .git directory is our repository, we also know that we have not added this file to our repository yet. So how do we do that?

As I’ve hinted before, our git repository does not contain files but only changes – so how do we make a change out of our file?

The answer lies in (1) git add and (2) git commit which allow us to (1) specify what files/file changes we want to add to the change and (2) that we want to pack those file changes into a so called commit. Git also offers a helper command so we can see what will be added to our commit: git status.

Let’s try it out:

$ git status
On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

    README

nothing added to commit but untracked files present (use "git add" to track)
$ git add README
$ git status
On branch master

Initial commit

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

    new file:   README

So obviously with git add we can stage files. What does that mean?

As we know, when we’re working in our directory, any actions on files won’t affect our repository. So in order to add a file to the repository, we’ll have to put it into a commit. In order to do that, we need to specify what files/changes should go into our commit, i.e. stage them. When we did git add README, we staged the file README, so every change we have made to it until now will be included in our next commit. (You can also partially stage files, so if you edit README now, that change won’t be committed.)

Now we’ll do something very special in git – creating the first commit! (We’ll pass the -v argument to get a bit more info from git on what we’re doing.)

$ git commit -v

You should now get your editor with contents similar to this:


# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
#
# Initial commit
#
# Changes to be committed:
#   new file:   README
#ref: refs/heads/master

# ------------------------ >8 ------------------------
# Do not touch the line above.
# Everything below will be removed.
diff --git a/README b/README
new file mode 100644
index 0000000..c57eff5
--- /dev/null
+++ b/README
@@ -0,0 +1 @@
+Hello World!

Since we’re about to create a change, git asks us for a description. (Note: git actually allows you to create commits without a description with a special argument. This is not recommended for productive collaborative work!)

Since we passed the -v parameter, git also shows us below what will be included in our change. We’ll look at this later.

Commit messages are usually written in imperative present tense and should follow certain guidelines. We’ll come to this later.

So, let’s enter: Add README as our commit message, save and exit the editor.

Now, let’s take a look at what we’ve created, git show is the command that shows us the most recent commit:

$ git show
commit ec6c903a0a18960cd73df18897e56738c4c6bb51
Author: Lasse Schuirmann <lasse.schuirmann@gmail.com>
Date:   Fri Feb 27 14:12:01 2015 +0100

    Add README

diff --git a/README b/README
new file mode 100644
index 0000000..980a0d5
--- /dev/null
+++ b/README
@@ -0,0 +1 @@
+Hello World!

So what do we see here:

  • It seems that commits have an ID, in this case ec6c903a0a18960cd73df18897e56738c4c6bb51.
  • Commits also have an author and a creation date.
  • Of course they hold the message we wrote and changes to some files.

What we see below the diff ... line is obviously the change. Let’s take a look at it: since git can only describe changes, it takes /dev/null (which is a bit special, kind of an empty file, not important here), renames it to README and fills it with our contents.

So, this commit is pretty godish: it exists purely on its own, has no relations to any other commit (yet; it’s based on an empty repository, right?) and creates a file out of nothing (/dev/null is somehow all and nothing, kind of a unix black hole).

Inspecting What Happened

So, let’s look in our repository!

$ ls -la .git
total 52
drwxrwxr-x. 8 lasse lasse 4096 Feb 27 16:05 ./
drwxrwxr-x. 3 lasse lasse 4096 Feb 27 14:11 ../
drwxrwxr-x. 2 lasse lasse 4096 Feb 27 14:11 branches/
-rw-rw-r--. 1 lasse lasse  486 Feb 27 14:12 COMMIT_EDITMSG
-rwxrw-r--. 1 lasse lasse   92 Feb 27 14:11 config*
-rw-rw-r--. 1 lasse lasse   73 Feb 27 14:11 description
-rw-rw-r--. 1 lasse lasse   23 Feb 27 14:11 HEAD
drwxrwxr-x. 2 lasse lasse 4096 Feb 27 14:11 hooks/
-rw-rw-r--. 1 lasse lasse  104 Feb 27 14:11 index
drwxrwxr-x. 2 lasse lasse 4096 Feb 27 14:11 info/
drwxrwxr-x. 3 lasse lasse 4096 Feb 27 14:12 logs/
drwxrwxr-x. 7 lasse lasse 4096 Feb 27 14:12 objects/
drwxrwxr-x. 4 lasse lasse 4096 Feb 27 14:11 refs/
$

Now let’s look into it further to get to know what it is a bit more. I will try to cover only important parts here, if you’re interested even deeper, you can try DuckDuckGo or take a look at this: http://git-scm.com/docs/gitrepository-layout

The config file

The config file is similar to the one where our settings from the beginning got stored. (User and editor configuration, remember?) You can use it to store settings per repository.

The objects directory

The objects directory is an important one: It contains our commits.

One could do a full tutorial on those things but that’s not covered here. If you want that, check out: http://git-scm.com/book/en/v2/Git-Internals-Git-Objects

We just saw the ID of the commit we made: ec6c903a0a18960cd73df18897e56738c4c6bb51

Now let’s see if we find it in the objects directory:

$ ls .git/objects 
98/  b4/  ec/  info/  pack/
$ ls .git/objects/ec
6c903a0a18960cd73df18897e56738c4c6bb51

So, when we create a commit, the contents (including metadata) are hashed and git stores them neatly in the objects directory: the first two characters of the hash become a subdirectory name, and the remaining characters the file name.

That isn’t so complicated at all, is it?
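
If you want to convince yourself that these IDs really are just hashes, here is a small Python sketch (my own illustration, not part of the original tutorial) of how git computes a blob ID; the printed hash is exactly the name under which git files the blob away in .git/objects:

import hashlib

content = b"Hello World!\n"                       # what we echoed into README
store = b"blob %d\x00" % len(content) + content   # git prefixes "blob <size>" and a NUL byte
print(hashlib.sha1(store).hexdigest())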

Task: Objects

git show accepts a commit ID as an argument. So you could e.g. do git show ec6c903a0a18960cd73df18897e56738c4c6bb51 instead of git show if this hash is the current commit.

Investigate what the other two objects are, which are stored in the objects directory. (Ignore the info and pack subdirectory.)

Do git show again and take a look at the line beginning with “index”. I’m sure you can make sense out of it!

The HEAD File

The HEAD file is here so git knows what the current commit is, i.e. with which objects it has to compare the files in the file system to e.g. generate a diff. Let’s look into it:

$ cat .git/HEAD
ref: refs/heads/master

So it actually just references something else.

So let’s take a look into refs/heads/master – whatever this is:

$ cat .git/refs/heads/master
ec6c903a0a18960cd73df18897e56738c4c6bb51

So this HEAD file refers to this master file which refers to our current commit. We’ll see how that makes sense later.

Creating a Child Commit

Now, let’s go on and create another commit. Let’s add something to our README. You can do that by yourself, I’m sure!

Let’s see what we’ve done:

$ git diff
diff --git a/README b/README
index 980a0d5..c9b319e 100644
--- a/README
+++ b/README
@@ -1 +1,2 @@
 Hello World!
+Don't panic!

Let’s commit it. However, since we’re a bit lazy we don’t want to add the README manually again; the commit command has an argument that allows you to auto-stage all changes to all files that are in our repository. (So if you added another file which is not in the repository yet it won’t be staged!)

git commit -a -v

Well, you know the game. Can you come up with a good message on your own?

$ git show
commit 7b4977cdfb3f304feffa6fc22de1007dd2bebf26
Author: Lasse Schuirmann <lasse.schuirmann@gmail.com>
Date:   Fri Feb 27 16:39:11 2015 +0100

    README: Add usage instructions

diff --git a/README b/README
index 980a0d5..c9b319e 100644
--- a/README
+++ b/README
@@ -1 +1,2 @@
 Hello World!
+Don't panic!

So this commit obviously represents the change from a file named README which contents are stored in object 980a0d5 to a file also named README which contents are stored in object c9b319e.

A Glance At Our History

Let’s see a timeline of what we’ve done:

$ git log
commit 7b4977cdfb3f304feffa6fc22de1007dd2bebf26
Author: Lasse Schuirmann <lasse.schuirmann@gmail.com>
Date:   Fri Feb 27 16:39:11 2015 +0100

    README: Add usage instructions

commit ec6c903a0a18960cd73df18897e56738c4c6bb51
Author: Lasse Schuirmann <lasse.schuirmann@gmail.com>
Date:   Fri Feb 27 14:12:01 2015 +0100

    Add README

That looks fairly easy. However, I cannot resist pointing out that although commits look so nice and linearly arranged here, they are actually nothing more than commit objects floating around in the .git/objects/ directory. So git log just looks where HEAD points to and recursively asks each commit what its parent is (if it has one).

Since every good hitchhiker does know how to travel through time and change events, we’ll learn to do that in the next chapter ;)

Configuring Git Even Better

Better staging

It is worth mentioning that git add also accepts directories as an argument, i.e. git add . recursively adds all files from the current directory.

In order to generally ignore certain patterns of files (e.g. it’s bad practice to commit any generated stuff), one can write a .gitignore file. This file can look as follows:

README~  # Ignore gedit temporary files
*.o  # Ignore compiled object files

The exact pattern is defined here: http://git-scm.com/docs/gitignore

Files matching this pattern will:

  • Not be added with git add unless forced with -f
  • Not be shown in git status as unstaged

It is usually a good idea to commit the .gitignore to the repository so all developers don’t need to care about those files.

Aliases

So, we’ve learned quite some stuff. However, git commands aren’t always as intuitive as they could be. They could be shorter too. So let’s define some aliases for the commands we know. The ones given here are only suggestions; you should choose the aliases in a way that suits you best!

Aliasing Git Itself

If you’re using git much, you might want to add alias g=git to your .bashrc or .zshrc or whatever. (On windows you’re a bit screwed. But what did you expect? Really?)

Aliasing Git Commands

Let’s let git give us our editor since we don’t want to edit just one value:

git config --global --edit

You can add aliases through the [alias] section, here are the aliases I suggest:

[alias]
a = add
c = commit -v
ca = commit -v -a
d = diff
dif = diff # Thats a typo :)
i = init
l = log
st = status
stat = status

Conclusion

So what did we learn?

We did some basic git commands:

  • git config: accessing git configuration
  • git init: creating a repository
  • git status: getting current status of files, staging and so on
  • git add: staging files for the commit
  • git diff: showing the difference between the current commit and what we have on our file system
  • git commit: writing staged changes to a commit
  • git log: browsing history

We also learned how git organizes commits, how it stores files and how we can make git ignore files explicitly.

I hope this helped you understand a bit what git does and what it is. The next tutorial will hopefully cover all the basics. (Some were already hinted at here.)

Online Programming Competitions are Overrated

The title is not merely clickbait, but my current opinion, after attending a programming competition for the first time. This post expresses my opinions on the hiring processes of [some of] the new age companies through programming competitions and algorithms-focused interviews.

I believe that the assessment of a senior/architect-level programmer should be done by finding out how cooperative [s]he is with others in creating interesting products, and by looking at their track record, rather than by assessing how competitive [s]he is in a contest.

Algorithms

In my lone programming competition experience (on hackerrank), the focus of the challenges was on algorithms (discrete math, combinatorics etc.).

Using standard, simple algorithms instead of fancy, non-standard algorithms is a better idea in real life, where the products have to last for a long time, oblivious to changing programmers. Fancy algorithms are usually untested and harder for a maintenance programmer to understand.

Often, it is efficient to use the APIs provided by the standard library or ubiquitously popular libraries (say jquery). Unless you are working in specific areas (say compilers, memory management etc.), an in-depth knowledge of a wide range of algorithms may not be very beneficial (imo) in day-to-day work, as elaborated in the next section.

Runtime Costs

There are various factors that decide the runtime performance, such as: Disk accesses, Caches, Scalable designs, Pluggable architectures, Points of Failures, etc.

Algorithms mostly optimize one aspect: CPU cycles. There are other aspects (say the choice of data structures, databases, frameworks, memory maps, indexes, how much to cache etc.) which have a bigger impact on the overall performance. CPU cycles are comparatively cheap and we can afford to waste them, instead of doing bad I/O or a non-scalable design.

Most of the time, if you choose proper data structures and get your API design correct, you can plug in the most efficient algorithm without affecting the other parts of the system, iff your algorithm really proves to be a bottleneck. A good example is the evolution of filesystems and schedulers in the Linux kernel. Remember that the Intelligent Design school of software development is a myth.

In my decade of experience, I have seen more performance problems due to a poor choice of data structures or unnecessary I/O than due to a poor selection of algorithms. Remember, Ken Thompson said: "When in doubt, use brute force." It is not important to get the right algorithm on the first try. Getting the skeleton right is more important. The individual algorithms can be changed after profiling.

At the same time, this should not be misconstrued as an argument to use bubblesort.

The 10,000 hour rule

Doing well in online programming competitions is mostly the 10,000 hour rule in action. If you spend time in enough competitions and solve enough problems, you will quickly know which algorithm or programming technique (say dynamic programming, greedy) to employ when you see a problem.

Being an expert at online programming competitions does not guarantee that [s]he can be trusted with building or maintaining a large scale system that has to run long and whose code lives for years (say on the scale of filesystems, databases, etc.). In a competition, you solve a small problem at a microscopic level. In a large scale system, the effects of your code are systemic. Remember the fdisk, sqlite and firefox fiascos?!

In addition to programming skills, there are other skills needed, such as build systems, dependency management (unless you are working on the kernel), SCM, versioning, library design, automated testing, continuous integration etc. These skills cannot be assessed in online programming competitions.

Hardware

In my competition, I was asked to solve problems on a machine that is constrained to run a single thread. I do not know if it is a limitation of hackerrank, or if all online competitions enforce this.

If it is the practice in all online programming competitions, then it is a very bad idea. Although I can understand the infrastructure constraints of these sites, with the availability of multi-core machines these days your program is guaranteed to run on multiple cores in practice. You miss out on a slew of evaluation options if the candidate is forced to think in terms of a single-threaded design.

With the arrival of cloud VMs and the Google appengine elasticity, it is acceptable to throw more CPUs or machines at a program on-demand, without incurring high cost. It is okay to make use of a simpler, cleaner algorithm that is more readable and maintenance friendly (than a complex, performant algorithm), if it will scale better on increased number of CPUs or machines. The whole map-reduce model is built around a similar logic.

I don't claim that concurrency/parallelism/the cloud is a panacea for all performance problems, but it is too big a thing to ignore while assessing a programmer. A somewhat detailed explanation is on the Redis creator's blog (which I strongly recommend subscribing to).

AHA Algorithms

I first heard of the concept of AHA algorithms in the excellent book Programming Pearls by Jon Bentley. These are the algorithms that make very complicated, seemingly impossible problems look trivial once you know them. It is impossible for a person to solve such problems within the span of a competition or interview if the candidate is not already aware of the algorithm and/or does not get that AHA moment. Levenshtein distance, bitmap algorithms, etc. fall into this category. It may not be wise to evaluate a person based on such problems.
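For reference, this is roughly what the classic dynamic-programming solution for Levenshtein distance looks like (a standard textbook sketch, not tied to any particular contest problem): obvious once you know it, very hard to invent under a timer.

def levenshtein(a: str, b: str) -> int:
    # Classic DP over two rows: prev[j] holds the edit distance
    # between the current prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

print(levenshtein("kitten", "sitting"))  # -> 3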

Conclusion

Candidizing (is that a word?) long-term FOSS contributors for hiring may be an interesting alternative to hiring via online programming competitions or technical interviews. Both interviews and contests suffer from extrapolation errors once the person starts working on the job, especially on large-scale systems.

I see that a lot of new-age companies are asking for a GitHub profile in resumes, which is good. But I would prefer a more thorough analysis of long-standing projects, and not merely personal pet projects that may not be very large-scale or popular. The fact that not every person works on a FOSS project in their free time is also a deterrent to such an approach.

Online programming competition websites could limit the number of participants in advance and give them an infrastructure that matches real-world development, instead of an input-output comparison engine with a 4-second timeout.

Having said all this, these online programming contests are a nice way to improve one's skills and to think faster. I will be participating in a few more to make myself fitter. There may be other programming challenges which are better and test all aspects of an engineer. I should write about my views after a year or so.

One more thing: Zenefits & Riptide I/O

In other news, my classmates' companies Zenefits and Riptide I/O are hiring. Have a look if you are interested in a job change (or even otherwise). They are at an excellent (imo) stage where they still have a startup engineering culture but enormous funding to work on the next level of products. It should be an exciting opportunity for any curious engineer. Zenefits works on web technologies and delivers its product as SaaS. I would have joined Zenefits if they had an office in Bangalore. Riptide I/O works on IoT and has some high-profile customers.

February 28, 2015

February Books: lots of fiction

Nonfiction, a really painful read

The End of Power. I made it through this book but it was a struggle. The author’s premise is that power is becoming more distributed (I agree) and that because of that nobody will be able to get anything done (I disagree). He thinks that if we don’t have a few powerful countries, the world will continue to see more and more terrorism. I think we need a new way to work that takes into account the distributed nature of power – both at the governmental and the corporate level. The author gives lots of data and examples and defines power in interesting ways. However, if he allowed distributed works, I think I could rewrite the book with 80% fewer words. I don’t think I’m the only one that had trouble with this book. After Mark Zuckerberg picked it as his first book of the year, it sold out. Now, 2 months later, it only has 102 reviews on Amazon, so most of those people must not have finished the book …

Book Group: General Fiction Books

My first book and last book in February were for my book group.

The Girl on the Train was an entertaining thriller. I’m not sure what to tell you about it without giving it away, but it does make you question whether you know the whole true story about anyone you meet. The book might also make you stop drinking. It wasn’t the kind of book to read while drinking a glass of wine, as the main character loses large parts of her memory due to alcoholism.

I really enjoyed The Rosie Project. I don’t know how realistic it was (I’m curious to see what my friends who have more experience with Asperger’s think) but it was an entertaining read about a man who starts a Wife Project, a survey to find the perfect wife. Then he decides to help a woman with the Father Project, a project to find her biological father. During the process they form a friendship and share many misunderstandings and hilarious moments.

Science Fiction and Fantasy, a bit of every type

Inescapable. I almost quit reading on page 2 when I read “using my mirror to refresh my lip-gloss”. There was a lot of description of clothing and looks. And the way one of the main character’s accents was done was kind of annoying. And the way the mystery is revealed is pretty artificial. On the plus side, I think, the author took all those awkward high school relationships and bundled them all up and shoved them into this book. While not my kind of book, I did read the whole thing.

Third Shift – Pact (Part 8 of the Silo Series) by Hugh Howey. If it’s been a while since you read the previous books, I recommend a refresher. The author just continues the story right where it left off with no reminder of who the characters are or what’s going on. If you haven’t read the Wool Silo series, I highly recommend the books. I think they’d be good for people who haven’t read much science fiction too.

Soul Identity. I thought this would be science fiction but it wasn’t really. It’s about an organization that believes everyone has a unique soul that can be identified by their eyes. And after a person’s death the soul comes back in a new person – without any memories. People can leave wealth and belongings to their future soul hosts. The story was good – a bit of a mystery – and I think it’d make a good movie. I found the dialogue to be rather awkward and it was 95% dialogue. I prefer a bit more narrative mixed in.

The Shattergrave Knights proved to be the fantasy book I was looking for. I’d have preferred more character development, but I was in the mood for an easy read set in a fantasy world that resembles the Middle Ages only very slightly, with swords and magic, and this book fit the bill. (It’s also only 99 cents on Amazon.)

Tried but didn’t make it …

The Briar King. It seemed like one of those epics where the author has the story they want to tell and then makes up the people to tell it. The characters were well done but the book was about the epic tale. (And according to Amazon I bought this in 2009. Maybe it’s time to give up?)

Infographic: How Mallard helps cross-stream documentation workflows

Tapped on phone 7 times. Now an Android developer! :D

Nothing has been really tough so far! Just some little tweaks to the build.gradle files that I had to make to get the Sunshine app to run, all of which are listed in the course documentation.

I enabled USB debugging on my Android device and, to check whether my computer detected the device, ran the following command:

$ adb devices

