GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

December 04, 2016

Fedora and GNOME at the Engineering Week of UPIG

I was invited today to present two free software projects, GNOME and Fedora, at UPIG (Universidad Peruana de Integración Global). The university celebrated the “Engineering Week” throughout the whole week. This was the advertisement used to announce the workshop, which lasted two hours. Admission was free, with a certification fee of S/.25.

The computers in the university labs had Windows installed, and during the workshop we were only allowed to install VirtualBox with a local ISO. The computers did not have permission to read DVDs either. So I gave an introduction to projects these students had never heard of before.

I did what I usually do during my workshops: giving prizes to people who answer questions about what I have already explained, taking a photo of the entire group, and playing with Fedora and GNOME balloons. They got engaged and asked me for more links about Linux history as well as about contributing to these projects.

Thank you so much for organizing this event and for helping us spread the Linux word in universities in Lima, Peru. Thanks Lizbeth Lucar and Solanch Casas.

This is not going to be my last presentation in Lima spreading the GNOME and Fedora word. I have already arranged a couple more presentations at universities such as UNTELS and UNMSM. Let’s expect more attendees next time, after final exams 😉


Filed under: FEDORA, GNOME Tagged: fedora, GNOME, Julita Inca, Julita Inca Chiroque, linux event, Perú, Semana de Ingenieria, UPIG

December 03, 2016

Core Apps Hackfest 2016: report

I spent last weekend at the Core Apps Hackfest in Berlin. The agenda was to work on GNOME’s core applications: Documents, Files, Music, Photos, Videos, Usage, etc.; to raise their overall standard and to make them push beyond the limits of the framework. There were 19 of us and among us we covered a wide range of modules and areas of expertise.

I spent most of my time on the plumbing necessary for Documents and Photos to use GtkFlowBox and GtkListBox. The innards of Photos had already been overhauled to reduce its dependency on GtkTreeModel. Going into the hackfest we were sorely lacking a widget that had all the bells and whistles we need — the idiomatic GNOME 3 selection mode, and seamlessly switching between a list and grid view. So, this is where I decided to focus my energy. As a result, we now have a work-in-progress GdMainBox widget in libgd to replace the old GtkIconView/GtkTreeView-based GdMainView.
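For context, GtkFlowBox is model-driven; a minimal C sketch of the pieces such a widget builds on (this is illustrative, not GdMainBox's actual code, and it assumes the model holds GFileInfo items) looks like this:

#include <gtk/gtk.h>

static GtkWidget *
create_item (gpointer item, gpointer user_data)
{
  /* Map a model item to a widget; a real view would build a thumbnail. */
  return gtk_label_new (g_file_info_get_display_name (G_FILE_INFO (item)));
}

static GtkWidget *
create_view (GListModel *model)
{
  GtkWidget *flow_box = gtk_flow_box_new ();

  /* Multiple selection is what the GNOME 3 selection mode is built on. */
  gtk_flow_box_set_selection_mode (GTK_FLOW_BOX (flow_box),
                                   GTK_SELECTION_MULTIPLE);

  /* One widget is created per model item, which is also why performance
   * comes up whenever these widgets are used with large models. */
  gtk_flow_box_bind_model (GTK_FLOW_BOX (flow_box), model,
                           create_item, NULL, NULL);

  return flow_box;
}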

gnome-photos-flowbox

In fact, GtkListBox and GtkFlowBox were a recurring theme at the hackfest. Carlos Soriano and Georges were working on using them in Files, and whenever anybody uses them in a non-trivial manner there is the inevitable discussion about performance. Good thing that Benjamin was around. He spent the better part of a tram ride and more than an hour at the whiteboard, sketching out a strategy to make GtkListBox more efficient than it is today.

Like last year, Øyvind joined us. We talked about GEGL, and I finally saw the new GIMP in action. I rarely use GIMP, and I am not sure I have ever built it from source, but I have been reading its sources on a semi-regular basis for almost a year now. It was good to finally address this aberration. Øyvind had with him a cheap hand-held DLNA media renderer that was getting stuck when trying to render more than one image with dleyna-render and Photos. Zeeshan helped me poke at it, but unfortunately we didn’t get anywhere.

Other than that, Petr Stetka impressed everyone with his progress on the new Usage application. Georges refreshed his patches to implement the new Online Accounts Settings panel, Carlos Garnacho helped me with GtkGesture, and I reviewed various patches and branches that had been on my list for a while.

Many thanks to Red Hat for sponsoring me; to Carlos Soriano and Joaquim for organizing the event; to Kinvolk for being such gracious hosts; and to Collabora for the nice dinner.


December 02, 2016

What does configure actually run

One of the claimed advantages of Autotools is that it depends (paraphrasing here) "only on make + sh". Some people add sed to the list as well. This is a nice talking point, but is it actually true? Let's find out.

Determining this is simple, we just run the GNU Hello program's configure script under strace like this:

strace -e execve -f ./configure 2> stderr.txt > stdout.txt

This writes all command invocations of the process and its children into stderr.txt. Then we can massage it slightly with Python and get the following list of commands.
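The post does not include the massaging script, but a minimal sketch of one could look like this (my reconstruction; it assumes strace's usual execve output format and skips failed invocations):

import re

# strace prints lines such as:
#   [pid 1234] execve("/usr/bin/gawk", ["gawk", ...], [/* 60 vars */]) = 0
pattern = re.compile(r'execve\("([^"]+)"')

seen = set()
with open('stderr.txt') as f:
    for line in f:
        m = pattern.search(line)
        if m is None or '= -1' in line:  # skip non-execve lines and failed lookups
            continue
        seen.add(m.group(1).rsplit('/', 1)[-1])  # keep only the basename

for name in sorted(seen):
    print(name)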

arch
as
basename
bash
cat
cc
cc1
chmod
collect2
conftest
cp
diff
dirname
echo
expr
file
gawk
gcc
getsysinfo
grep
hostinfo
hostname
install
ld
ln
ls
machine
make
mkdir
mktemp
msgfmt
msgmerge
mv
oslevel
print
rm
rmdir
sed
sh
sleep
sort
tr
true
uname
uniq
universe
wc
which
xgettext

This list contains a total of 49 different commands including heavyweights such as diff and gawk.

Thus we find that the answer to the question is no. Configure requires a lot more than just shell plus make. In fact, it implicitly requires a big chunk of the Unix userland.

Pedantic postscriptum

It could be said that not all of these programs are strictly required, and that configure could (potentially) work without them present. This is probably correct, but many of these programs provide functionality which is essential and not provided by either plain shell or Make.


GNOME Core Apps Hackfest 2016

The GNOME Core Apps Hackfest was held in Berlin this November, from Friday the 25th to Sunday the 27th.

My focus during this hackfest was to start implementing a widget for the series view of the Videos application, following a mockup by Allan Day.

To make this more interesting, I implemented this view using Emeus, Emmanuele Bassi's new in-development constraint-based layout system for GTK+. You can find the (clearly unfinished) result here: https://github.com/Kekun/totem-series. I will keep working on it with Victor Toso, who wrote an initial prototype last year.

Working at the hackfest was a great experience; interacting face to face with the other contributors helps a lot in strengthening GNOME as a community. 😃

Thanks a lot to Kinvolk and Collabora for helping to make this hackfest a great event!

Ubuntu still isn't free software

Mark Shuttleworth just blogged about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.

The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu.

This remains incompatible with the principles of free software. The freedom to take someone else's work and redistribute it is a vital part of the four freedoms. It's legitimate for Canonical to insist that you not pass it off as their work when doing so, but their IP policy continues to insist that you remove all references to Canonical's trademarks even if their use would not infringe trademark law.

If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.

Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.

[1] And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise


November Bug Squash Month: GJS

Here’s what happened during November Bug Squash Month in GJS.

First off, I didn’t really get on the ball to promote Bug Squash Month and I didn’t take pictures of any bug squashing activity… which I regret. I hope this post can make up for some of that.

During November I finally took the leap and offered to become a maintainer of GJS. My employer Endless has been sponsoring work on bugs 742249 and 751252, porting GJS's JavaScript engine from SpiderMonkey 24 to SpiderMonkey 31. But aside from that, I had been getting interested in contributing more to it, and outside of work I did a bunch of maintenance work modernizing the Autotools scripts and getting it to compile without warnings. From there it was a small step to officially volunteering.

With not much of November remaining and a holiday and family visit coming up (life is always more important than bug squashing!), I decided to start my bug-squashing campaign with what would get me the most results for the time spent: going through GJS's bug tracker and closing obsolete or invalid bugs. This I managed to do, closing about 1/4 of all open bugs!

Then I made a list of all open bugs with attached patches and intended to review them to see if they still applied and why they hadn't been committed yet. I got through a few, and had the dubious distinction of fixing up and committing patches from a 7-year-old bug yesterday. But as you can see in the list, there are still 54 remaining. A good to-do list for the next Bug Squash Month, or for whenever I feel like working on GJS but don't know what to work on!

Did you know Bugzilla could generate graphs? I didn’t! Here’s a graph of the total bug count in GJS during November Bug Squash Month:

chart

The clunkiness of this chart kills me though…

My plans now that Bug Squash Month is over are to concentrate on fixing things that make it more pleasant to use and contribute to GJS:

  • Find an active co-maintainer so that we can review each other’s patches (could this be you?)
  • Make ES6 Promises available (this work is also being sponsored by Endless; see the sketch after this list)
  • Rework the test suite to use an embedded copy of Jasmine so that writing automated tests becomes less of a pain
  • Find ways to bring in some of the conveniences that Node developers are used to
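To illustrate what ES6 Promises buy us, here is a sketch of wrapping a GIO asynchronous call once the engine exposes the standard Promise API (my example, not code from GJS itself):

const Gio = imports.gi.Gio;

// Wrap an asynchronous GIO call in an ES6 Promise.
function loadContents(file) {
    return new Promise((resolve, reject) => {
        file.load_contents_async(null, (obj, res) => {
            try {
                let [ok, contents, etag] = obj.load_contents_finish(res);
                resolve(contents);
            } catch (e) {
                reject(e);
            }
        });
    });
}

loadContents(Gio.File.new_for_path('/etc/hostname'))
    .then(contents => print('read ' + contents.length + ' bytes'))
    .catch(e => logError(e));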

I’ve also decided to try an experiment: I’ve just made the Trello board public on which I keep track of what I’m working on and what I’d like to work on. Let me know if this is interesting to you and what features you might like to see on there! (It’s made possible by a Chrome extension, Bug 2 Trello.)

All in all November Bug Squash Month was a success, though next time I will get started earlier in the month. Come join me next time!

gnome_bugsquash_nov_2016

 


Would you write a 911 location app?

John Oliver talked in his show’s most recent episode about the US emergency services phone number, 911. It seems that now nobody uses land lines anymore, sometimes the emergency services have a hard time locating people from their cell phones.

John Oliver: “And if you’re thinking, ‘wait a minute, I can find my location on my cell phone,’ you’re not alone. Dispatchers wonder the same thing.”

Dispatcher: “I can check in on Facebook and it’ll tell you exactly what building I’m in. […] But when you call 911 we don’t get that accurate location information. The technology’s out there, it’s just not getting to us at this point.”

JO: “That’s a good point, because even the Domino’s app can tell where you are, and they’ve barely mastered the technology to make a palatable pizza! So we asked […] why it seems Ubers can find you better than ambulances can, and there doesn’t seem to be a simple satisfying answer.”

Here is my best guess at that answer, as a software engineer. Our industry has a pervasive culture of rush-jobs that get 90% of the way there and then save the rest for version 2; move fast and break things, yada yada. No emergency services provider would adopt it because it would not be reliable.

It’s reasonable to think that 90% would be better than what 911 apparently has now, which according to the video is sometimes only accurate to the nearest cell tower. However, the litigious nature of US society makes that impossible. The first time the software failed, the maker would get sued out of business.

Thus we are stuck, because we teach ourselves not to go the extra mile; and even if we did, no one could afford to take responsibility for making things better.


December 01, 2016

gtkmm 4 started

We (the gtkmm developers) have started work on an ABI-breaking gtkmm-4.0, as well as an ABI-breaking glibmm, targeting GTK+ 4 and letting us clean up some cruft that has gathered over the years. These install in parallel with the existing gtkmm-3.0 and glibmm-2.4 APIs/ABIs.

A couple of days ago I released the first versions of glibmm (2.51.1) and gtkmm (3.89.1), as well as accompanying pangomm and atkmm releases.

This also lets us use my rewrite of libsigc++ for libsigc++-3.0, bringing us more fully into the world of “modern C++”. We might use this opportunity to make other fundamental changes, so now is the time to make suggestions.
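As a taste of what the libsigc++-3.0 rewrite means in practice, signal types now use function-style template syntax, made possible by variadic templates. A minimal sketch (my example, assuming the libsigc++-3.0 API):

#include <sigc++/sigc++.h>
#include <iostream>

int main()
{
  // libsigc++-3.0 spells this sigc::signal<void(int)> rather than the
  // old libsigc++-2.0 form sigc::signal<void, int>.
  sigc::signal<void(int)> signal_value_changed;

  signal_value_changed.connect(
    [](int value) { std::cout << "value is now " << value << '\n'; });

  signal_value_changed.emit(42);
  return 0;
}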

We did a parallel-installing ABI-breaking glibmm even though glib isn’t doing that. That’s because, now that GTK+ is forcing us to do it for gtkmm, this seems like as good a time as any to do it for glibmm too. It’s generally harder to maintain C++ ABIs than C ABIs, largely because C++ is just more complicated and its types are more exactly specified. For instance we’d never have been able to switch to libsigc++-3.0 without breaking ABI.

 

November 30, 2016

Contribute to Polari with this one simple trick!

I’ve been rather quiet recently, working on new features for Builder. But we just managed to release Builder 3.22.3, which is full of bug fixes and an important new feature: you can now meaningfully target Flatpak when building your application. Matthew Leeds has done this outstanding work, and it is really going to simplify how you contribute to GNOME applications going forward.

I’m really happy with the quality of this feature because it has shown me where our LibIDE design has done well, and where it has not. Of course, we will address that for 3.24 to help make some of the UI less confusing.

Without further ado, here is how to clone, build, run, and hack on Polari without so much as a toolchain installed on your host system. The only prerequisite is to get GNOME Builder 3.22.3 from the GNOME Flatpak repository (or from your distribution, if it is up to date).

Edit: Your system might require the installation of flatpak-builder if it is in a separate package (such as on Fedora 25).

# Get things ready on Fedora 25
sudo dnf install flatpak-builder

# Download GNOME's nightly SDK for development.
# We'll automate this for GNOME 3.24.
flatpak --user remote-add gnome-nightly \
https://sdk.gnome.org/gnome-nightly.flatpakrepo
flatpak --user install gnome-nightly org.gnome.Sdk master
flatpak --user install gnome-nightly org.gnome.Platform master

  1. Open Builder and select Clone from the buttons in the header bar.
  2. Set the URL for Polari to git://git.gnome.org/polari (optionally use user@git.gnome.org:/git/polari.git if you have commit access to GNOME).
  3. Click on the blue Clone button and wait a few moments while we download the repository.
  4. You’ll be presented with the Workbench. Click on “Build Preferences” in the perspective selector.
  5. Now select “org.gnome.Polari.json” as the build runtime. (We expect this to be treated as a build configuration in 3.24 instead of a runtime.)
  6. Click the Build button in the header bar or in the build popover.
  7. On the first build, flatpak-builder is used to build all the necessary dependencies for Polari (such as telepathy). After it has built, click on the “Run” button in the header bar.
  8. Hey! Look at that, a Polari window connected to freenode!
  9. To make sure I’m not fooling you, let’s add a printf() to polari.c (the application entry point). Save, then click “Run” again.
  10. How about that, we can see the output in the “Run Output” panel and Polari still seems to work. Yay!

Core Apps Hackfest afterthoughts

During last weekend, I was very happy to attend the Core Apps Hackfest in Berlin. This is effectively the first hackfest I’ve ever been to! Thanks Carlos for organizing it, thanks to the Kinvolk folks for hosting the event, and to Collabora for sponsoring the dinner.

This event was a great chance to meet the maintainers in person and talk directly to the designers about doubts we have. Since Carlos already wrote down the list of tasks we worked on, I’m not going to repeat it. So here, I’ll report what I was able to work on.

App-specific discussion

GNOME Music

Together with Marinus and Felipe, we were able to give GNOME Music some thought and decide what to do next. Music, unfortunately, was not written to scale, and this is basically the worst blocker we have to deal with. Pieces of code are not contained, and working on something means breaking something else completely unrelated.

So we sat down, analyzed the code, and came up with a solution. Music needs to isolate the backend from the UI, so we can write extensions for music providers in such a way that the UI doesn’t have to be patched too much. We decided to work on this before trying to add any new features, because otherwise GNOME Music will become unmaintainable.

For the next release, do expect to see lots of action in Music’s codebase, but few features. Most of the efforts will be focused on refactoring and cleaning the code (even if that means more lines of code in the end).

GNOME Calendar

I’ve been able to work on top of Vamsi’s previous prototype of the week view a little bit, and it’s starting to take shape. Basically, getting a working week view is hard. I’m sure we’ll be able to have something working by the end of December, but I expect some pain and headaches before getting into that state.

Andreas also did a quick live testing of GNOME Calendar, and I could see a few bugs to be fixed.

Nautilus

What a great chance to talk about our beloved shellfish! With Carlos and Alex there, some issues were figured out and hammered out.

We also discussed the plans for porting Nautilus to GtkListBox and GtkFlowBox. The biggest problem with those widgets is scalability: they don’t perform well on big amounts of data. Put 10k rows inside a listbox and it slows down to death. Same for flowbox.

Following up on the discussion, the flowbox-based view prototype that Carlos did some time ago was rebased against master. I also worked on rebasing the actionbar prototypes against master. From now on, I’ll focus on trying to get the actionbars into the main codebase.

Generic discussions

As Allan detailed in his report, one of the big app-agnostic concepts discussed in this event was the content apps opening and managing files.

I somehow got involved a lot with GNOME Music development, so we discussed how GNOME Music can handle that. We ultimately realized that it’d be better for everyone if we do it The Right Way © and land the code refactorings and backend work before even attempting to do that. This will avoid having to implement the feature twice (before and after the refactoring) and frustrating people with eventual breakages.

This won’t affect GNOME Calendar nor GNOME To Do at the moment, so this discussion wasn’t my focus.

Some Highlights

There’s a bunch of things that happened during the hackfest that I’d like to highlight:

  1. The always-awesome Carlos Garnacho fixed some locking issues in Tracker that gave another performance boost to GNOME Music. Now I can really use Music as my everyday player. It takes only 2 or 3 seconds to load my 10,000-song library.
  2. The new Online Accounts panel patches were reviewed, and started to land on the main codebase. Thanks Rishi!
  3. The GNOME hackers are absolutely awesome. They’re great people, really. I’m proud and honored to be part of this team with such incredible people. I feel like a small kid, both in technical and human terms, when with them.

I’m very grateful to the GNOME Foundation for sponsoring me; I wouldn’t have been able to attend without the sponsorship. I also thank Endless for allowing me to attend the event. It was very productive, and I have a strong feeling that it really pays back to the community.

sponsored-badge-shadow

Core Apps Hackfest

Last weekend I attended the Core Apps hackfest in Berlin. This was a reboot of the Content Apps hackfest we held last year around the same time of year, with a slightly broader focus. One motivation behind these events was to try and make sure that GNOME has a UX focused event in Europe at the beginning of the Autumn/Spring development cycle, since this is a really good time to come together and plan what we want to work on for the next GNOME version.

The format of the hackfest worked extremely well this year (something that everyone who attended seemed to agree on). The scope of the event felt just right – it allowed a good level of participation, with slightly over 20 attendees, a mixed agenda, and also opportunities for collaboration and cross-cutting discussion.

We had three designers in attendance, in the form of Jakub, Andreas and myself. This felt extremely helpful, as we were able to both work on tricky design tasks together as well as split up in order to support groups of hackers in parallel. We worked non-stop over the course of the event and pumped out a serious amount of design work.

In all the event felt extremely productive and is something I would love to repeat in the future.

dscf2518 core-apps-hackfest dscf2483

What we worked on

One of the main design areas at the hackfest was how to give the content applications (Documents, Music, Photos and Videos) the ability to open items in the Files application. This is something that has been on the cards for a while and for which we already had designs. However, reviewing the plan we had, we realised that it needed improvement, and we spent a good chunk of the hackfest evaluating different options before settling on an improved design.

open-files-with-content-apps

View the complete wireframes

Software was another focus for the hackfest, as we had several Software hackers in attendance. On the design side, we worked on how to better integrate shell extensions. We also worked on improving the UI for browsing categories of applications and add-ons. The result was some fairly detailed mockups for changes we hope to make this cycle.

software-category-page

View the full wireframes

Towards the end of the event, Cosimo, Andreas, Jakub, Joaquim and I discussed plans for “content selection”. This is the idea of being able to open content items that are provided by sandboxed applications, in a similar manner to the existing file chooser dialog, but without actually interacting with the file system. This way, applications will be able to make content available that isn’t stored in a shared home directory, or that might not even be stored locally on the device. It’s a tricky design problem that will require more work, but we made a good start on establishing the parameters of the design.

In addition to these main areas, there were lots of other smaller design tasks and discussions. We worked a bit on Music, talked a bit about Usage and even had a bit of time for GTK+ Inspector.

Acknowledgements

These events only happen due to the work of community members and support from GNOME’s partners and donors. Special thanks has to go to Carlos Soriano for organising the event. Also a big thank you to Kinvolk for hosting and providing snacks and to Chris Kühl and Joaquim Rocha for acting as our local guides. Finally, thank you to Collabora for helping to feed and water everyone!

Kinvolk Logo

Collabora Logo

Installing flatpaks gets easier in Fedora 25

A lot of users complained that installing flatpaks was too difficult. And they were right, just look at the installation instructions on the Flatpak download page at LibreOffice.org. But that was never meant to be the final user experience.

flatpak-logo

Richard Hughes integrated Flatpak support into GNOME Software and the Red Hat desktop apps team worked with him to make sure it works well with apps we’ve already packaged for Flatpak. And this is the result. As you can see installing LibreOffice for Flatpak is now a matter of a couple of clicks with GNOME Software 3.22.2 in Fedora 25:

 

Flatpak allows you to generate a .flatpak bundle which includes the app and all the necessary info for installation of the app and setting up its repo for future updates. You can also create a .flatpakref file which doesn’t contain the app, but all the installation info and the app is downloaded during the installation. This format is also supported by GNOME Software now. LibreOffice offers a .flatpak bundle because it’s more similar to what users are used to from Windows and macOS.

As you can see in the video, installing .flatpak bundles is a matter of downloading the file and opening it directly with GNOME Software, or double-clicking it. There is one prerequisite, though: you need to have a repo of the runtime the app requires enabled, which I had because I had already been using the GNOME runtime for other apps. Installation of runtimes is being streamlined as well. As a runtime provider, you can ship a .flatpakrepo file which includes the necessary info for setting up the repo, and which is as easy to install as .flatpak and .flatpakref. For Fedora Workstation we’re currently considering enabling repos of the most common runtimes by default, so users would not have to deal with them at all; the required runtimes would get installed automatically with the app.
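For reference, the rough command-line equivalents of what GNOME Software does behind the scenes look like this (the file names are illustrative, and exact flags may differ between Flatpak versions):

# Install a single-file .flatpak bundle (app + repo info for future updates):
flatpak install --bundle LibreOffice.flatpak

# Install from a .flatpakref pointer file (the app itself is downloaded):
flatpak install --from LibreOffice.flatpakref

# Add a runtime repository described by a .flatpakrepo file:
flatpak remote-add gnome https://sdk.gnome.org/gnome.flatpakrepo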


GNOME Core Apps Hackfest 2016

Last weekend I attended the GNOME Core Apps hackfest that I helped organize here in Berlin.

It was the first time I participated in a Core Apps hackfest, and I must say I am really glad about how it all went. I felt like there was a perfect balance of planning, working, and just hanging out together. If you want to know more about the planned items, check out this very complete post by Carlos Soriano.
I focused on GNOME Software, leveraging the chance of having Allan Day and Jakub Steiner (from the GNOME Design team) nearby to get some long-overdue ideas implemented upstream. They were tireless in helping the different projects!

Here are the things I got done and are already integrated upstream:

Installed badge

— The way to show an installed app was by displaying a label “Installed” with a blue background on top of the app tiles, sometimes even covering important parts of them; now we use GNOME Software’s shopping bag icon with a check-mark, and a simpler “Installed” label when appropriate:

Installed icons in the category view

Installed icons in the main view

Installed icons + labels in the search view

Categories grid

— The expander widget that reveals the secondary categories now remains visible when expanded, allowing them to be hidden again;
— Merged the Education and Science categories; they had so many apps in common that it just made sense to make them one:

Categories view

Categories expander visible, showing the recently merged Education & Science category

Display sizes in the installed view

— For some users it is important to quickly see how much space apps are occupying, so they can decide whether to remove them; as indicated by Jakub, this should be a temporary measure while we do not have this information in the Usage app, which is the right place to display it:

GNOME Software Installed view

Apps’ sizes showing under the remove button

Until the next!

In the end I am happy with the outcome, but there were two main features I didn’t have time to implement: the categories redesign and a license agreement dialog.
I must also say that it was very nice to work alongside Kalev, and we both missed Richard, who unfortunately could not make it.
Cosimo also came all the way from SF for the hackfest, which was great because we could tackle some Endless OS related tasks that would have been more complex to discuss over the interwebs.

It was great meeting everyone and I am looking forward to participating in the next one!
Special thanks to Kinvolk for sharing their space, their snacks, and their help in organizing everything. Also to Collabora for sponsoring the nice dinner on Friday.

#LinuXatUNI

This last Saturday, the 26th, the #LinuXatUNI event was celebrated at the National University of Engineering. More than 250 people registered, but only 84 attended. I was surprised by this! It might be the upcoming final exams at universities in Lima, or the early start on a weekend.

Besides this, students and enthusiastic Linux people gathered at the venue before 8:00 a.m. We started by projecting the “Revolution of the Operating System” movie for people who did not bring their laptops.

While people were being registered, we gave all the participants stickers of GNOME, Fedora & LinuXatUNI, and DVDs of F25 as a gift. We also asked them to pose for the camera with a special photo booth frame:

The Fedora 25 install party took place for the people who brought laptops. Some of them had Windows installed, others Ubuntu. For newbies, dual boot was the most requested option, ahead of VirtualBox and Live DVD.

After that, the onsite talks started, with the welcome message and the Fedora Ambassadors in the morning, and Erick Cachay from IBM del Perú in the afternoon.

The sharing cake arrived, and we had cake and coke for everyone!

When people are happy and sweet, we can ask for a group photo. Thanks to all the volunteers who helped us with the registration and with capturing this special day in pictures.

A collective effort of the volunteers from different universities was clearly demonstrated.

Contributors from GNOME and Fedora were online to share their expertise with Peru. Thanks Bastian from Denmark, Fefa from Chile, Fabio from Chile and Nuritzi from the U.S.

Fedora Latam ambassadors were also online, right on time. Thanks to Neville and Eduardo Mayorga from Nicaragua, echevemaster from Colombia and Athos from Brazil, who was a great surprise that day :3

It was an honor to have students from universities in the provinces come to the event.

The “trivia” method was also applied at #LinuXatUNI: we gave prizes to people who answered our questions about Linux, GNOME and Fedora.

It is quite hard to find women in IT, harder still to find Linux women, and even more so Fedora and GNOME women. I hope these women can follow the Linux way :3

People have to work to survive, and spending time on one’s personal life is also crucial, so doing volunteer activities can sometimes be a “special mission”. The following people are more than amazing, because besides their jobs they really helped, with a smile and enthusiasm, to make this a successful event! Thanks Leyla Marcelo & Raul Mucha from UPN, and Martin Vuelta from UNMSM.

Even though I tried to plan carefully (after all these years of experience in organizing events), I failed in some aspects, like sound, video and scheduling. The “Genius Bar” idea ended up taking place “on the floor”, because drinks were not allowed inside the auditorium, nor could we place tables inside, to prevent accidents in case of an earthquake. I had a great DJ, but I did not check the music inside during the test. Also, my laptop is more than 5 years old and maybe the sound input and output were not the best; I should invest in a better one. Lastly, connectors and cables kept us from working onsite, partially or totally.

Please see more pictures here; I hope you enjoy them! 🙂


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: CTIC, CTIC-UNI, fedora, GNOME, Julita Inca, Julita Inca Chiroque, Linux at UNI, LinuXatUNI

November 29, 2016

Whereabouts at the CoreApps Hackfest

For the past three days I have been at the Core Apps Hackfest in Berlin. It’s been nice and cozy! Kinvolk has some nice facilities that we could borrow, and it’s been productive for me even though I missed the first day, as anticipated.

img_20161127_120240

The upcoming Newcomer Guide Revamp

At the hackfest I met with Carlos Soriano. We discussed Carlos’ experience running the Bucharest hackathon with Rares and Razvan, and talked about the issues the students had and the questions they asked. The most general problem is that there is too much text in the newcomer guide. All the information is useful, but we need to prioritize what we present first. The students have only so much energy, and our job as guide writers is to ensure that no energy is wasted.

Since the first revamp, where GNOME Love turned into the Newcomer Initiative, many projects have joined for newcomers to choose between. I’m super excited that so many projects care about getting newcomers, but the list is also getting very long again. We’ll try to address this in the next revamp by introducing highlights and rotating the rest as necessary. Other issues include improving the discoverability of newcomer bugs, making all terminal commands copy/paste-able, making sure newcomers get developer docs installed, and maintaining consistency between the websites of our newcomer apps.

To address these issues I’m experimenting with using less text, using more visuals and gamifying the experience with progress bars. More to come soon.

newcomer-revamp-mockups
work in progress mockup of the newcomer guide.

I made some experiments turning this mockup into reality using Tom’s new custom CSS for the moinmoin wiki. There are still a few things to resolve before we can migrate, but we are getting closer.

LinuxAtUNI

Julita asked me to give a talk about contributing to GNOME Design for her LinuxAtUNI event in Peru. So during the hackfest in Berlin I gave this talk by video conference. I’m super excited for the events there!

15194536_10209175162268671_5969187234075386538_o

15235766_10209175158628580_7153783559538918808_o

Monday

The hackfest officially ended Sunday, but I stayed a day longer with Florian Muellner, working on Polari. Together with Andreas Nilsson I finalized some new iterations of various design ideas I had been playing with, and managed to file a bunch of bugs. Here are some highlights:

room-status-indication
Mockup showing the design for room status indication, error handling for rooms and indicating prolonged waiting for rooms. See bug 775257, bug .

offline-status-rev3
We finalized offline indication, using an infobar in the sidebar. Bug 760833.

mockup-use-server-password
Andreas made a mockup of how we could expose server passwords for custom networks. Bug 775225.

Florian worked on moving our soon-to-land room list in the join dialog over to a GtkTreeView, as we had performance issues with the GtkListBox. We also discussed things like the nickname renaming behavior, how error messages should behave in the connection properties dialog, and future plans.

Again, thanks to the GNOME Foundation for partially sponsoring me. It’s been a great hackfest and I really enjoyed it!

sponsored-badge-simple

Core Apps Hackfest – A success!

The GNOME Core Apps Hackfest just finished, I’m happy to say that it was a success!

Many people from different backgrounds were able to come, either from the community or from companies like Red Hat, Endless, Kinvolk, etc. all of us involved in different parts of the GNOME project.

The first thing we did was write down what points had to be discussed during the 3 days, along with identifying the people involved in those topics. This was the trick to being productive at the hackfest, and all of us agreed it worked.

First day, early in the morning, before everyone joined
Roadmap for the hackfest

Since it’s hard to see what’s written:

  • Tracker
    • System vs apps (containerisation)
    • Performance
    • Removable devices
  • Maps
  • Usage app (one is being created)
  • Control center redesign
    • Network panel
    • Details panel
  • Calendar
    • Week view
    • Recurring events
    • Location checks/suggestions
  • Books
  • Software
    • Categories
    • Offline experience
    • Rollback
    • Shell extensions
    • Performance
    • Flatpaks installation
    • System vs user installs
    • EULA/Licenses
  • Sharing portal
  • Content selection portal
  • Libgd
    • Flowbox view
    • Tagged entry
  • Background apps
  • Files
    • Action bar
    • Bookmarks / XDG folders
    • External devices
  • Opening files with content apps
  • Extras
    • Music
    • Videos (series view)
    • Newcomers initiative, new revamp planning

Quite a lot! From that list we identified the most important items, plus the optional ones that are short enough to allocate some time for. We also identified which items required the most people from different backgrounds to be together, so they could be handled as well as possible.

Some topics were very well covered. GNOME Software is now important for a few companies and distributions, so it took up quite a lot of discussion during the hackfest. Another topic that was discussed extensively was opening files with the content applications, basically using Music, Photos, etc. to open files from Files.

But we also discussed more technical items. Thanks to having GTK+ maintainers at the hackfest, we were able to talk about GTK+ 4, its new OpenGL-based drawing model, the containers API, GtkListBox, GtkFlowBox, and essentially everything we need for our applications and third-party developers in the upcoming months.

Some of us will write blog posts about the specific items in the upcoming days, so keep an eye on planet.gnome.org to see all the discussions we had and the solutions and decisions we came up with.

I want to say a big thank you to Joaquim Rocha for the excellent organisation, and to Kinvolk for hosting us, especially Chris Kühl,

kinvolk_logo

and Collabora for sponsoring an excellent dinner on the first day of the hackfest

collabora-logo-1

We had a great one!

Relaxing a bit at the Collabora-sponsored dinner

Thanks also to Red Hat and Endless for sending quite a few employees, and, last but not least, to the GNOME Foundation for sponsoring the community members, who were essential in our discussions.

Hope you had fun!


libinput now requires axis resolutions for graphics tablets

I pushed the patch to require resolution today; expect this to hit the general public with libinput 1.6. If your graphics tablet does not provide axis resolution, we will need to add a hwdb entry. Please file a bug in systemd and CC me on it (@whot).

How do you know if your device has resolution? Run sudo evemu-describe against the device node and look for the ABS_X/ABS_Y entries:


# Event code 0 (ABS_X)
# Value 2550
# Min 0
# Max 3968
# Fuzz 0
# Flat 0
# Resolution 13
# Event code 1 (ABS_Y)
# Value 1323
# Min 0
# Max 2240
# Fuzz 0
# Flat 0
# Resolution 13
If the Resolution value is 0, you'll need a hwdb entry or your tablet will stop working in libinput 1.6. You can file the bug now and we can get it fixed; that way it'll be in place once 1.6 comes out.
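For the curious, such an entry in systemd's 60-evdev.hwdb looks roughly like this (the match string and values below are made up; the real ones come from your device's USB IDs and the evemu-describe output):

# Some Vendor Tablet
evdev:input:b0003v056Ap0027*
 EVDEV_ABS_00=::13
 EVDEV_ABS_01=::13

The property format is EVDEV_ABS_<code>=<min>:<max>:<resolution>:<fuzz>:<flat>, with empty fields left untouched. After editing, run sudo systemd-hwdb update and replug the device for the new values to apply.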

git commit --amend --date

I accidentally committed something with git with the wrong timezone set in my VM. How to fix it? The help message for git commit says:

--date <date>         override date for commit

OK that’s great, but I don’t really want to figure out the input format for feeding in a date. I blindly tried:

git commit --amend --date now

And blindly typing “now” actually worked as intended. git is smart. Thanks git developers!
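If you ever do need an explicit date, git accepts several common formats; a few examples (to the best of my knowledge; note that --date only sets the author date, while the committer date comes from the environment):

# ISO 8601
git commit --amend --date "2016-11-28T10:30:00+01:00"

# RFC 2822
git commit --amend --date "Mon, 28 Nov 2016 10:30:00 +0100"

# Relative dates work too
git commit --amend --date "2 hours ago"

# To also fix the committer date:
GIT_COMMITTER_DATE="2016-11-28T10:30:00+01:00" git commit --amend --no-edit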

November 28, 2016

Rustifying IronFunctions

As mentioned in my previous blog post, there is a new open-source, Lambda-compatible, on-premise, language-agnostic, serverless compute service called IronFunctions.

While IronFunctions is written in Go, Rust is still a much-admired language, and it was decided to add support for it in the fn tool.

So now you can use the fn tool to create and publish functions written in Rust.

Using Rust with functions

The easiest way to create an iron function in Rust is via cargo and fn.

Prerequisites

First create an empty rust project as follows:

$ cargo init --name func --bin

Make sure the project name is func and the project type is bin. Now just edit your code; a good starting point is the following "Hello" example:

use std::io;  
use std::io::Read;

fn main() {  
    let mut buffer = String::new();
    let stdin = io::stdin();
    if stdin.lock().read_to_string(&mut buffer).is_ok() {
        println!("Hello {}", buffer.trim());
    }
}

You can find this example code in the repo.

Once done you can create an iron function.

Creating a function

$ fn init --runtime=rust <username>/<funcname>

in my case it's fn init --runtime=rust seiflotfy/rustyfunc, which will create the func.yaml file required by functions.

Building the function

$ fn build

This will create a Docker image <username>/<funcname> (again, in my case seiflotfy/rustyfunc).

Testing

You can run this locally without pushing it to functions yet by running:

$ echo Jon Snow | fn run
Hello Jon Snow  

Publishing

In the directory of your rust code do the following:

$ fn publish -v -f -d ./

This will publish your code to your functions service.

Running it

Now to call it on the functions service:

$ echo Jon Snow | fn call seiflotfy rustyfunc 

which is the equivalent of:

$ curl -X POST -d 'Jon Snow' http://localhost:8080/r/seiflotfy/rustyfunc

Next

In the next post I will be writing a more computationally intensive Rust function to test/benchmark IronFunctions, so stay tuned :D

summing up 81

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

Computers for Cynics, by Ted Nelson

The computer world deals with imaginary, arbitrary, made up stuff that was all made up by somebody. Everything you see was designed and put there by someone. But so often we have to deal with junk and not knowing whom to blame, we blame technology.

Everyone takes the structure of the computer world as god-given. In a field reputedly so innovative and new, the computer world is really a dumbed-down imitation of the past, based on ancient traditions and modern oversimplification that people mistake for the computer itself.

it is quite easy to get the idea that the current state of the computer world is the climax of our great progress. and it's really not. ted nelson, one of the founding fathers of personal computing and the man who invented hypertext, presents his cynical, amusing and remarkably astute overview of the history of the personal computer - after all he's been there since the beginnings. it is especially interesting in contrast with our current view on computers, information and user experience.

Deep-Fried Data, by Maciej Cegłowski

A lot of the language around data is extractive. We talk about data processing, data mining, or crunching data. It’s kind of a rocky ore that we smash with heavy machinery to get the good stuff out.

In cultivating communities, I prefer gardening metaphors. You need the right conditions, a propitious climate, fertile soil, and a sprinkling of bullshit. But you also need patience, weeding, and tending. And while you're free to plant seeds, what you wind up with might not be what you expected.

This should make perfect sense. Human cultures are diverse. It's normal that there should be different kinds of food, music, dance, and we enjoy these differences. But online, our horizons narrow. We expect domain experts and programmers to be able to meet everyone's needs, sight unseen. We think it's normal to build a social network for seven billion people.

we hear a lot about artificial intelligence, big data or deep learning these days. they all refer to the same generic approach of training a computer with lots of data so that it learns to recognize structure. these techniques are effective, no doubt, but what we often overlook is that you only get out what you put into it.

Programming and Scaling, by Alan Kay

Leonardo could not invent a single engine for any of his vehicles. Maybe the smartest person of his time, but he was born in the wrong time. His IQ could not transcend his time. Henry Ford was nowhere near Leonardo, but he happened to be born in the right century, a century in which people had already done a lot of work in making mechanical things.

Knowledge, in many many cases, trumps IQ. Why? This is because there are certain special people who invent new ways of looking at things. Henry Ford was powerful because Isaac Newton changed the way Europe thought about things. One of the wonderful things about the way knowledge works is if you can get a supreme genius to invent calculus, those of us with more normal IQs can learn it. So we're not shut out from what the genius does. We just can't invent calculus by ourselves, but once one of these guys turns things around, the knowledge of the era changes completely.

we often ignore the context we create a digital product in. however the context defines the space of possible solutions. and not only that, it also defines the borders of our world. what is so interesting about this thought is that you don't need a massive brain, but you need to be able to see and connect ideas in order to advance humanity.

2016-11-28 Monday.

  • Up lateish, practices with babes; mail chew. Team calls variously, chat with Georg. Reviewed some online QA pieces.

Linux communities, we need your help!

There are a lot of Linux communities all over the globe filled with really nice people who just want to help others. Typically these people either can’t code, or don’t feel comfortable doing so, and I’d love to harness some of that potential by adding a huge number of new application reviews to the ODRS. At the moment we have about 1100 reviews, mostly covering the more popular applications, and also mostly written in English.

What I would love is for a few groups of people to come together for their next LUG/outreach/InstallFest and sit down together somewhere cozy and write a few reviews. Bonus points if you use a less-well-known application, and even more points if you can write in a language other than English. Submitting a review is easy; just open up GNOME Software, find the application, and click ‘Write a Review‘ at the bottom of the page.

Application reviews help new users decide what to install, and the star ratings you give mean we can return useful search results full of great applications. Please write an email, ask about helping the ODRS, and perhaps you can help a lot of new users the next time you meet with your Linuxy friends.

Thanks!

This week in GTK+ – 26

In this last week, the master branch of GTK+ has seen 40 commits, with 1551 lines added and 1998 lines removed.

Planning and status
  • Matthias Clasen released the first GTK+ 3.89 development snapshot
  • The GTK+ road map is available on the wiki.
Notable changes

On the master branch:

  • Andrew Chadwick landed a series of fixes for graphic tablets support on Windows
  • Benjamin Otte removed the gtk_cairo_should_draw_window() utility function; the function was introduced for compatibility in the 3.x API, but now it’s not necessary any more
  • Benjamin also removed gdk_window_process_updates() and gdk_window_process_all_updates(); GDK has long since been switched to a frame clock; additionally, only top level GdkWindow can be used as a rendering surface
  • Lapo Calamandrei updated the High Contrast and Adwaita theme with the recent round of CSS improvements for progress bars and gradients
Bugs fixed
  • 774114 Window shadows are repainted even if only the contents of the window change
  • 774695 GtkProgressbar needs full and empty classes
  • 774265 No tilt for wintab devices
  • 774699 [wintab, potential segfault]: list iteration regression causes odd-indexed devices to be ignored during lookup & e.g. present no pressure
  • 775038 Build: Add wayland to GSKs dependencies
  • 774917 [wayland] child subsurfaces need to be placed relative to their parent
  • 774893 Application font sizes scaling gets clamped to 1.00 when starting GtkInspector
  • 774939 GtkLabelAccessible: Initialize link before setting parent
  • 774760 inspector: ensure controller is a GtkGesture
  • 774686 GtkMenu does not unref all GtkCheckMenuItem it creates
  • 774743 GtkNotebook does not unref all GtkBuiltinIcon it creates
  • 774790 GtkTextHandle does not unref all GtkAdjustment it references
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

November 27, 2016

2016-11-27 Sunday.

  • NCC in the morning, back for lunch with Peter, Dianne & Lydia, lovely to see them. Took N. and E. to a pre-exam concert in the afternoon, followed by much slugging; feeling increasingly unwell unfortunately.

libopenraw 0.1.0

I just released libopenraw 0.1.0. It is to be treated as a snapshot, as it hasn't reached the level of functionality I was hoping for, and it has been 5 years since the last release.

Head on to the download page to get a tarball.

There are several new APIs, and some API and ABI breakage. The .pc files are now parallel-installable.

November 25, 2016

Plans for Core Apps Hackfest

I have requested and received a partial sponsorship for a trip to Berlin to participate in the Core Apps Hackfest. This is a great opportunity for me to meet up with others from the GNOME community and immerse myself in contributing. I’ll be going on Saturday the 26th and leaving again on Tuesday the 29th. Some of the things I anticipate I’ll be doing include:

Polari Whereabouts

The hackfest will be an opportunity to meet up with Florian and discuss design for Polari 3.24. Danny, a good friend of mine, is currently working on Initial Setup based on mockups by Allan Day. For Polari 3.24, Rares Visalom might be looking into implementing blank states, and we also hope to land room lists this cycle.

initial-setup-thumb-aday

Quality Assurance on listed Newcomer Applications

The hackfest is a great opportunity to meet a lot of the maintainers of the core applications which are listed as Newcomer applications. I think it could be a useful opportunity to go over the newcomer bugs and identify ways we can improve the inflow of bugs, the expected level of difficulty, and so forth.

Revamp Newcomer Applications to the new TemplateFancy

I have babbled for a long time about wanting to get rid of the ugly HTML tables we currently use to make “fancy” layouts in the GNOME wiki. As part of the Newcomer initiative I made the TemplateFancy template, which applications listed in the Newcomer guide are now required to use. I investigated how to add custom CSS classes with the MoinMoin wiki, and Tom Tryfonidis has managed to completely convert the template from HTML tables to custom CSS classes. This has resulted in a webpage with easier-to-read MoinMoin markup, separation of content and layout, and possibly better mobile-friendliness.

Furthermore, these custom classes are generic and can be applied anywhere on the GNOME wiki, providing a much better opportunity to make visually pleasing documentation. This really excites me! We can be much more flexible in how we visualize information, and I’m hoping this can lead to a more pleasing experience. These custom classes have already landed, so I’ll be looking at converting the newcomer application pages to the new template markup this weekend. :-)

If I have time this weekend, I’ll also look into getting these classes, and examples of how to use them, documented on the wiki. But most of all I’m looking forward to recharging my batteries and participating in the many discussions listed in the agenda on how to bring GNOME’s core applications forward. Thanks to the GNOME Foundation for sponsoring part of this trip!

sponsored-badge-simple

November 24, 2016

Writing GStreamer Elements in Rust (Part 3): Parsing data from untrusted sources like it’s 2016

And again it took quite a while to write a new update about my experiments with writing GStreamer elements in Rust. The previous articles can be found here and here. Since last time, there was also the GStreamer Conference 2016 in Berlin, where I gave a short presentation about this.

Progress was rather slow unfortunately, due to work and other things getting into the way. Let’s hope this improves. Anyway!

There will be three parts again, and especially the last one would be something where I could use some suggestions from more experienced Rust developers about how to solve state handling / state machines in a nicer way. The first part will be about parsing data in general, especially from untrusted sources. The second part will be about my experimental and current proof of concept FLV demuxer.

Parsing Data

Safety?

First of all, you probably all saw a couple of CVEs about security-relevant bugs in (rather uncommon) GStreamer elements going around. While all of them would’ve been prevented by having the code written in Rust (due to by-default array bounds checking), that’s not going to be our topic here. They also would’ve been prevented by using various GStreamer helper APIs, like GstByteReader, GstByteWriter and GstBitReader. So just use those, really, especially in new code (which is exactly the problem with the code affected by the CVEs: it was old and forgotten). Don’t do an accountant’s job, counting how much money/how many bytes you have left to read.
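For anyone who hasn't used them, here is a small sketch of the kind of bounds-checked parsing GstByteReader gives you in C (the header layout here is made up for illustration):

#include <gst/base/gstbytereader.h>

static gboolean
parse_header (const guint8 * data, guint size)
{
  GstByteReader reader;
  guint8 version;
  guint32 length;

  gst_byte_reader_init (&reader, data, size);

  /* Every getter checks the remaining bytes for us and fails
   * gracefully instead of reading out of bounds. */
  if (!gst_byte_reader_get_uint8 (&reader, &version))
    return FALSE;
  if (!gst_byte_reader_get_uint32_be (&reader, &length))
    return FALSE;

  return TRUE;
}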

But yes, this is something where Rust will also provide an advantage by having by-default safety features. It’s not going to solve all our problems, but at least some classes of problems. And sure, you can write safe C code if you’re careful but I’m sure you also drive with a seatbelt although you can drive safely. To quote Federico about his motivation for rewriting (parts of) librsvg in Rust:

Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it’s all due to using C. We’ve gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That’s the kind of 1970s bullshit that Rust prevents.

You can directly replace the word librsvg with GStreamer here.

Ergonomics

The other aspect of parsing data is that it’s usually a very boring part of programming. It should be as painless as possible, as easy as possible to do in a safe way, and after having written your 100th parser by hand you probably don’t want to do that again. Parser combinator libraries like Parsec in Haskell provide a nice alternative. You essentially write down something very close to a formal grammar of the format you want to parse, and out of this comes a parser for the format. Unlike with parser generators like good old yacc, everything is written in the target language though, and there is no separate code generation step.

Rust, being quite a bit more expressive than C, has also made people write parser combinator libraries. None of them are as ergonomic (yet?) as in Haskell, but they are still a big improvement over anything else. There’s nom, combine and chomp, each taking a slightly different approach. Choose your favorite. I decided on nom for the time being.
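
To make this concrete, here is a rough sketch of what parsing the fixed FLV file header could look like with nom’s macro-based API (assuming nom 2.x; the struct and field names here are made up for illustration, this is not the actual code from flavors or the demuxer):

#[macro_use]
extern crate nom;

use nom::{be_u8, be_u32};

// Per the FLV spec: "FLV" signature, version byte, flags byte
// (bit 2: audio present, bit 0: video present), u32 data offset.
#[derive(Debug)]
pub struct FlvHeader {
    pub version: u8,
    pub has_audio: bool,
    pub has_video: bool,
    pub data_offset: u32,
}

named!(pub flv_header<FlvHeader>,
    do_parse!(
        tag!("FLV") >>
        version: be_u8 >>
        flags: be_u8 >>
        data_offset: be_u32 >>
        (FlvHeader {
            version: version,
            has_audio: flags & 0x04 != 0,
            has_video: flags & 0x01 != 0,
            data_offset: data_offset,
        })
    )
);

On success, nom hands back the parsed value together with the remaining, unconsumed input, which is what makes composing small parsers like this into bigger ones so pleasant.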

A FLV Demuxer in Rust

For implementing a demuxer, I decided on using the FLV container format. Mostly because it is super-simple compared to e.g. MP4 and WebM, but also because Geoffroy, the author of nom, wrote a simple header parsing library for it already and a prototype demuxer using it for VLC. I’ll have to extend that library for various features in the near future though, if the demuxer should ever become feature-equivalent with the existing one in GStreamer.

As usual, the code can be found here, in the “demuxer” branch. The most relevant files are rsdemuxer.rs and flvdemux.rs.

Following the style of the sources and sinks, the first is some kind of base class / trait for writing arbitrary demuxers in Rust. It’s rather unfinished at this point though, just enough to get something running. All the FLV-specific code is in the second file, and it’s also very minimal for now. All it can do is play one specific file (or hopefully all other files with the same audio/video codec combination).

As part of all this, I also wrote bindings for GStreamer’s buffer abstraction and a Rust rewrite of the GstAdapter helper type. Both showed Rust’s strengths quite well: the buffer bindings by being able to express various concepts of the buffers in a compiler-checked, safe way in Rust (e.g. ownership, readability/writability), the adapter implementation by being so much shorter (it’s missing features… but still).
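
As a rough illustration of what “compiler-checked” means here, consider this minimal sketch (mine, not the actual bindings) of encoding readability and writability in the type system: a shared borrow only ever yields read access, while mapping a buffer writable requires an exclusive borrow, so misuse becomes a compile error rather than a runtime check:

pub struct Buffer {
    data: Vec<u8>,
}

pub struct ReadMap<'a> {
    pub slice: &'a [u8],
}

pub struct WriteMap<'a> {
    pub slice: &'a mut [u8],
}

impl Buffer {
    // Any number of shared borrows may read concurrently.
    pub fn map_read<'a>(&'a self) -> ReadMap<'a> {
        ReadMap { slice: &self.data }
    }

    // &mut self proves exclusive access at compile time, which is
    // the job gst_buffer_is_writable() does at runtime in C.
    pub fn map_write<'a>(&'a mut self) -> WriteMap<'a> {
        WriteMap { slice: &mut self.data }
    }
}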

So here we are, this can already play one specific file (at least) in any GStreamer based playback application. But some further work is necessary, for which I hopefully have some time in the near future. Various important features are still missing (e.g. other codecs, metadata extraction and seeking), the code is rather proof-of-concept style (stringly-typed media formats, lots of unimplemented!() and .unwrap() calls). But it shows that writing media handling elements in Rust is definitely feasible, and generally seems like a good idea.

If only we had Rust already when all this media handling code in GStreamer was written!

State Handling

Another reason why all this took a bit longer than expected, is that I experimented a bit with expressing the state of the demuxer in a more clever way than what we usually do in C. If you take a look at the GstFlvDemux struct definition in C, it contains about 100 lines of field declarations. Most of them are only valid / useful in specific states that the demuxer is in. Doing the same in Rust would of course also be possible (and rather straightforward), but I wanted to try to do something better, especially by making invalid states unrepresentable.

Rust has this great concept of enums, also known as tagged unions or sum types in other languages. These are not to be confused with C enums or unions, but instead allow multiple variants (like C enums) with fields of various types (like C unions). But all of that in a type-safe way. This seems like the perfect tool for representing complicated state and building a state machine around it.

So much for the theory. Unfortunately, I’m not too happy with the current state of things. It is like this mostly because of Rust’s ownership system getting in my way (rightfully so; how would it have known about additional constraints I didn’t know how to express?).

Common Parts

The first problem I ran into was that many of the states have common fields, e.g.

enum State {
    ...
    NeedHeader,
    HaveHeader { header: Header, to_skip: usize },
    Streaming { header: Header, audio: ... },
    ...
}

When writing code that matches on this and tries to move from one state to another, these common fields would have to be moved. But unfortunately they are (usually) already borrowed by the code and thus can’t be moved to the new variant. E.g. the following fails to compile:

match self.state {
    ...
    State::HaveHeader { header, to_skip: 0 } => {
        self.state = State::Streaming { header: header, ... };
    },
    ...
}

A Tree of States

Repeating the common parts is not nice anyway, so I went with a different solution by creating a tree of states:

enum State {
    ...
    NeedHeader,
    HaveHeader { header: Header, have_header_state: HaveHeaderState },
    ...
}

enum HaveHeaderState {
    Skipping { to_skip: usize },
    Streaming { audio: ... },
}

Apart from making it difficult to find names for all of these, and leading to relatively deeply nested code, this works:

match self.state {
    ...
    State::HaveHeader { ref header, ref mut have_header_state } => {
        match *have_header_state {
            HaveHeaderState::Skipping { to_skip: 0 } => {
                *have_header_state = HaveHeaderState::Streaming { audio: ... };
            }
            ...
        }
    },
    ...
}

If you look at the code however, this causes it to be much bigger than needed, and I’m also not sure yet how it would be possible to nicely move “backwards” one state if that situation ever appears. Also the previous problem still exists, although less often: if I were to match on to_skip here by reference (or if it were not a Copy type), the compiler would prevent me from overwriting have_header_state for the same reasons as before.

So my question for the end: How are others solving this problem? How do you express your states and write the functions around them to modify the states?

Update

I had actually implemented the state handling as a State -> State function before (and forgot about that), which seems conceptually the right thing to do. It however has a couple of other problems. Thanks for the suggestions so far; it seems like I’m not alone with this problem at least.
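
For reference, here is a minimal sketch of that State -> State approach (my reconstruction, not the actual demuxer code): because the transition function owns the old state, common fields can simply be moved between variants, and std::mem::replace bridges the gap to the &mut self world:

use std::mem;

struct Header {
    version: u8,
}

enum State {
    NeedHeader,
    HaveHeader { header: Header, to_skip: usize },
    Streaming { header: Header },
}

// Owning `state` means `header` can be moved between variants freely.
fn advance(state: State) -> State {
    match state {
        State::HaveHeader { header, to_skip: 0 } => {
            State::Streaming { header: header }
        }
        other => other,
    }
}

struct Demuxer {
    state: State,
}

impl Demuxer {
    fn step(&mut self) {
        // Swap in a cheap placeholder so we briefly own the old state.
        let old = mem::replace(&mut self.state, State::NeedHeader);
        self.state = advance(old);
    }
}

One known caveat of this style (not necessarily the problem meant above): if anything panics between the swap and the write-back, the struct is left in the placeholder state.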

Dig Yourself Out of a 'Git Commit Amend' Hole With Reflog

Raise your hand if you’ve ever git commit’d something you shouldn’t have. (It’s okay, this is a judgement-free space.)

And raise your hand if you’ve ever used git commit --amend --no-edit1 to try and hide your terrible, terrible shame. (We’re not even gonna talk about git push -f origin master. Don’t do it, kids.)

And raise your hand one last time if you’ve ever git commit --amend --no-edit’d and then paused and looked at your computer and were suddenly struck by the realization that you’d ruined everything.

That last one might be just me, but I’m going to pretend it happens to other people to make myself feel better. (Like all of those times I thought I was fixing a slightly incorrect commit, only to realize I had instead wiped out all of my latest work. Whoooops.)

Well, I put in an appearance at Git Merge 2016 (an all-around delightful event), and this gem was among the many things I learned there. This gem, friends, is the reflog and HEAD@{x}.

The reflog is… well, it’s a log of your refs. Refs being references to commits, which might be things like branch names (because recall that branch names are just human-readable references to commits) or this HEAD thing, which is a pointer to the commit you’re on right now. In fact, if you went into a folder that was a git repo and looked at .git/refs/heads/master, you’d see a file with a single commit hash in it; that’s the current tip of master, i.e. the commit that your “master” ref is pointing to.

Now, refs in and of themselves aren’t gonna solve your git commit --amend debacle, but it turns out that git is really smart sometimes. In this particular case, the smart thing that git does is keep track of everywhere your HEAD has been pointing. This info is stored in .git/logs/HEAD, and looks something like this:

0000000000000000000000000000000000000000 5a90f86dbb681f914790fbe494cbc5680ce372cc Maia <maia.mcc@gmail.com> 1461979447 -0400    commit (initial): add a file with some stuff
5a90f86dbb681f914790fbe494cbc5680ce372cc fdaec86d18b70bf8b9f87e74b473dcdb53d5b814 Maia <maia.mcc@gmail.com> 1461979493 -0400    commit: totally innocuous change
fdaec86d18b70bf8b9f87e74b473dcdb53d5b814 d77508cfe5df412158ad8a19540aca0ba195348f Maia <maia.mcc@gmail.com> 1461979518 -0400    commit (amend): totally innocuous change
d77508cfe5df412158ad8a19540aca0ba195348f fdaec86d18b70bf8b9f87e74b473dcdb53d5b814 Maia <maia.mcc@gmail.com> 1461979572 -0400    reset: moving to HEAD@{1}
fdaec86d18b70bf8b9f87e74b473dcdb53d5b814 514dd505826ddc1276823506e7682b33b64547b6 Maia <maia.mcc@gmail.com> 1461980303 -0400    commit (merge): Merge commit 'd77508c'

If you find that a little hard to parse (and you probably do), you can (and should) get at it in a more human-readable form with the command git reflog show:

fdaec86 HEAD@{1}: checkout: moving from master to head^
fdaec86 HEAD@{2}: commit (merge): Merge commit 'd77508c'
fdaec86 HEAD@{3}: checkout: moving from d77508cfe5df412158ad8a19540aca0ba195348f to master
d77508c HEAD@{4}: checkout: moving from master to HEAD@{3}
fdaec86 HEAD@{5}: reset: moving to HEAD@{1}
d77508c HEAD@{6}: checkout: moving from fdaec86d18b70bf8b9f87e74b473dcdb53d5b814 to master
fdaec86 HEAD@{7}: checkout: moving from master to fdaec86d18b70bf8b9f87e74b473dcdb53d5b814
d77508c HEAD@{8}: commit (amend): totally innocuous change
fdaec86 HEAD@{9}: commit: totally innocuous change
5a90f86 HEAD@{10}: commit (initial): add a file with some stuff

So I had always thought that git commit --amend amended your current commit: wrote all of your changes onto the same commit and called it a day. But it turns out that it doesn’t; rather, it creates a whole new commit in which to store your amended changes. Like, look, you can see it right there in the reflog: the same commit message, before and after amend, with two different hashes, whoadamn! So whatever my commit looked like before I mistakenly amended it is still out there somewhere in the void, and with reflog, I can get that hash! From here, getting back your lost work is simple: git checkout [lost-commit-hash], git reset --hard [lost-commit-hash], what have you.

But there’s one more nifty thing here: all the HEAD@{x} numbers in the reflog are shortcuts to those commits. Much the same way that you can use HEAD^^^ to point to the commit three generations up from your current head, you can use HEAD@{3} to point to the commit from three movements of HEAD ago. That makes “oh crap, I need to get back to the last commit I was on before I did [stupid thing]” even easier: instead of having to go to the reflog and find the commit, you can just git checkout HEAD@{1} to get to whatever commit your head was previously on. (The commit your head is currently on, of course, being HEAD@{0}.)

So, there you go: a cool git thing I learned recently. Nothing earth-shattering, but hopefully a useful tip for someone out there. Happy gitting!


  1. For those of you who don’t know, this is git commit --amend’s older and better-looking cousin: it’s git commit --amend except that it automatically reuses the commit message of the commit you’re amending, rather than prompting you for a new one.

FreeDOS 1.2 Release Candidate 2

We started FreeDOS in 1994 to create a free and open source version of DOS that anyone could use. We've been slow to make new releases, but DOS isn't exactly a moving target anymore. New versions of FreeDOS are mostly about updating the software and making FreeDOS more modern. We made our first Alpha release in 1994, and our first Beta in 1998. In 2006, we finally released FreeDOS 1.0, and updated to FreeDOS 1.1 in 2012. And all these years later, it's exciting to see so many people using FreeDOS in 2016.

If you follow my work on the FreeDOS Project, you should know that we are working towards a new release of FreeDOS. You should see the official FreeDOS 1.2 release on December 25, 2016.

We are almost ready for the new FreeDOS 1.2 release! Please help us to test this new version. Download the FreeDOS 1.2 RC2 ("Release Candidate 2") and try it out. If you already have an operating system on your computer (such as Linux or Windows) we recommend you install FreeDOS 1.2 RC2 in a PC emulator or "virtual machine." Report any issues to the freedos-devel email list.

You can download FreeDOS 1.2 RC2 from our Download page or at ibiblio.

Here's what you'll find:

  • Release notes
  • Changes from FreeDOS 1.1
  • FD12CD.iso (full installer CDROM); if you have problems with this image, try FD12LGCY.iso
  • FD12FLOPPY.zip (boot floppy for CDROM)
  • FD12FULL.zip (full installer USB image)
  • FD12LITE.zip (minimal installer USB image)

Thanks to everyone in the FreeDOS Project for their work towards this new release! There are too many of you to recognize individually, but you have all helped enormously. Thank you!

The last time I posted about FreeDOS 1.2 RC1, several news outlets picked up the story from this blog. Feel free to link here again, but please also link to the official announcement at www.freedos.org or www.freedos.org/download.

Also, if you'd like an interview about the FreeDOS Project and our upcoming FreeDOS 1.2 release, you can email me at jhall@freedos.org.

Importing AWS Lambda to IronFunctions

Imagine AWS Lambda being:

  • On-Premise (host it anywhere)
  • Language Agnostic (writing lambda functions in any language: Go, Rust, Python, Scala, Elixir you name it...)
  • Horizontally Scalable
  • Open-Source

Would be nice, wouldn’t it?

IronFunctions.

Well, fear not: Iron.io released IronFunctions last week, and it’s all that.

IronFunctions supports a simple stdin/stdout API, and it can also import your existing functions directly from AWS Lambda.

Getting started!

You can grab the latest code from GitHub or just run it in docker:

docker run --rm -it --name functions --privileged -v $PWD/data:/app/data -p 8080:8080 iron/functions  

Currently IronFunctions supports importing the following languages from AWS Lambda:

The almighty fn tool

The fn tool includes a set of commands to act on Lambda functions. Most of these are described in the getting-started document in the repository. One more subcommand is aws-import.
If you have an existing AWS Lambda function, you can use this command to automatically convert it to a Docker image that is ready to be deployed on other platforms.

Credentials

To use this, either have your AWS access key and secret key set in config files, or in environment variables. In addition, you'll want to set a default region. You can use the aws tool to set this up. Full instructions are in the AWS documentation.

Importing

The aws-import command is constructed as follows:

fn lambda aws-import <arn> <region> <image>  
  • arn: the ARN that uniquely identifies the AWS Lambda resource
  • region: the region in which the lambda is hosted
  • image: the name of the Docker image to create, which should have the format <Docker Hub username>/<image name>

Assuming you have a lambda with the following arn arn:aws:lambda:us-west-2:123141564251:function:my-function, the following command:

fn lambda aws-import arn:aws:lambda:us-west-2:123141564251:function:my-function us-east-1 user/my-function  

will import the function code from the region us-east-1 to a directory called ./user/my-function. Inside the directory you will find the function.yml, Dockerfile, and all the files needed for running the function.

Using Lambda with Docker Hub and IronFunctions requires that the Docker image be named <Docker Hub username>/<image name>. This is used to uniquely identify images on Docker Hub. You should use the <Docker Hub username>/<image name> as the image name with aws-import to create a correctly named image.

Publish and Voila!

You can then publish the imported lambda as follows:

./fn publish -d ./user/my-function

Now the function can be reached via http://$HOSTNAME/r/user/my-function

Make sure you check out the importing documentation, and feel free to hang out and ask questions on the slack channel.

Stay tuned for my next blog post on adding Cargo and Rust support to the Fn Tools :D

November 23, 2016

I’m going to the Core Apps Hackfest

In this exact moment, I’m packing up my stuff to attend the Core Apps Hackfest organized by Carlos Soriano and kindly hosted by Kinvolk. It’ll happen in Berlin, Germany.

I have mixed feelings about this trip.

The Good:

  • I’m super excited to see old friends again. Do you guys remember that Carlos was my GSoC mentor? Kudos to him for organizing this event; I want to see you again, dude!
  • Looks like Berlin is a great place to be and, even though this is a focused trip, I want to quickly visit one or two historical places.
  • This hackfest has an incredibly high potential to be productive. Major core contributors will be there.
  • I’m positively surprised that my sponsorship request was accepted. Traveling from Brazil is very expensive. I can’t thank the GNOME Foundation enough for that. I really wouldn’t be able to attend without it.
  • That’s a great opportunity to get some fresh winter German air. There’s a chance I’ll see snow for the first time ever (ya’ know, Brazil has no snow…).
  • Even if the average German consumes a lot of meat, apparently there is an unbelievably high number of veg(etari)an restaurants nearby.

The Bad:

  • It’s cold. I lived my entire life in a place where the worst possible weather is worth a jacket. I’m talking about the possibility of snow. Frozen water falling from the damn sky.
  • I really hope I can be productive. The last few weeks of my life were insane, and my performance is not at its peak. I want every single cent spent by the GNOME Foundation on me to be worth it.
  • I also hope that Endless benefits from this hackfest’s outcome.
  • I don’t speak German. I don’t know how receptive they’ll be to a foreigner in their country who doesn’t speak a single word of their language. (Sorry folks.)
  • I’ll miss my wife!

Of course, even if I have some fears, I’m extremely happy with this hackfest. Thanks again to the GNOME Foundation for sponsoring me, and thanks to Endless for allowing me to attend.

banner down

sponsored-badge-shadow

November 22, 2016

First Rust+GObject coding session

Last Wednesday, Nicholas Matsakis from Rust/Mozilla and I sat down on a video chat to start getting our hands dirty on moving ahead with making Rust a part of the GNOME ecosystem.

There are two efforts to produce GNOME bindings for Rust, gi-rust and gtk-rs, but neither of them provides the means to emit GObject subclasses, so we decided to tackle this problem.

During the first half of the session we reviewed how GObjects are created from C; I used Vala’s output to illustrate how this is done. We didn’t dive into too much detail on how properties, signals, or interfaces are implemented; instead we focused in detail on how to inherit from a GObject.

After that we went ahead and wrote what is probably the first piece of Rust code that emits a GObject, in this playground repo. Nothing too fancy, but a starting point to get us somewhere.
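
For context, here is a hedged sketch (not the playground repo’s actual code) of the raw FFI plumbing such an experiment sits on top of: hand-written declarations to instantiate a plain GObject from Rust. A real project would use generated bindings such as the gobject-sys crate instead:

use std::os::raw::{c_char, c_void};
use std::ptr;

// GType is a gsize under the hood.
type GType = usize;

#[link(name = "gobject-2.0")]
extern "C" {
    fn g_object_get_type() -> GType;
    fn g_object_new(object_type: GType,
                    first_property_name: *const c_char,
                    ...) -> *mut c_void;
    fn g_object_unref(object: *mut c_void);
}

fn main() {
    unsafe {
        // Create a plain GObject with no construct properties,
        // then drop our reference again.
        let obj = g_object_new(g_object_get_type(), ptr::null());
        g_object_unref(obj);
    }
}

Subclassing goes further than this: it means registering a new GType with g_type_register_static() and filling in class and instance init functions, which is the part the session focused on.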

The next step is to figure out how to take this and turn it into something that a Rust developer would find Rust-like. One of the options we’re exploring is to use Rust macros and try something like this:

gobject!{
  class Foo : Bar {
    fn method_a () -> u8 {
      ...
    }
  }
}

The other alternative would be to decorate structs, traits and whatnot, but this would be a bit more cumbersome. It turns out that Rust’s macro system is quite powerful and allows you to define custom syntax.

That’s the progress so far; the next step is to start implementing some bits and pieces of the macro and see how far we can get. I’d like to thank Mozilla, and Nicholas especially, for the amount of attention and effort they are putting into this. I must say I’m quite excited and looking forward to making progress on this effort.

Stay tuned, and if you’d like to help don’t hesitate to reach out to us!

November 21, 2016

A tale of cylinders and shadows

Like I wrote before, we at Collabora have been working on improving WebKitGTK+ performance for customer projects, such as Apertis. We took the opportunity brought by recent improvements to WebKitGTK+ and GTK+ itself to make the final leg of drawing contents to screen as efficient as possible. And then we went on investigating why so much CPU was still being used in some of our test cases.

The first weird thing we noticed was that performance was actually degraded on Wayland compared to running under X11. After some investigation we found that a lot of time was being spent inside GTK+, painting the window’s background.

Here’s the thing: the problem only showed under Wayland because in that case GTK+ is responsible for painting the window decorations, whereas in the X11 case the window manager does it. That means all of that expensive blurring and rendering of shadows fell on GTK+’s lap.

During the web engines hackfest, a couple of months ago, I delved deeper into the problem and noticed, with Carlos Garcia’s help, that it was even worse when HiDPI displays were thrown into the mix. The scaling made things unbearably slower.

You might also be wondering why painting window decorations would be such a problem, anyway. They should only be repainted when a window changes size or state, which should be pretty rare, right? Right, and that is exactly one of the reasons why we had to make it fast: the resizing experience was pretty terrible. But we’ll get back to that later.

So I dug into that, made a few tries at understanding the issue and came up with a patch showing how applying the blur was being way too expensive. After a bit of discussion with our own Pekka Paalanen and Benjamin Otte we found the root cause: a fast path was not being hit by pixman due to the difference in scale factors on the shadow mask and the target surface. We made the shadow mask scale the same as the surface’s and voilà, sane performance.

I keep talking about this being a performance problem, but how bad was it? In the following video you can see how huge the impact in performance of this problem was on my very recent laptop with a HiDPI display. The video starts with an Epiphany window running with a patched GTK+ showing a nice demo the WebKit folks cooked for CSS animations and 3D transforms.

After a few seconds I quickly alt-tab to the version running with unpatched GTK+ – I made the window the exact size and position of the other one, so that it is under the same conditions and the difference can be seen more easily. It is massive.

Yes, all of that slowdown was caused by repainting window shadows! OK, so that solved the problem for HiDPI displays and made resizing saner, great! But why is GTK+ repainting the window even if only the contents are changing, anyway? Well, that turned out to be an off-by-one bug in the code that checks whether the invalidated area includes part of the window decorations.

If the area being changed spanned the whole window width, say, it would always cause the shadows to be repainted. By fixing that, we now avoid all of the shadow drawing code when we are running full-window animations such as the CSS poster circle or gtk3-demo’s pixbufs demo.

As you can see in the video below, the gtk3-demo running with the patched GTK+ (the one on the right) is using a lot less CPU and has smoother animation than the one running with the unpatched GTK+ (left).

Pretty much all of the overhead caused by window decorations is gone in the patched version. It is still using quite a bit of CPU to animate those pixbufs, though, so some work still remains. Also, the overhead added to integrate cairo and GL rendering in GTK+ is pretty significant in the WebKitGTK+ CSS animation case. Hopefully that’ll get much better from GTK+ 4 onwards.

Code review in open source projects: Influential factors and actions

Coming from “Prioritizing volunteer contributions in free software development”, the Wikimedia Foundation allowed me to spend time on research about code review (CR) earlier in 2016. The theses and bullet points below incorporate random literature and comments from numerous people.
While the results might also be interesting for other free and open source software projects, they might not apply to your project for various reasons.

In Wikimedia we would like to review and merge better code faster, especially patches submitted by volunteers. Code review should be a tool and not an obstacle.
Benefits of code review are knowledge transfer, increased team awareness, and finding alternative solutions. Good debates help to get to a higher standard of coding and drive quality.[A1]

I see three dimensions of potential influential factors and potential actions (that often cannot be cleanly separated):

  • 3 aspects: social, technical, organizational.
  • 2 roles: contributor, reviewer.
  • 3 factors: Patch-Acceptance/Positivity-Likeliness, Patch-Time-to-review/merge, Contributor onboarding (not covered here).

In general, “among the factors we studied, non-technical (organizational and personal) ones are better predictors” (meaning: possible factors that might affect the outcome and interval of the code review process) “compared to traditional metrics such as patch size or component, and bug priority.”[S1]

Note that Wikimedia plans to migrate its code review infrastructure from Gerrit to Phabricator Differential at some point.

Unstructured review approach

An unstructured review approach potentially demotivates first-time contributors; fast and structured feedback is crucial for keeping them engaged.

Set up and document a multi-phase, structured patch review process for reviewers: Three steps proposed by Sarah Sharp for maintainers / reviewers[A2], quoting:

  • Fast feedback whether it is wanted: Is the idea behind the contribution sound? / Do we want this? Yes, no. If the contribution isn’t useful or it’s a bad idea, it isn’t worth reviewing further. Or “Thanks for this contribution! I like the concept of this patch, but I don’t have time to thoroughly review it right now. Ping me if I haven’t reviewed it in a week.” The absolute worst thing you can do during phase one is be completely silent.[A2]
  • Architecture: Is the contribution architected correctly? Squash the nit-picky, perfectionist part of yourself that wants to comment on every single grammar mistake or code style issue. Instead, only include a sentence or two with a pointer to coding style documentation, or any tools they will need to run their contribution through.[A2]
  • Polishing: Is the contribution polished? Get to comment on the meta (non-code) parts of the contribution. Correct any spelling or grammar mistakes, suggest clearer wording for comments, and ask for any updated documentation for the code[A2]

Lack of enough skillful, available, confident reviewers and mergers

Not enough skillful or available reviewers and potential lack of confident reviewers[W1]? Not enough reviewers with rights to actually merge into the codebase?

  • Capacity building: Discuss handing out code review rights to more (trusted) volunteers by recognizing active users who mark patches as good-to-go or needs-improvement (based on statistics)? Encourage them to become habitual and trusted reviewers; actively nominate them to become maintainers[W2]? Potentially identify people not exercising their code review rights anymore. Again this requires statistics (to identify very active reviewers) and stakeholders (to decide on nominations).
  • Review current code review patch approval handout practice (see Wikimedia’s related documentation about +2 rights in Gerrit).
  • Consider establishing prestigious roles for people, like “Reviewers”?[W3]
  • “we recommend including inexperienced reviewers so that they can gain the knowledge and experiences required to provide useful comments to change authors”[S2]; Reviewers who have prior experience give more useful comments as they have more knowledge about design constraints and implementation.[S2]

Under-resourced or unclear responsibilities

Lack of repository owners / maintainers, or under-resourced or unclear responsibilities when everyone expects someone else to review. (For the MediaWiki core code repository specifically, see related tasks T115852 and T1287.)

“Changes failing to capture a reviewer’s interest remain unreviewed”[S3] due to self-selecting process of reviewers, or everybody expects another person in the team to review. “When everyone is responsible for something, nobody is responsible”[W4].

  • Have better statistics (on proposed patches waiting for review for a long time) to identify unmaintained areas within a codebase or codebases with unclear maintenance responsibilities.
  • Define a role to “Assign reviews that nobody selects.”[S3] (There might be (old) code areas that only one or zero developers understand.) Might need an overall “Code Review wrangler” position similar to a Bugwrangler/Bugmaster.
  • Clarify and centrally document which Engineering/Development/Product teams are responsible for which codebases, and Team/Maintainer ⟷ Codebase/Repository relations (Example: “How Wikimedia Foundation’s Reading team manages extensions”)
  • Actively outreach to volunteers for unmaintained codebases via Requesting repository ownership? Might need an overall “Code Review wrangler” position similar to a Bugwrangler/Bugmaster.
  • Advertise a monthly “Project in need of a maintainer” campaign on a technical mailing list and/or blog posts?

Hard to identify good reviewer candidates

Hard for new contributors to identify and add good reviewers.

“choice of reviewers plays an important role on reviewing time. More active reviewers provide faster responses” but “no correlation between the amount of reviewed patches on the reviewer positivity”.[S1]

  • Check “owners” tool in Phabricator “for assigning reviewers based on file ownership”[W5] so reviewers get notified of patches in their areas of interest. In Gerrit this exists but is limited.
  • Encourage people to become project members/watchers.[W6]
  • Organization specific: Either have automated updating of outdated manual list of Developers/Maintainers, or replace individual names on the list of Developers/Maintainers by links to Phabricator project description pages.
  • In the vague technical future, automatic reviewer suggestion systems could help[S2], like automatically listing people who lately touched code in a code repository or related tasks in an issue tracking system and the length of their current review queue. (Proofs of concept have been published in scientific papers but code is not always made available.)

Unhelpful reviewer comments

Due to unhelpful reviewer comments, contributors spend time on creating many revisions/iterations before successful merge.

  • Make sure documentation for reviewers states:
    • Reviewers’ CR comments considered useful by contributors: identifying functional issues; identifying corner cases potentially not covered; suggestions for APIs/designs/code conventions to follow.[S2]
    • Reviewers’ CR comments considered somewhat useful by contributors: coding guidelines; identifying alternative implementations or refactoring[S2]
    • Reviewers’ CR comments considered not useful by contributors: Authors consider reviewers praising on code segments, reviewers asking questions to understand the implementation, and reviewers pointing out future issues not related to the specific code (should be filed as tasks) as not useful.[S2]
    • Avoid negativity and ask the right questions the right way. As a reviewer, ask questions instead of making demands to foster a technical discussion: “What do you think about…?” “Did you consider…?” “Can you clarify…?” “Why didn’t you just…” provides a judgement, putting people on the defensive. Be positive.[A1]
    • If you learned something or found something particular well, give compliments. (As code review is often about critical feedback only.)[A1]
    • Tool specific: Agree and document how to use Gerrit’s negative review (CR-1): “Some people tend to use it in an ‘I don’t like this but go ahead and merge if you disagree’ sense which usually does not come across well. OTOH just leaving a comment makes it very hard to keep track – I have been asked in the past to -1 if I don’t like something but don’t consider it a big deal, because that way it shows up in Gerrit as something that needs more work.”[W7]
    • Stakeholders with different areas of expertise need to split up reviewing the corresponding aspects of a larger patch.

Weak review culture

Prioritization / weak review culture: more pressure to write new code than to review patches contributed? Code review “application is inconsistent and enforcement uneven.”[W8]

  • Introduce and foster routine and habit across developers to spend a certain amount of time each day for reviewing patches (or part of standup), and team peer review on complex patches[A1].
  • Write code to display “a prominent indicator of whether or not you’ve pushed more changesets than you’ve reviewed”[W9]?
  • Technical: Allow finding / explicitly marking first contributions by listing recent first contributions and their time to review on korma’s code_contrib_new_gone in T63563. Someone responsible to ping, follow up, and (with organizational knowledge) to add potential reviewers to such first patches. Might need an overall “Code Review wrangler” position similar to a Bugwrangler/Bugmaster.
  • Organization specific: Contact the WMF Team Practices Group about their thoughts how this can be fostered?

Workload of existing reviewers

Workload of existing reviewers; too many items on their list already.

Reviewer’s queue length: “the shorter the queue, the more likely the reviewer is to do a thorough review and respond quickly”; the longer the queue, the more likely it is that the review takes longer but has a “better chance of getting in” (due to a more sloppy review?)[S1].

  • Code review tool support to propose reviewers, or to display how many unreviewed patches a reviewer is already added to, so the author can choose other reviewers. There is a proposal to add reviewers to patches[W2], but this requires good knowledge of the community members, as otherwise it just creates more noise.
  • Potentially document that “two reviewers find an optimal number of defects – the cost of adding more reviewers isn’t justified […]”[S3]
    • Documentation for reviewers: “we should encourage people to remove themselves from reviewers when they are certain they won’t review the patch. A lot of noise and wasted time is created by the fact that people are unable to keep their dashboards clean”[WA]
  • Tool specific: Gerrit’s negative review (CR-1) gets lost when a reviewer removes themselves (bug report) hence Gerrit lists (more) items which look unreviewed. Check if same problem exists in Phabricator Differential?
  • Tool specific: Agree whether ‘Patch cannot be merged due to conflicts; needs rebasing’ should be a reason to give CR-1[WB] in order to get a ‘cleaner’ list? (But depending on the Continuous Integration infrastructure tools of your project, such rejection via static analysis might happen automatically anyway.)

Poor quality of contributors’ patches

Due to the poor quality of contributors’ patches, reviewers spend time on reviewing many revisions/iterations before a successful merge. This might make reviewers ignore a patch instead of reviewing it again and again, giving yet another negative CR-1 review.

  • Make sure documentation for contributors states:
    • Small, independent, complete patches are more likely to be accepted.[S4]
    • “[I]f there are more files to review [in your patch], then a thorough review takes more time and effort”[S2] and “review effectiveness decreases with the number of files in the change set.”[S2]
    • Small patches (a maximum of 4 lines changed) “have a higher chance to be accepted than average, while large patches are less likely to be accepted” (probability) but “one cannot determine that the patch size has a significant influence on the time until a patch is accepted” (time)[S5]
    • Patch Size: “Review time [is] weakly correlated to the patch size” but “Smaller patches undergo fewer rounds of revisions”[S1]
    • Reasons for rejecting a patch (not all are equally decisive; “less decisive reasons are usually easier to judge” when it comes to costs explaining rejections):[S6]
      • Problematic implementation or solution: Compilation errors; Test failures; Incomplete fix; Introducing new bugs; Wrong direction; Suboptimal solution (works but there is a simpler or more efficient way); Solution too aggressive for end users; Performance; Security
      • Difficult to read or maintain: Including unnecessary changes (to split into separate patch); Violating coding style guidelines; Bad naming (e.g. variable names); Patch size too large (but rarely matters as it’s ambiguous – if necessary it’s not a problem); Missing docs; Inconsistent or misleading docs; No accompanied test cases (When should “No accompanied test cases” be a reason for a negative review? In which cases do we require unit tests?[W4] This should be more deterministic); Integration conflicts with existing code; Duplication; Misuse of API; risky changes to internal APIs; not well isolated
      • Deviating from the project focus or scope: Idea behind is not of core interest; irrelevant or obsolete
      • Affecting the development schedule / timing: Freeze; low urgency; Too late
      • Lack of communication or trust: Unresponsive patch authors; no discussion prior to patch submission; patch authors’ expertise and reputation[S6]
      • cf. Reasons of the Phabricator developers why patches can get rejected
    • There is a mismatch of judgement: Patch reviewers consistently consider test failures, incomplete fix, introducing new bugs, suboptimal solution, inconsistent docs way more decisive for rejecting than authors.[S6]
    • Propose guidelines for writing acceptable patches:[S6]
      • Authors should make sure that patch is in scope and relevant before writing patch
      • Authors should be careful to not introduce new bugs instead of only focussing on the target
      • Authors should not only care if the patch works well but also whether it’s an optimal solution
      • Authors should not include unnecessary changes and should check that corner cases are covered
      • Authors should update or create related documentation[S6] (for Wikimedia, see Development policy)
    • Patch author experience is relevant: Be patient and grow. “more experienced patch writers receive faster responses” plus more positive ones. In WebKit, contributors’ very first patch is likely to get positive feedback while for their 3rd to 6th patch it is harder.[S1]
  • Agree on who is responsible for testing and document responsibility. (Tool specific: Phabricator Differential can force patch authors to fill out a test plan.)[W7]

Likeliness of patch acceptance depends on: Developer experience, patch maturity; Review time impacted by submission time, number of code areas affected, number of suggested reviewers, developer experience.[S7]

Hard to realize a repository is unmaintained

Hard to realize how (in)active a repository is for a potential contributor.

  • Implement displaying “recent activity” information somewhere in the code repository browser and code review tool, to communicate expectations.
  • Have documentation that describe steps how to ask for help and/or take over maintainership, to allow contributors to act if interested in the code repository. For Wikimedia these docs are located at Requesting repository ownership.

No culture to improve changesets by other contributors

Changesets are rarely picked up by other developers[WB]. After merging, “it is very difficult to revert it or to get original developers to help fix some broken aspect of a merged change”[WB] regarding followup fixing culture.

  • Document best practices to amend a change written by another contributor if you are interested in bringing the patch forward.

Hard to find related patches

Hard to find existing “related” patches in a certain code area when working on your own patch in that area, or when reviewing several patches in the same code area. (Hence there might also be some potential rebase/merge conflicts[WB] to avoid if possible.)

  • Phabricator Differential offers “Recent Similar Open Revisions”.[WC] Gerrit might have such a feature in a newer version.[WD]

Lack of synchronization between teams

Lack of synchronization between developer teams: team A stuck because team B doesn’t review their patches?

  • Organization specific: Wikimedia has regular “Scrum of Scrum” meetings of all scrum masters across teams, to communicate when the work of a team is blocked by another team.

Please comment on any important factors that you have experienced but are missing here!

References

This week in GTK+ – 25

In this last week, the master branch of GTK+ has seen 167 commits, with 8048 lines added and 6858 lines removed.

Planning and status
Notable changes

On the master branch:

  • The default value of the GtkFileChooser:local-only property is now FALSE, which means that file selection dialogs will automatically show non-local resources.
  • Benjamin Otte introduced the GtkSnapshot API, which works as a GskRenderNode builder for widgets, and aims to replace the immediate mode gtk_render_* family of functions.
  • Benjamin also changed the GtkDrawingArea API, which now uses an explicit callback function instead of the generic GtkWidget::draw signal.
  • Finally, Benjamin has implemented support for 3D CSS transformations in GTK+.
  • The GDK API to read back the contents of a GdkWindow into a GdkPixbuf has been removed, as its behavior and result are platform-dependent.
  • Matthias Clasen updated the GTK+ 3.x → 4.x porting guide, and the API reference, with the newest API additions.
  • Simon Steinbeiss updated the CSS styling for GtkProgressbar to add the empty and full classes when the progress is set to 0.0 or 1.0, respectively.
Bugs fixed
  • 774475 wayland: gtk+ prevents using subsurfaces if the parent is not root
  • 774476 surfaces with no outputs get scale factor reset
  • 774634 GtkPlacesView does not unref all GDaemonFileEnumerator it references
  • 773007 GtkFilechooser gives completion for non-matching extensions
  • 774609 small fix to foreign drawing spinbutton demo
  • 773587 [PATCH] recent-manager: Add a limit to the list’s size
  • 774352 GtkAppChooserWidget does not unref all GAppInfo it references
  • 774347 Fails to build: unknown type name GdkColor
  • 773601 Display size detected as 0x0 pixels when RANDR is not available
  • 774614 Wrong #include in Print docs
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

Last batch of ColorHugALS

I’ve got 9 more ColorHugALS devices in stock, and when they are sold there will be no more for sale. With all the supplier costs recently going up, my “sell at cost price” has turned into “make a small loss on each one”, which isn’t sustainable. It’s all OpenHardware, both the hardware design and the firmware itself, so if someone wanted to start building them for sale they would be doing it with my blessing. Of course, I’m happy to continue supporting the existing sold devices into the distant future.

colorhug-als1-large

The original goal has in part been achieved: the kernel and userspace support for the new SensorHID protocol works great, and ambient light functionality works out of the box for more people on more hardware. I’m slightly disappointed more people didn’t get involved in making the ambient lighting algorithms smarter, but I guess it’s quite a niche area of development.

Plus, in the Apple product development sense, killing off one device lets me start selling something else OpenHardware in the future. :)

GNOME Days in Bucharest

From Wednesday up to Saturday we had the pleasure of hosting some very interesting events related to GNOME and open-source.

First of them was the GSoC presentation where previous GSoC students shared their experience with those eager to try it next summer. We had a lot of people interested, thus we nearly filled a whole amphitheater.

The next event (Thursday) was about why open source is so important, not only for individuals but for the whole world. Here we had Carlos as a speaker, and I must say that he did a really great job. He gave away some Red Hat swag to those who asked/answered questions.

carlos1Carlos talking about open-source

Even though it was 19:00 and everyone must have had a tiring day at the university, we still had interested students.

carlos2

Compared to what was going to happen during the third event, the first two were a piece of cake🙂.

The third event, which was on Saturday, was a small hackfest (we thought that it was going to be small. We couldn’t have been more wrong). So we announced the event, tried to advertise it as well as we could (we got help from our teacher, Răzvan Deaconescu, he also helped us with the space needed for the event and a lot of other stuff, so we are very thankful to him🙂 ).

Then Saturday came. And then the students came. And then we filled a whole room. Then two rooms. Then three rooms. Then we even needed a fourth one that was half full. Sooo… yeah. We were just amazed.

hack1Here we can see Carlos swimming in a room full of open-source loving students.

The main goal of our event was for everyone to at least build a project. You might say that we didn’t ask for much, right? I mean, come on, how hard can it be to follow some instructions and build a project, right? Well, thanks to JHBuild and its wonderful way of ruining our hopes, we spent about 3 to 4 hours trying to just build something.

Needless to say that the majority of the students didn’t manage to build anything. But it wasn’t their fault. Since things weren’t that great, we decided that the most we could do was a live demo of how the normal workflow should have been.

demoThe projector was actually working, really.

After Carlos filed a test bug (for Nautilus) on Bugzilla, Răzvan went on and fixed it. Thus, we showed them how a bug is found, filed, fixed and pushed for review on Bugzilla.

trollingcarlosI don’t know what he was talking about, but I tried to troll him🙂

Afterwards, Carlos asked the audience if at least some of them wanted to go back to the rooms and try to further build their projects OR (pay attention) go grab something to drink. They chose to go back and try to build even more🙂 (it really happened, yeah). And believe it or not, some of them even fixed some newcomer bugs.

It was an amazing week, an amazing experience, and I bet some of the students will continue building (a process that will surely take about two more weeks of their innocent lives, thanks to JHBuild, but their perseverance will prevail) and then even contribute to GNOME. We promised some more such events in the future, so get that Flatpak done, PLEASE!

Special thanks to Carlos Soriano and Răzvan Deaconescu, the ones that made this possible! Also, Alexandru Căciulescu helped us with promoting the events so we owe him one🙂.


November 20, 2016

Talking at Def.camp 2016 in Bucharest, Romania

Just at the beginning of this month I was invited to go to Bucharest, Romania, to give a talk on GNOME at this year’s def.camp. The conference seems to be an established event in the Romanian security community and was organised quite well. As I said in my talk, I was happy to be there to tell those people about Free Software. I saw many people running around with their proprietary systems. It seems that certain parts of the security community do not believe that the security of a system greatly increases when it’s based on Free Software. In fairness, the event seemed to be a bit on the suit-and-tie side, where Windows is probably much more common than people want.

Andrei Avădănei opened the conference by saying how happy he was that, even at that unholy hour (09:00 in the morning…), he counted 1100 people from 30 countries, and he expected that number to grow over the following hours. It didn’t feel that big, but the three halls were quite large indeed. One of those halls was the “hacking village”, in which participants could practise real-life “problem solving skills”. The hacking village was more of an expo where vendors had their booths, but it also had some interesting security challenges. My favourite booth was the Virtual Reality demo. Someone brought an HTC VR system and people could play a simple game. I’ve tried an Oculus Rift before, in which I rode a roller coaster. With the HTC system, I also had some input methods, which really enhanced the experience. Very immersive.

Anyway, Andrei mentioned how happy he was to have the biggest security event in Romania be very grassroots- and community-driven. Unfortunately, he then let a representative from Orange, the main sponsor, talk. Of course, you cannot run a big event like that without enough financial backing. But giving the main stage, the prime opening spot, to the main sponsor does not leave the impression that they are community-driven… I expected the first talk after the opening to set the theme for the conference. In this case, it was a commercial. That doesn’t actually fit the conference too badly, because out of the 32 talks I counted 13 (or 40%) being delivered by sponsors. With sponsors I mean all companies listed on the homepage for their support. It may very well be that I am mistaking grassrooty supporters for commercial sponsors.

The Orange CTO mentioned that connectivity is the new electricity, shaping countries and communities. For a telco like them, ensuring connectivity means maintaining security, he said. The Internet of connected devices (IoT) is growing exponentially and so are the threats. Orange has to invest in order to maintain security for its clients. And they do, it seems. He showed a fancy-looking “threat map” which showed attacks in real time. Probably a Snort (or whatever IDS is currently en vogue) with a map showing arrows from Geo-IP locations pointing towards Romania.

Next up was Jason Street who talked about how he failed doing his job. He was a blue team security guy, he said, and worked for a bank as security information officer. He was seen by the people as the bad guy making your life dreadful. That was bad, he said, because he didn’t teach the people the values and usefulness of information security. Instead he taught them that they better not get to meet him. The better approach, he said, is trying to be part of a solution not looking for problems. Empower the employees in what information security is doing or trying to do. It was a very entertaining presentation given by a very engaged speaker. I couldn’t get so much from the content though.

Vlad from Orange talked about their challenges in providing an open, easy-to-use, and yet secure WiFi infrastructure. He spoke about user expectations and business requirements. Users expect to be able to just connect without much hassle. The business wants to identify the user and authorise usage. It was mainly high-level, except for a few authentication protocol runs. He mentioned EAP-SIM and EAP-AKA as more seamless authentication protocols compared to, say, a captive Web portal. I didn’t know that it’s possible to use the perfectly valid shared secret in your SIM for authentication. It makes perfect sense. Even more so for a telco such as Orange.

Mihai from Bitdefender talked about browser instrumentation for exploit analysis. That means, as I found out after the talk, harnessing the browser’s internals to analyse malicious payloads. He showed how your browser (well… Internet Explorer with Flash) is exploited nowadays. He ran a “Cerber” demo, exploiting an Internet Explorer with some exploit kit. He showed Fiddler and Process Explorer, which displayed the HTTP traffic and the spawned processes. After visiting a simple Web page the malicious payload was delivered, exploited the IE, and finally crashed it. The traffic in Fiddler revealed that the malware was delivered via a crafted Flash program. He used a Flash decompiler to look at the files. But he didn’t really find the exploit itself, probably because of some obfuscation. So what is the actual exploit? In order to answer that properly, you need to inspect the memory during runtime, he said. That’s where browser instrumentation comes into play. I think he interposed several functions, such as document.write, eval, object parameters, Flash’s LoadBytes, etc. to analyse what goes in and out. All that information was then saved to disk in separate files, i.e. everything that went to document.write was written to c:\share\document.write, everything that Flash’s LoadBytes took was written to c:\shared\loadbytes. He showed another demo with the Sundown exploit delivery framework, which successfully exploited his browser. He then showed the filesystem containing the above-mentioned information, which made it easier to spot the actual exploit and shellcode. To prevent such exploits, he recommended using Windows 10 and browsers other than Internet Explorer. Also, he recommended using AdBlock to stop “malvertising”. That is in line with what I recommended several moons ago when analysing embedded JavaScripts being vulnerable to DOM-based XSS. The method is also very similar to what I used back in the day when hacking on Chromium and V8, so I found the presentation quite good. Except for the speaker :-/ He was often looking at his slides with his back to the audience, and the audio wasn’t really good. I respect him for having shown multiple demos with virtual machine snapshots. I wouldn’t have done it, because demos usually fail! ;-)

Inbar Raz talked about Tinder bots. He said he was surprised to find so many “matches” when being in Sweden. He quickly noticed that he was chatted up by bots, though, because he got sent the very same message from different profiles. These profiles also don’t necessarily make sense. For example, the name and the age shown on the Tinder profile did not match the linked Instagram or Facebook profiles. The messages he received quickly included a link to a dodgy Web site. When asking whois about the ownership he found many more shady domains being used for dragging people to porn sites. The technical details weren’t overly elaborate, but the talk was quite entertaining.

Raul Alvarez talked about reverse engineering polymorphic ransomware. I think he mentioned those Locky-type pieces of malware which lock your computer or files. Now you might want to know how that malware actually works. He mentioned OllyDbg, Immunity Debugger, and x64dbg as tools to use for reverse engineering your files. He said that malware typically includes an unpacker which you need to survive first before you’re able to see the actual malware. He mentioned on-demand polymorphic functions which are being called during the unpacking stage. I guess that the unpacker decrypts or uncompresses to different bytes every time it’s run. The randomness comes from the RDTSC call, he said. The way I understand that mechanism, the unpacker only modifies a few bytes at a time and potentially modifies irrelevant bytes. Imagine code that jumps over a few bytes. These bytes could be anything, because they are never used, let alone executed. But I’m not sure whether this is indeed the gist of what he described in a rather complicated fashion. His recommendation for dealing with metamorphic code is to catch it right when it finishes decrypting the payload. I think everybody wishes to be able to do that indeed… He presented a general method for getting rid of malware once it has hit you: start in safe mode and remove suspicious registry entries for the “run” key. That might not be interesting to Windows people, but now I, being very ignorant about Windows, have learned something :-)

Chris went on to talk about securing a mobile cryptocoin wallet. If you ask me, he really meant how to deal with the limitations of the platform of his choice, the iPhone. He said that sometimes it is very hard to navigate the solution space, because businesses are not necessarily compatible with blockchains. He explained some currencies like Bitcoin, Stellar, Ripple, Zcash, and Ethereum. The latter is much more flexible, because it can also encode contracts like "in the event of X, transfer Y amount of money to account Z". Financial institutions want to keep their ledgers private, but blockchains were designed to run in public, he said. In addition, trust between financial institutions is low. Bitcoin is hard to use, he said, because the cryptography itself is hard to understand and to use. A balance has to be struck between usability and security. Secrets, he said, need to be kept secret. I guess he means that nobody, not even the user, may access the secret an application needs. I fundamentally disagree. I agree that secrets need to be kept as securely as possible, but secrets must not be known by anyone other than the users who are supposed to benefit from them. If some other entity controls my secret, I am not really in control. Anyway, he looked at existing Bitcoin wallet applications: Bither and Breadwallet. He thinks that the state of the art can be improved if you are willing to break the existing protocol. More precisely, he wants to leverage the "security hardware" present in current mobile devices, like biometric sensors or "enclaves" in modern CPUs, to perform the operations based on a secret stored unextractably in hardware. With such an enclave, he wants to generate a key there and use it to sign data without the key ever leaving the enclave. You need to change the protocol, he said, because Apple's enclave uses secp256r1, but Bitcoin uses secp256k1.
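To illustrate that curve mismatch, here is a small sketch of my own, not code from the talk. It assumes Node.js plus the third-party elliptic package (npm install elliptic):

```typescript
import { ec as EC } from 'elliptic';
import { createHash } from 'crypto';

const r1 = new EC('p256');      // secp256r1, the only curve the enclave signs on
const k1 = new EC('secp256k1'); // the curve Bitcoin's consensus rules require

// In reality this key pair would be generated inside the enclave and never leave it.
const enclaveKey = r1.genKeyPair();
const walletKey = k1.genKeyPair(); // what a standard Bitcoin wallet expects instead

const msgHash = createHash('sha256').update('transaction bytes').digest();
const sig = enclaveKey.sign(msgHash);

console.log(enclaveKey.verify(msgHash, sig)); // true, but only on P-256

// A Bitcoin node verifies on secp256k1, a curve with different parameters,
// so the enclave's P-256 public key is not even a valid secp256k1 key and
// the signature can never check out. Hence the need to change the protocol.
```

The two curves differ in their underlying parameters, so no amount of cleverness on the wallet side bridges them: either the enclave learns secp256k1 or the protocol learns to accept secp256r1 signatures.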


My own talk went reasonably well, I think. I am not super happy, but happy enough. But I've realised a few times now that I left out things I wanted to mention, or how I could have better explained what I wanted. Then again, being perfect would be boring, so better to leave some room for improvement ;-) I talked about how I think GNOME is a good vendor of security software. Its focus on user experience is its big advantage. The system should make informed decisions on the user's behalf as much as possible and involve the user as little as possible. Security should be an inherent feature, not something that you need to actively care about. I expected a more extreme reaction from the security-focused audience, but it seemed people mostly agreed. In my mind, "these security people" equate security with maximum control placed in users' hands, which has to manifest itself in being able to control each and every aspect of a solution. That view is not compatible with trying to leave the user out of the security equation. It may be that I am doing "these security people" an injustice. Or that they have changed. Or simply that the audience was not composed of the people I thought they were. I was hoping for developers creating security software, and I mentioned that GNOME libraries would serve their tasks well. Let's see whether anyone actually takes my word for it and complains to me ;-)

Matt Suiche followed "the money of security companies, IPOs, and M&A". In 2016, he said, the situation is not very different from the 90s: software still has bugs, bad configuration is still a problem, default passwords are still being used… The number of newly founded infosec companies reported by Crunchbase has risen a lot, he said, and adding up the money involved, about 40 billion USD has been raised since 1998. What's different nowadays, according to him, is that people in infosec are now more business-oriented rather than technically oriented. We have more "cyber" now; he referred to the buzzwords being spread. We also have bug bounty programmes luring people into reporting vulnerabilities. For example, JP Morgan is spending half a billion USD on cyber security, he said. Interestingly, he showed that the number of vulnerabilities, i.e. RCE CVEs, has increased, but the number of actual exploitations within the first 30 days after a patch has decreased. He concluded that Microsoft has become more efficient at mitigating vulnerabilities. I think you can also draw other conclusions, like people caring less about exploitation, or detection of exploitation having gotten worse. He said that the cost of an exploit has increased: it wasn't long ago that you could cook up an exploit within two weeks, whereas now you need several people for at least three months. It was a well-made talk, but a bit too fluffy for my taste.

Stefan and David from Kaspersky talked off the record (i.e. without recordings) about "real-world lessons about spies every security researcher should know". They have been around the industry for more than a decade, and they have started to notice patterns, they said: patterns of weird things that happen which might not be easy to explain at first. It all begins with realising that we live in a world, whether we want it or not, where we have a certain influence over the success of espionage attacks. Right now, people reverse engineer malware, which means that other people's operations are being disrupted. In fact, they claimed that they reverse engineer and identify the world's most advanced persistent threats, like Duqu, Flame, Hellsing, and many others, and that their company is getting better and better at identifying other people's operations. For the first time in history, they said, we as geeks have an influence on espionage. That makes some entities not very happy, and they send certain people to visit you.

These people come in various types. The profile of a typical government employee is that they are very open and blunt about their desires; mostly, they employ patriotism to persuade you. Another type is the impersonator, they said. That actor is not perfectly honest with you. He gave an example of meeting another person who introduced themselves with the very same name as his. It got strange, he said, when he met that person on a different continent a few months later and was offered a highly paid training gig, supposedly only to build up a relationship. These people have enough budget to get close to you, they said. Another type of attacker is the "Banya Girl". Geeks who have sat most of their lives in front of a computer are easily attracted by girls, they said, which makes it easier for such an attacker to get into your room or your brain. His example took place one year ago: he was analysing satellite-exploiting malware, later known as Turla, when he met this super beautiful girl in the hotel who sat there every day when he went to the sauna. The day they released the results about Turla, they went for dinner together, and she listened to a phone call he had with a reporter. The girl said something like "funny that you call it Turla. We call it Uroboros". Then he got suspicious and asked her who "they" are. She came up with stories he found weird and seemed to be convinced that she knew more than she was willing to reveal.

In general, they said, asking for a selfie or sending a Facebook friend request can be an effective countermeasure against someone spying on you. You might very well ask what to do when you think you're targeted. It's probably best to do nothing, they said. It's their game; you'd better not start playing it, even if you wake up in the middle of it. You can try to take care of your OpSec to protect against certain data being collected or exfiltrated. After all, people are being killed based on metadata. But you should also try not to get yourself into trouble. Sex and money are probably the oldest weapons people employ against you. They also encouraged people to define trust and boundaries for existing and upcoming relationships. If you become too paranoid, you've already lost the battle, they said. Keep going to conferences, keep meeting people, and don't close yourself off.

Those were two busy days in Bucharest. I'm happy to have gone, and I hope I will have another chance to visit the lovely city :-) By that time the links in this post will probably be broken ;-) I recommended using the "archive" URLs, i.e. https://def.camp/archives/2016/, already now, but nobody is listening to me… I also cannot link to the individual talks, because the schedule page is relatively click-intensive, i.e. not deep-linkable :-(

Travis needs our help

Long-time GNOME contributor and Foundation member Travis Reitter had a medical emergency earlier this month. You might consider donating to the GoFundMe campaign to help him and his family out during the marathon to recovery.

Let’s remind everyone what GNOME is about, people!
