- Tidied up, and set up hardware variously; prodded routers etc. Mail chew, lunch, customer call. Registered myself and Giovani for OSCON EU in London Monday/Tuesday - to run a LibreOffice booth - come and see us.
October 14, 2016
October 13, 2016
- Into Cambridge by train, C'bra Quarterly mgmt meetings, lunch, ESC call, out for dinner in the evening with the lads & lasses - fun. Back to the office, played with the vive in a larger space; packed up servers to re-locate them via Taxi with Matt; home, bed v. late.
In recent months, I’ve found myself discussing the pros and cons of different approaches used for building complete operating systems (desktops or appliances), or let’s say software build topologies. What I’ve found is that I frequently lack the vocabulary to categorize existing build topologies, or to describe some common characteristics of build systems and the decisions and tradeoffs which various projects have made. This is mostly just a curiosity piece; a writeup of some of my observations on different build topologies.
Self Hosting Build Topologies
Broadly, one could say that the vast majority of build systems use one form or another of self hosting build topology. We use this term to describe tools which build themselves; Wikipedia says that self hosting is:
the use of a computer program as part of the toolchain or operating system that produces new versions of that same program
While this term does not accurately describe a category of build topology, I’ve been using it loosely to describe build systems which use software installed on the host to build the source for that same host; it’s a pretty good fit.
Within this category there are, as far as I can observe, two separate topologies in use; let’s call these the Mirror Build and the Sequential Build, for lack of any existing terminology I can find.
The Mirror Build
This topology is one where the system has already been built once, either on your computer or another one. This build process treats the bootstrapping of an operating system as an ugly and painful process for the experts, only to be repeated when porting the operating system to a new architecture.
The basic principle here is that once you have an entire system that is already built, you can use that entire system to build a complete new set of packages for the next version of that system. Thus the next version is a sort of reflection of the previous version.
One of the negative results of this approach is that circular dependencies tend to crop up unnoticed, since you already have a complete set of the last version of everything. For example: it’s easy enough to have perl require autotools to build, even though you needed perl to build autotools in the first place. This doesn’t matter because you already have both installed on the host.
Of course circular dependencies become a problem when you need to bootstrap a system like this for a new architecture, and so you end up with projects like this one, specifically tracking down circular dependencies which have crept in, to ensure that a build from scratch actually remains possible.
One common characteristic of build systems which are based on the Mirror Build is that they are usually largely non-deterministic. Usually, whatever tool and library versions happen to be lying around on the system can be used to build a new version of a given module, so long as each dependency of that module is satisfied. A dependency here is usually quite loosely specified, as a lower minimal bound: the oldest version of foo which can possibly be used to build or link against will suffice to build bar.
This Mirror Build is historically the most popular, born of the desire to allow the end user to pick up some set of sources and compile the latest version, while encountering the least resistance to do so.
While the famous RPM and Debian build systems have their roots in this build topology, it’s worth noting that the surrounding tooling has since evolved to build RPMs or Debian packages under a different topology. For instance, when using OBS to build RPMs or Debian packages, each package is built in sequence, staging into a minimal VM only the dependencies which the next package needs from previous builds. Since we are bootstrapping often, and isolating the environment for each build to occur in sequence from a predefined manifest of specifically versioned packages, the build is much more deterministic and becomes a Sequential Build instead.
The Sequential Build
The Sequential Build, again for the lack of any predefined terminology I can find, is one where the entire OS can be built from scratch. Again and again.
The LFS build, without any backing build system per se, I think is a prime example of this topology.
This build can still be said to be self hosting; indeed, one previously built package is used to build the next package in sequence. Aside from the necessary toolchain bootstrapping, the build host where all the tools are executed is also the target where all the software is intended to run. The distinction I make here is that only packages (and those package versions) which are part of the resulting OS are ever used to build that same OS. A strict order must therefore be enforced, and in some cases the same package needs to be built more than once to achieve the end result; however, determinism is favored.
It’s also noteworthy that this common property, where host = target, is what most project build scripts generally expect, while cross compiles (more on that below) typically have to struggle and force things to build in some contorted way.
While the Ports, Portage, and Pacman build systems, which encourage the build to occur on your own machine, seem to lend themselves better to the Sequential Build, this only seems to be true at bootstrap time (I would need to look more closely into these systems to say more). Also, these systems are not without their own set of problems. With Gentoo’s Portage, one can fall into circular dependency traps where one needs to build a given package twice while tweaking the USE flags along the way. Also with Portage, package dependencies are not strictly defined, but again loosely defined as lower minimal bounds.
I would say that a Sequential Self Hosted Build lends itself better to determinism and repeatability, but a build topology which is sequential is not inherently deterministic.
Cross Compiles
The basic concept of Cross Compiling is simple: use a compiler that runs on the host and outputs binaries to be run later on the target.
But the activity of cross compiling an entire OS is much more complex than just running a compiler on your host and producing binary output for a target.
Direct Cross Build
It is theoretically possible to compile everything for the target using only host tools and a host installed cross compiler, however I have yet to encounter any build system which uses such a topology.
This is probably primarily because it would require that many host installed tools be sysroot aware beyond just the cross compiler. Hence we resort to a Multi Stage Cross Build.
Multi Stage Cross Build
This Multi Stage Cross Build, which can be observed in projects such as Buildroot and Yocto, shares some common ground with the Sequential Self Hosted Build topology, except that the build is run in multiple stages.
In the first stage, all the tools which might be invoked during the cross build are built into a sysroot prefix for host-runnable tooling. This is where you will find your host -> target cross compiler along with autotools, pkg-config, flex, bison, and basically every tool you may need to run on your host during the build. The tools installed in this host tooling sysroot are specially configured so that when they are run, they find their comrades in the same sysroot, but look for other payload assets (like shared libraries) in the eventual target sysroot.
Only after this stage, which may have involved patching some tooling to make it behave well for the next stage, do we really start cross compiling.
In the second stage we use only tools built into the toolchain’s sysroot to build the target, starting by cross compiling a C library and a native compiler for the target architecture.
Aside from this defining property, that a cross compile is normally done in separate stages, there is the detail that pretty much everything under the sun besides the toolchain itself (which must always support bootstrapping and cross compiling) needs to be coerced into cooperation with added obscure environment variables, or sometimes beaten into submission with patches.
Virtual Cross Build
While a cross compile will always be required for the base toolchain, I am hopeful that with modern emulation, tools like Scratchbox 2, and approaches such as Aboriginal Linux, we can ultimately abolish the Multi Stage Cross Build topology entirely. The added work involved in maintaining build scripts which are cross build aware, and the constant friction with downstream communities which insist on cross building upstream software, is just not worth the effort when a self hosting build can be run in a virtual environment.
Some experimentation already exists: the Mer Project was successful in running OBS builds inside a Scratchbox 2 environment to cross compile RPMs without having to deal with the warts of traditional cross compiles. I also did some experimentation this year building the GNU/Linux/GNOME stack with Aboriginal Linux.
This kind of virtual cross compile does not constitute a unique build topology since it in fact uses one of the Self Hosting topologies inside a virtual environment to produce a result for a new architecture.
Finally
In closing, there are certainly a great variety of build systems out there, all of which have made different design choices and share common properties. Not much vocabulary exists to describe these characteristics. This suggests that the area of building software remains somewhat unexplored, and that the tooling we use for such tasks is largely born of necessity, barely holding together with lots of applied duct tape. With interesting new developments for distributing software such as Flatpak, and studies into how to build software reliably and deterministically, such as the reproducible builds project, hopefully we can expect some improvements in this area.
I hope you’ve enjoyed my miscellaneous ramblings of the day.
The past month's report will be short. Indeed, Aryeom sprained the thumb of her drawing hand, as we mentioned a month ago. What we did not plan on is that it would take this long to get better (the doctor initially said it should heal within 2 weeks… well, she was wrong!). Aryeom actually tried to work again after the 2-week rest (i.e. following the doctor's advice), but after a few days of work the pain was pretty bad and she had to stop.
Later, Aryeom started working with her left hand. Below is her first drawing with it:

Left-hand drawing by Aryeom for Simwoool magazine
I personally think it is very cool, but she says it is not good enough for professional work. She is also several times slower with this hand for the moment. Still, for ZeMarmot, she has started animating again with her left hand (wouhou!), though without doing the final painting/rendering; she is waiting for her right hand to get better for that.
In the meantime, she has regular sessions with a physiotherapist, and on Friday she will have an X-ray of the hand to make sure everything is OK (since the pain has lasted longer than expected).
Because of this, the month was slow. We also decided to decline a few conferences, in particular the upcoming Capitole du Libre, quite a big event in France in November, because we wanted to focus on ZeMarmot instead, especially given the delay this sprain has caused in the schedule. We will likely not participate in any public event until next year.
Now is probably the time when your support matters more than ever, because it has been pretty hard, on Aryeom in particular, as you can guess. When your hand is your main work tool, you can imagine how it feels to have such an issue. :-/
Do not hesitate to send her a few nice words through comments!
Next month, hopefully the news will be a lot better.
October 12, 2016
We aren't just making code. We are working in a shared workplace, even if it's an online place rather than a physical office or laboratory, making stuff together. The work includes not just writing functions and classes, but experiments and planning and coming up with "we ought to do this" ideas. And we try to make it so that anyone coming into our shared workplace -- or anyone who's working on a different part of the project than they're already used to -- can take a look at what we've already said and done, and reuse the work that's been done already.
We aren't just making code. We're making history. And we're making a usable history, one that you can use, and one that the contributor next year can use.
So if you're contributing now, you have to learn to learn from history. We put a certain kind of work in our code repositories, both code and notes about the code. git grep idea searches a code repository's code and comments for the word "idea", git log --grep="idea" searches the commit history for times we've used the word "idea" in a commit message, and git blame codefile.py shows you who last changed every line of that codefile, and when. And we put a certain kind of work into our conversations, in our mailing lists and our bug/issue trackers. We say "I tried this and it didn't work" or "here's how someone else should implement this" or "I am currently working on this". You will, with practice, get better at finding and looking at these clues, at finding the bits of code and conversation that are relevant to your question.
And you have to learn to contribute to history. This is why we want you to ask your questions in public -- so that when we answer them, someone today or next week or next year can also learn from the answer. This is why we want you to write emails to our mailing lists where you explain what you're doing. This is why we ask you to use proper English when you write code comments, and why we have rules for the formatting and phrasing of commit messages, so it's easier for someone in the future to grep and skim and understand. This is why a good question or a good answer has enough context that other people, a year from now, can see whether it's relevant to them.
Relatedly: the scientific method is for teaching as well as for troubleshooting. I compared an open source project to a lab before. In the code work we do, we often use the scientific method. In order for someone else to help you, they have to create, test, and prove or disprove theories -- about what you already know, about what your code is doing, about the configuration on your computer. And when you see me asking a million questions, asking you to try something out, asking what you have already tried, and so on, that's what I'm doing. I'm generally using the scientific method. I'm coming up with a question and a hypothesis and I'm testing it, or asking you to test it, so we can look at that data together and draw conclusions and use them to find new interesting questions to pursue.
Example:
- Expected result: doing run-dev.py on your machine will give you the same results as on mine.
- Actual observation: you get a different result, specifically, an error that includes a permissions problem.
- Hypothesis: the relevant directories or users aren't set up with the permissions they need.
- Next step: Request for further data to prove or disprove hypothesis.
In our coding work, it's a shared responsibility to generate hypotheses and to investigate them, to put them to the test, and to share data publicly to help others with their investigations. And it's more fruitful to pursue hypotheses, to ask "I tried ___ and it's not working; could the reason be this?", than it is to merely ask "what's going on?" and push the responsibility of hypothesizing and investigation onto others.
This is a part of balancing self-sufficiency and interdependence. You must try, and then you must ask. Use the scientific method and come up with some hypotheses, then ask for help -- and ask for help in a way that helps contribute to our shared history, and is more likely to help ensure a return-on-investment for other people's time.
So it's likely to go like this:
- you try to solve your problem until you get stuck, including looking through our code and our documentation, then start formulating your request for help
- you ask your question
- someone directs you to a document
- you go read that document, and try to use it to answer your question
- you find you are confused about a new thing
- you ask another question
- now that you have demonstrated that you have the ability to read, think, and learn new things, someone has a longer talk with you to answer your new specific question
- you and the other person collaborate to improve the document that you read earlier :-)
This helps us make a balance between person-to-person discussion and documentation that everyone can read, so we save time answering common questions but also get everyone the personal help they need. This will help you understand the rhythm of help we provide in livechat -- including why we prefer to give you help in public mailing lists and channels, instead of in one-on-one private messages or email. We prefer to hear from you and respond to you in public places so more people have a chance to answer the question, and to see and benefit from the answer.
We want you to learn and grow. And your success is going to include a day when you see how we should be doing things better, not just with a new feature or a bugfix in the code, but in our processes, in how we're organizing and running the lab. I also deeply want for you to take the lessons you learn -- about how a group can organize itself to empower everyone, about seeing and hacking systems, about what scaffolding makes people more capable -- to the rest of your life, so you can be freer, stronger, a better leader, a disruptive influence in the oppressive and needless hierarchies you encounter. That's success too. You are part of our history and we are part of yours, even if you part ways with us, even if the project goes defunct.
This is where I should say something about not just making a diff but a difference, or something about the changelog of your life, but I am already super late to go on my morning jog and this was meant to be a quick-and-rough braindump anyway...
I have lately been in the market for better concurrency facilities in Guile. I want to be able to write network servers and peers that can gracefully, elegantly, and efficiently handle many tens of thousands of clients and other connections, but without blowing the complexity budget. It's a hard nut to crack.
Part of the problem is implementation, but a large part is just figuring out what to do. I have often thought that modern musicians must be crushed under the weight of recorded music history, but it turns out in our humble field that's also the case; there are as many concurrency designs as languages, just about. In this regard, what follows is an incomplete, nuanced, somewhat opinionated history of concurrency facilities in programming languages, with an eye towards what I should "buy" for the Fibers library I have been tinkering on for Guile.
Modern machines have the raw capability to serve hundreds of thousands of simultaneous long-lived connections, but it’s often hard to manage this at the software level. Fibers tries to solve this problem in a nice way. Before discussing the approach taken in Fibers, it’s worth spending some time on history to see how we got here.
One of the most dominant patterns for concurrency these days is “callbacks”, notably in the Twisted library for Python and the Node.js run-time for JavaScript. The basic observation in the callback approach to concurrency is that the efficient way to handle tens of thousands of connections at once is with low-level operating system facilities like poll or epoll. You add all of the file descriptors that you are interested in to a “poll set” and then ask the operating system which ones are readable or writable, as appropriate. Once the operating system says “yes, file descriptor 7145 is readable”, you can do something with that socket; but what? With callbacks, the answer is “call a user-supplied closure”: a callback, representing the continuation of the computation on that socket.
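The callback pattern described above can be sketched in a few lines of Python, whose standard selectors module uses epoll where available. This is an invented illustrative echo server, not code from any of the libraries mentioned; the names and port are made up.

```python
# A minimal sketch of callback-oriented concurrency: register a
# callback per file descriptor, then dispatch whichever ones the
# OS reports as ready.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # Callback: the listening socket is readable, so accept won't block.
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    # Register the continuation for this connection: another callback.
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    # Callback: this connection is readable, so recv won't block.
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # echo; a real server would watch EVENT_WRITE too
    else:
        sel.unregister(conn)
        conn.close()

def serve(port):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:
        # Ask the OS which descriptors are ready, then call the
        # user-supplied closure registered for each one.
        for key, _mask in sel.select():
            key.data(key.fileobj)
```

Note how the logic for one connection is already split across two callbacks; in a real protocol with several reads and writes per request, that fragmentation is exactly the inversion of control flow discussed below.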
Building a network service with a callback-oriented concurrency system means breaking the program into little chunks that can run without blocking. Wherever a program could block, instead of just continuing the program, you register a callback. Unfortunately this requirement permeates the program, from top to bottom: you always pay the mental cost of inverting your program’s control flow by turning it into callbacks, and you always incur the run-time cost of closure creation, even when the particular I/O could proceed without blocking. It’s a somewhat galling requirement, given that this contortion is required of the programmer, but could be done by the compiler. We Schemers demand better abstractions than manual, obligatory continuation-passing-style conversion.
Callback-based systems also encourage unstructured concurrency, as in practice callbacks are not the only path for data and control flow in a system: usually there is mutable global state as well. Without strong patterns and conventions, callback-based systems often exhibit bugs caused by concurrent reads and writes to global state.
Some of the problems of callbacks can be mitigated by using “promises” or other library-level abstractions; if you’re a Haskell person, you can think of this as lifting all possibly-blocking operations into a monad. If you’re not a Haskeller, that’s cool, neither am I! But if your typey spidey senses are tingling, it’s for good reason: with promises, your whole program has to be transformed to return promises-for-values instead of values anywhere it would block.
An obvious solution to the control-flow problem of callbacks is to use threads. In the most generic sense, a thread is a language feature which denotes an independent computation. Threads are created by other threads, but fork off and run independently instead of returning to their caller. In a system with threads, there is implicitly a scheduler somewhere that multiplexes the threads so that when one suspends, another can run.
In practice, the concept of threads is often conflated with a particular implementation: kernel threads. Kernel threads are very low-level abstractions that are provided by the operating system. The nice thing about kernel threads is that they can use any CPU that the kernel knows about. That’s an important factor in today’s computing landscape, where Moore’s law seems to be giving us more cores instead of more gigahertz.
However, as a building block for a highly concurrent system, kernel threads have a few important problems.
One is that kernel threads simply aren’t designed to be allocated in huge numbers, and instead are more optimized to run in a one-per-CPU-core fashion. Their memory usage is relatively high for what should be a lightweight abstraction: some 10 kilobytes at least and often some megabytes, in the form of the thread’s stack. There are ongoing efforts to reduce this for some systems but we cannot expect wide deployment in the next 5 years, if ever. Even in the best case, a hundred thousand kernel threads will take at least a gigabyte of memory, which seems a bit excessive for book-keeping overhead.
Kernel threads can be a bit irritating to schedule, too: when one thread suspends, it’s for a reason, and it can be that user-space knows a good next thread that should run. However because kernel threads are scheduled in the kernel, it’s rarely possible for the kernel to make informed decisions. There are some “user-mode scheduling” facilities that are in development for some systems, but again only for some systems.
The other significant problem is that building non-crashy systems on top of kernel threads is hard to do, not to mention “correct” systems. It’s an embarrassing situation. For one thing, the low-level synchronization primitives that are typically provided with kernel threads, mutexes and condition variables, are not composable. Also, as with callback-oriented concurrency, one thread can silently corrupt another via unstructured mutation of shared state. It’s worse with kernel threads, though: a kernel thread can be interrupted at any point, not just at I/O. And though callback-oriented systems can theoretically operate on multiple CPUs at once, in practice they don’t. This restriction is sometimes touted as a benefit by proponents of callback-oriented systems, because in such a system, the callback invocations have a single, sequential order. With multiple CPUs, this is not the case, as multiple threads can run at the same time, in parallel.
Kernel threads can work. The Java virtual machine does at least manage to prevent low-level memory corruption and to do so with high performance, but still, even Java-based systems that aim for maximum concurrency avoid using a thread per connection because threads use too much memory.
In this context it’s no wonder that there’s a third strain of concurrency: shared-nothing message-passing systems like Erlang. Erlang isolates each thread (called a process in the Erlang world), giving each its own heap and “mailbox”. Processes can spawn other processes, and the concurrency primitive is message-passing. A process that tries to receive a message from an empty mailbox will “block”, from its perspective. In the meantime the system will run other processes. Message sends never block, oddly; instead, sending to a process with many messages pending makes it more likely that Erlang will pre-empt the sending process. It’s a strange tradeoff, but it makes sense when you realize that Erlang was designed for network transparency: the same message send/receive interface can be used to send messages to processes on remote machines as well.
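As a rough illustration (in Python, not Erlang), the mailbox model can be mimicked with threads and unbounded queues: a receive blocks on an empty mailbox, while a send never blocks. All names here are invented for the sketch; real Erlang processes are far lighter than OS threads.

```python
# Toy shared-nothing "processes": each one is a thread with a
# private mailbox, and the only way to interact is message-passing.
import queue
import threading

class Process:
    def __init__(self, behavior):
        self.mailbox = queue.Queue()          # unbounded: sends never block
        self.thread = threading.Thread(target=behavior, args=(self,),
                                       daemon=True)
        self.thread.start()

    def send(self, message):
        self.mailbox.put(message)             # returns immediately

    def receive(self):
        return self.mailbox.get()             # blocks while mailbox is empty

def echoer(self):
    # Behavior: wait for (reply_queue, payload) pairs and echo them back.
    while True:
        reply_to, payload = self.receive()
        reply_to.put(("echo", payload))

# usage:
replies = queue.Queue()
p = Process(echoer)
p.send((replies, "hi"))
print(replies.get())    # -> ('echo', 'hi')
```

The sketch deliberately shares nothing but the queues themselves; in Erlang even that sharing is hidden, since message payloads are copied (or immutable), which is what makes the remote case look the same as the local one.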
No network is truly transparent, however. At the most basic level, a network send will always be much slower than a local send. Whereas a message sent to a remote process has to be written out byte-by-byte over the network, there is no need to copy immutable data within the same address space. The complexity of a remote message send is O(n) in the size of the message, whereas a local immutable send is O(1). This suggests that hiding these different complexities behind one operator is the wrong thing to do. And indeed, given byte read and write operators over sockets, it’s possible to implement remote message send and receive as a process that serializes and parses messages between a channel and a byte sink or source. In this way we get cheap local channels, and network shims are under the programmer’s control. This is the approach that the Go language takes, and is the one we use in Fibers.
Structuring a concurrent program as separate threads that communicate over channels is an old idea that goes back to Tony Hoare’s work on “Communicating Sequential Processes” (CSP). CSP is an elegant tower of mathematical abstraction whose layers form a pattern language for building concurrent systems that you can still reason about. Interestingly, it does so without any concept of time at all, instead representing a thread’s behavior as a trace of instantaneous events. Threads themselves are like functions that unfold over the possible events to produce the actual event trace seen at run-time.
This view of events as instantaneous happenings extends to communication as well. In CSP, one communication between two threads is modelled as an instantaneous event, partitioning the traces of the two threads into “before” and “after” segments.
Practically speaking, this has ramifications in the Go language, which was heavily inspired by CSP. You might think that a channel is just an asynchronous queue that blocks when writing to a full queue, or when reading from an empty queue. That’s a bit closer to the Erlang conception of how things should work, though as we mentioned, Erlang simply slows down writes to full mailboxes rather than blocking them entirely. However, that’s not what Go and other systems in the CSP family do; sending a message on a channel will block until there is a receiver available, and vice versa. The threads are said to “rendezvous” at the event.
Unbuffered channels have the interesting property that you can select between sending a message on channel a or channel b, and in the end only one message will be sent; nothing happens until there is a receiver ready to take the message. In this way messages are really owned by threads and never by the channels themselves. You can of course add buffering if you like, simply by making a thread that waits on either sends or receives on a channel, and which buffers sends and makes them available to receives. It’s also possible to add explicit support for buffered channels, as Go, core.async, and many other systems do, which can reduce the number of context switches as there is no explicit buffer thread.
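To make the rendezvous behavior concrete, here is a toy unbuffered channel in Python: send blocks until a receiver arrives, and receive blocks until a sender arrives. This is a sketch of the semantics only, invented for this post; it is not how Go or Fibers actually implement channels.

```python
# A toy rendezvous (unbuffered) channel built from two semaphores.
import threading

class Channel:
    def __init__(self):
        self._send_lock = threading.Lock()        # one sender at a time
        self._item_ready = threading.Semaphore(0)
        self._item_taken = threading.Semaphore(0)
        self._item = None

    def send(self, value):
        with self._send_lock:
            self._item = value
            self._item_ready.release()   # announce the value...
            self._item_taken.acquire()   # ...and block until it is taken

    def receive(self):
        self._item_ready.acquire()       # block until a sender arrives
        value = self._item
        self._item_taken.release()       # unblock the sender: rendezvous!
        return value

# usage: a sender thread rendezvouses with the main thread
ch = Channel()
threading.Thread(target=lambda: ch.send("ping"), daemon=True).start()
print(ch.receive())    # -> ping
```

Because the sender stays blocked until the hand-off, the message is never "in" the channel: it moves directly from one thread to the other, which is exactly the property that makes select-style choice well-defined.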
Whether to buffer or not to buffer is a tricky choice. It’s possible to implement singly-buffered channels in a system like Erlang via an explicit send/acknowledge protocol, though it seems difficult to implement completely unbuffered channels. As we mentioned, it’s possible to add buffering to an unbuffered system by introducing explicit buffer threads. In the end, though, in Fibers we follow CSP’s lead so that we can implement the nice select behavior that we mentioned above.
As a final point, select is OK but is not a great language abstraction. Say you call a function and it returns some kind of asynchronous result which you then have to select on. It could return this result as a channel, and that would be fine: you can add that channel to the other channels in your select set and you are good. However, what if what the function does is receive a message on a channel, then do something with the message? In that case the function should return a channel, plus a continuation (as a closure or something). If select results in a message being received over that channel, then we call the continuation on the message. Fine. But, what if the function itself wanted to select over some channels? It could return multiple channels and continuations, but that becomes unwieldy.
What we need is an abstraction over asynchronous operations, and that is the main idea of a CSP-derived system called “Concurrent ML” (CML). Originally implemented as a library on top of Standard ML of New Jersey by John Reppy, CML provides this abstraction, which in Fibers is called an operation1. Calling send-operation on a channel returns an operation, which is just a value. Operations are like closures in a way; a closure wraps up code in its environment, which can be later called many times or not at all. Operations likewise can be performed2 many times or not at all; performing an operation is like calling a function. The interesting part is that you can compose operations via the wrap-operation and choice-operation combinators. The former lets you bundle up an operation and a continuation. The latter lets you construct an operation that chooses over a number of operations. Calling perform-operation on a choice operation will perform one and only one of the choices. Performing an operation will call its wrap-operation continuation on the resulting values.
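Here is a deliberately simplified Python model of these combinators, invented for this post: polling stands in for proper blocking, and queues stand in for channels. It is only meant to show how wrap and choice compose as values; it is not how CML (or Fibers) is actually implemented.

```python
# Toy CML-style first-class operations: an operation carries a
# polling function; wrap adds a continuation, choice picks whichever
# sub-operation can complete, and perform runs one to completion.
import queue
import time

class Operation:
    def __init__(self, poll):
        self._poll = poll            # returns (True, value) or (False, None)

    def wrap(self, continuation):
        # wrap-operation: bundle the operation with a continuation.
        def poll():
            ok, value = self._poll()
            return (True, continuation(value)) if ok else (False, None)
        return Operation(poll)

def choice(*operations):
    # choice-operation: performs exactly one of its sub-operations.
    def poll():
        for op in operations:
            ok, value = op._poll()
            if ok:
                return (True, value)
        return (False, None)
    return Operation(poll)

def receive_operation(q):
    # A base operation: receive from queue q (standing in for a channel).
    def poll():
        try:
            return (True, q.get_nowait())
        except queue.Empty:
            return (False, None)
    return Operation(poll)

def perform(op):
    # perform-operation: block (here: spin) until the operation completes.
    while True:
        ok, value = op._poll()
        if ok:
            return value
        time.sleep(0.001)

# usage: choose over two "channels", tagging the result via wrap
a, b = queue.Queue(), queue.Queue()
b.put(7)
op = choice(receive_operation(a).wrap(lambda v: ("a", v)),
            receive_operation(b).wrap(lambda v: ("b", v)))
print(perform(op))    # -> ('b', 7)
```

Note that op is just a value: it can be passed around, composed into larger choices, and performed again later, which is the expressive gap that a baked-in select statement cannot fill.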
While it’s possible to implement Concurrent ML in terms of Go’s channels and baked-in select statement, it’s more expressive to do it the other way around, as that also lets us implement other operation types besides channel send and receive, for example timeouts and condition variables.
1 CML uses the term event, but I find this to be a confusing name. In this isolated article my terminology probably looks confusing, but in the context of the library I think it can be OK. The jury is out though.
2 In CML, synchronized.
Well, that's my limited understanding of the crushing weight of history. Note that part of this article is now in the Fibers manual.
Thanks very much to Matthew Flatt, Matthias Felleisen, and Michael Sperber for pushing me towards CML. In the beginning I thought its benefits were small and complication large, but now I see it as being the reverse. Happy hacking :)
October 11, 2016
cat filename.txt | sort | uniq
This is a very powerful concept, but it comes with a noticeable performance overhead. To run this pipeline, the shell needs to launch three different processes. In addition, all communication between these programs happens over Unix pipes, meaning a round trip into the kernel for roughly each line of text transferred (not an exact number in all cases). Even though processes and pipes are cheap on Unix, they still cause a noticeable slowdown (as anyone who has run large shell scripts knows).
Meanwhile there has been a lot of effort to put coroutines into C++. A very simple (and not fully exhaustive) way of describing coroutines is to say that they basically implement the same functionality as Python generators. That is, this Python code:
def gen():
yield 1
yield 2
yield 3
could be written in C++ like this (syntax is approximate and might not even compile; note that a coroutine has to return a generator-like type rather than a plain int):
generator<int> gen() {
co_yield 1;
co_yield 2;
co_yield 3;
}
If we look more closely into the Unix userland specification we find that even though it is (always?) implemented with processes and pipes, it is not required to be. If we use C's as-if rule we are free to implement shell pipelines in any way we want as long as the output is the same. This gives us a straightforward way to implement the Unix userland with C++ coroutines.
We can model each Unix command line application as a coroutine that reads its input one line at a time, operates on it, and writes the result to its output (stdout or stderr) in the same way. Then we parse a shell pipeline into its constituent commands, connect the output of element n to the input of element n + 1, and keep reading the output of the last pipeline element until EOF is reached. This allows us to run the entire pipeline without spawning a single process.
A simple implementation with a few basic commands is available on GitHub. Unfortunately real coroutines were not properly available on Linux at the time of writing, only in an unmerged branch of Clang. Therefore the project contains a simple class based implementation that mimics coroutines. Interested readers are encouraged to go through the code and work out how much boilerplate code a proper coroutine implementation would eliminate.
Mini-FAQ
In this last week, the master branch of GTK+ has seen 24 commits, with 3731 lines added and 3351 lines removed.
Planning and status
- Matthias released GTK+ 3.22.1, and created the gtk-3-22 branch for stable releases
- The window on the master branch is now open for 4.0 development
- Benjamin Otte has started working on the removal of the 3.x deprecated API in his wip/otte/gtk4 branch
- Timm Bäder is removing the deprecated style API in his wip/baedert/box branch
- The GTK+ road map is available on the wiki.
Notable changes
- Matthias is working on the build system to ensure that the master branch is parallel installable with the gtk-3-22 and gtk-2-24 stable branches
- The old and deprecated AM_PATH_GTK_3_0 m4 macro for autotools-based build systems has been removed from the master branch; projects depending on GTK+ 3.x should already have been ported to just use pkg-config and the PKG_CHECK_MODULES macro instead.
Bugs fixed
- 772695 – Show the keyboard shortcuts from left to right even in RTL
- 772345 – placesviewrow: busy_spinner when visible offsets the rest of the widgets on the row
- 772415 – Avoid calling eglGetDisplay
- 772389 – Appending a character to a GtkEntry control in overwrite mode rings the bell
Getting involved
Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.
October 10, 2016
Reproducibility, in Debian, is:
With free software, anyone can inspect the source code for malicious flaws. But Debian provides binary packages to its users. The idea of “deterministic” or “reproducible” builds is to empower anyone to verify that no flaws have been introduced during the build process by reproducing byte-for-byte identical binary packages from a given source.
Then, in order to provide reproducible binaries for Vala projects, we need to:
- Make sure you distribute generated C source code
- If you are a library, make sure to distribute VAPI and GIR files
This helps the build process avoid calling valac to regenerate the C source code, VAPI and GIR files from your Vala sources.
Because the C source is distributed with the release tarball, any Vala project can be binary reproducible from its sources.
In order to produce development packages, you should distribute the VAPI and GIR files, along with the .h ones. They should be included in your tarball, so that valac doesn't have to produce them.
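As an illustration, an automake fragment that ships the generated files in the tarball might look like this (a sketch only; the library name "foo" and file names are hypothetical):

```makefile
# Hypothetical Makefile.am fragment for a Vala library "foo":
# ship the valac-generated C sources, VAPI and GIR in the tarball
# so that rebuilding from the dist does not require valac.
EXTRA_DIST = \
	$(libfoo_la_SOURCES:.vala=.c) \
	foo-1.0.vapi \
	Foo-1.0.gir \
	$(NULL)
```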
GXml distributes all its C sources, but not the GIR and VAPI files. This should be fixed in the next release.
GNOME Clocks distributes just the Vala sources; this is why bug #772717 has been filed against Clocks.
libgee also distributes only Vala sources, but no Debian bug exists against it. Maybe its Vala source annotations help, but it may be a good idea to distribute C, VAPI and GIR files in future versions.
My patches to GNOME Builder produce Makefiles that generate C sources from Vala ones. They need to be updated to also distribute VAPI and GIR files with your Vala project.
October 09, 2016
We had a Fedora booth at LinuxDays 2016 in Prague and one of our attractions was Miro Hrončok‘s 3D printer Lulzbot Mini. Because Miro was busy helping organize the conference he just left the printer at the booth and I had to set it up myself. And it really surprised me how easy it is to 3D print using Fedora. Basically what I had to do was:
- installing Cura from the official repositories,
- plugging the printer into a USB (automatically connected due to Miro’s udev rules),
- starting Cura, choosing the model of the printer and material I’m going to print with,
- opening a file with the 3D model I wanted to print,
- hitting the “Print” button and watching the printer in action.
Fedora has been known as the best OS for 3D printing for some time now, mainly due to the work of Miro (he packaged all the available open source software for 3D printing, prepared udev rules to automatically connect to 3D printers etc.), but I was still surprised how easy it is to 3D print with Fedora these days. It really took just a couple of minutes from a stock system to the start of the actual printing. It’s almost as simple as printing on paper.
There is still room for improvements though. Some 3D printing apps (Cura Lulzbot Edition is one of them) are available in the official repositories of Fedora, but don’t have an appdata file, so they don’t show up in GNOME Software. And it would also be nice to have “3D Printing” category in GNOME Software, so that the software is more discoverable for users.
Over the past year I’ve chipped away at setting up new servers for apestaart and managing the deployment in puppet as opposed to a by now years old manual single server configuration that would be hard to replicate if the drives fail (one of which did recently, making this more urgent).
It’s been a while since I felt like I was good enough at puppet to love and hate it in equal parts, but mostly manage to control a deployment of around ten servers at a previous job.
Things were progressing an hour or two here and there at a time, and accelerated when a friend in our collective was launching a new business for which I wanted to make sure he had a decent redundancy setup.
I was saving the hardest part for last – setting up Nagios monitoring with Matthias Saou’s puppet-nagios module, which needs External Resources and storeconfigs working.
Even on the previous server setup based on CentOS 6, that was a pain to set up – needing MySQL and ruby’s ActiveRecord. But it sorta worked.
It seems that for newer puppet setups, you’re now supposed to use something called PuppetDB, which is not in fact a database on its own as the name suggests, but requires another database. Of course, it chose to need a different one – Postgres. Oh, and PuppetDB itself is in Java – now you get the cost of two runtimes when you use puppet!
So, to add useful Nagios monitoring to my puppet deploys, which without it are quite happy to be simple puppet apply runs from a local git checkout on each server, I now need storedconfigs which needs puppetdb which pulls in Java and Postgres. And that’s just so a system that handles distributed configuration can actually be told about the results of that distributed configuration and create a useful feedback cycle allowing it to do useful things to the observed result.
Since I test these deployments on local vagrant/VirtualBox machines, I had to double their RAM because of this – even just the puppetdb java server by default starts with 192MB reserved out of the box.
But enough complaining about these expensive changes – at least there was a working puppetdb module that managed to set things up well enough.
It was easy enough to get the first host monitored, and apart from some minor changes (like updating the default Nagios config template from 3.x to 4.x), I had a familiar Nagios view working showing results from the server running Nagios itself. Success!
But all runs from the other vm’s did not trigger adding any exported resources, and I couldn’t find anything wrong in the logs. In fact, I could not find /var/log/puppetdb/puppetdb.log at all…
fun with utf-8
After a long night of experimenting and head scratching, I chased down a first clue in /var/log/messages saying puppet-master[17702]: Ignoring invalid UTF-8 byte sequences in data to be sent to PuppetDB
I traced that down to puppetdb/char_encoding.rb, and with my limited ruby skills, I got a dump of the offending byte sequence by adding this code:
Puppet.warning "Ignoring invalid UTF-8 byte sequences in data to be sent to PuppetDB"
File.open('/tmp/ruby', 'w') { |file| file.write(str) }
Puppet.warning "THOMAS: is here"
(I tend to use my name in debugging to have something easy to grep for, and I wanted some verification that the File dump wasn’t triggering any errors)
It took a little time at 3AM to remember where these /tmp files end up thanks to systemd, but once found, I saw it was a json blob with a command to “replace catalog”. That could explain why my puppetdb didn’t have any catalogs for other hosts. But file told me this was a plain ASCII file, so that didn’t help me narrow it down.
I brute forced it by just checking my whole puppet tree:
find . -type f -exec file {} \; > /tmp/puppetfile
grep -v ASCII /tmp/puppetfile | grep -v git
This turned up a few UTF-8 candidates. Googling around, I was reminded about how terrible utf-8 handling was in ruby 1.8, and saw information that puppet recommended using ASCII only in most of the manifests and files to avoid issues.
It turned out to be a config from a webalizer module:
webalizer/templates/webalizer.conf.erb: UTF-8 Unicode text
While it was written by a Jesús with a unicode name, the file itself didn’t have his name in it, and I couldn’t obviously find where the UTF-8 chars were hiding. One StackOverflow post later, I had nailed it down – UTF-8 spaces!
00004ba0 2e 0a 23 c2 a0 4e 6f 74 65 20 66 6f 72 20 74 68 |..#..Note for th|
00004bb0 69 73 20 74 6f 20 77 6f 72 6b 20 79 6f 75 20 6e |is to work you n|
The offending character is c2 a0 – the non-breaking space
I have no idea how that slipped into a comment in a config file, but I changed the spaces and got rid of the error.
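In hindsight, the hunt can be shortened: GNU grep (assuming `-P`/PCRE support is available) can match the offending bytes directly. A minimal sketch, with a made-up sample file:

```shell
# Create a sample config containing a non-breaking space (UTF-8 bytes
# c2 a0, written as octal \302\240 so plain printf handles it).
printf 'plain line\nbad\302\240comment\n' > sample.conf

# -P enables Perl regexes so we can match raw bytes; -n prints the
# line number of each offending line.
grep -nP '\xc2\xa0' sample.conf
```

Run recursively over a puppet tree (`grep -rlP '\xc2\xa0' .`) this points straight at the files containing non-breaking spaces, instead of going through `file` output.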
Puppet’s error was vague, did not provide any context whatsoever (Where do the bytes come from? Dump the part that is parseable? Dump the hex representation? Tell me the position in it where the problem is?), did not give any indication of the potential impact, and in a sea of spurious puppet warnings that you simply have to live with, is easy to miss. One down.
However, still no catalogs on the server, so still only one host being monitored. What next?
users, groups, and permissions
Chasing my next lead turned out to be my own fault. After turning off SELinux temporarily, checking all permissions on all puppetdb files to make sure that they were group-owned by puppetdb and writable for puppet, I took the last step of switching to that user role and trying to write the log file myself. And it failed. Huh? And then id told me why – while /var/log/puppetdb/ was group-writeable and owned by puppetdb group, my puppetdb user was actually in the www-data group.
It turns out that I had tried to move some uids and gids around after the automatic assignment puppet does gave different results on two hosts (a problem I still don’t have a satisfying answer for, as I don’t want to hard-code uids/gids for system accounts in other people’s modules), and clearly I did one of them wrong.
I think a server that for whatever reason cannot log should simply not start, as this is a critical error if you want a defensive system.
After fixing that properly, I now had a puppetdb log file.
resource titles
Now I was staring at an actual exception:
2016-10-09 14:39:33,957 ERROR [c.p.p.command] [85bae55f-671c-43cf-9a54-c149cedec659] [replace catalog] Fatal error on attempt 0
java.lang.IllegalArgumentException: Resource '{:type "File", :title "/var/lib/puppet/concat/thomas_vimrc/fragments/75_thomas_vimrc-\" allow adding additional config through .vimrc.local_if filereadable(glob(\"~_.vimrc.local\"))_\tsource ~_.vimrc.local_endif_"}' has an invalid tag 'thomas:vimrc-" allow adding additional config through .vimrc.local
if filereadable(glob("~/.vimrc.local"))
source ~/.vimrc.local
endif
'. Tags must match the pattern /\A[a-z0-9_][a-z0-9_:\-.]*\Z/.
at com.puppetlabs.puppetdb.catalogs$validate_resources.invoke(catalogs.clj:331) ~[na:na]
Given the name of the command (replace catalog), I felt certain this was going to be the problem standing between me and multiple hosts being monitored.
The problem was a few levels deep, but essentially I had code creating fragments of vimrc files using the concat module, and was naming the resources with file content as part of the title. That’s not a great idea, admittedly, but no other part of puppet had ever complained about it before. Even the files on my file system that store the fragments, which get their filename from these titles, happily stored with a double quote in its name.
So yet again, puppet’s lax approach to specifying types of variables at any of its layers (hiera, puppet code, ruby code, ruby templates, puppetdb) in any of its data formats (yaml, json, bytes for strings without encoding information) triggers errors somewhere in the stack without informing whatever triggered that error (ie, the agent run on the client didn’t complain or fail).
Once again, puppet has given me plenty of reasons to hate it with a passion, tipping the balance.
I couldn’t imagine doing server management without a tool like puppet. But you love it when you don’t have to tweak it much, and you hate it when you’re actually making extensive changes. Hopefully after today I can get back to the loving it part.
I wanted to try the latest version of Builder (gnome-builder 3.22) so I decided to install it with Flatpak. Here is a screenshot of GNOME Builder.
To run GNOME Builder nightly with any of my instructions and use all of its features, you need Flatpak 0.6.11 or later. In case you run GNOME Software 3.23 or later, I made a .flatpakref for GNOME Builder Nightly. You can download it and open it with GNOME Software to install Builder (note: you need to have Flatpak installed):
gnome-builder-nightly.flatpakref
If that doesn’t work for you, you can also do it via Flatpak’s command line:
Installing the flatpakref
- Install gnome-nightly.flatpakrepo:
flatpak --user install --from gnome-nightly.flatpakrepo
- Install the gnome-builder-nightly.flatpakref:
flatpak --user install --from gnome-builder-nightly.flatpakref
Manual approach
- Install GNOME Nightly SDK keys:
wget https://sdk.gnome.org/nightly/keys/nightly.gpg
flatpak --user remote-add --gpg-import=nightly.gpg gnome-nightly https://sdk.gnome.org/nightly/repo/
See also: http://flatpak.org/runtimes.html
- Install GNOME Nightly SDK:
flatpak --user install gnome-nightly org.gnome.Sdk
- Install GNOME Nightly Apps keys:
wget https://sdk.gnome.org/nightly/keys/nightly.gpg
flatpak --user remote-add --gpg-import=nightly.gpg gnome-nightly-apps https://sdk.gnome.org/nightly/repo-apps/
See also: http://flatpak.org/apps.html#nightly-gnome
- Install org.gnome.Builder:
flatpak --user install gnome-nightly-apps org.gnome.Builder
EDIT: Sorry, I had forgotten to add --user to all the commands; should be fixed now!
EDIT2: In case you are wondering why the flatpakrepo and flatpakref are hosted on the GNOME cloud space: it’s because I haven’t gotten in touch with the right people to get official files up in the sdk.gnome.org space where they (imo) belong. Stay tuned!
In my previous blog post, where I was providing an update on the 2016 GNOME Summit I was organizing in Montréal, I wrote,
With a change of attendees comes a change of the nature of the event: instead of being an extremely technical, “deep end of the pool” event as it has been in the past, this edition will be focused on newcomers and potential contributors, with presentations and workshops targeted for this purpose.
In practice, it turned out to be a great event. It was halfway between the traditional highly technical gathering and the “event aimed at new contributors”, with 13 attendees including myself.
After welcoming participants and opening the event with a rough action plan, I gave a short presentation to set the stage and provide an overview of what GNOME is (a collection of software, a community, and the GNOME Foundation). The point was to have an icebreaker talk that would ensure all participants are on the same level of understanding as to who they were dealing with. This eventually morphed into a joint talk with Hubert that looked more like a Saturday Night Live show with two hosts sitting in fancy chairs in front of the audience (wish I had pictures of the two of us taken from the outside to illustrate the “TV show” feel).
We gradually got people to open up and get involved by probing participants with questions. After all, the chief goal is to motivate people into contributing to GNOME, showing them the various ways they can get involved, how the GNOME Project’s release cycle works (and where the timing of their contributions comes in) and what technologies are involved. We sang the praises of Flatpak and GNOME Builder (even though he uses Emacs and I use Gedit ;)
Our introductory discussion was then followed by a presentation by Guillaume Poirier-Morency on Vala, Meson, pollcore and Valum.
David Mirza Ahmad also presented Subgraph OS, the problem it aims to solve and how GNOME fits into the picture. Their distro runs a hardened GNOME 3 and it’s quite relevant to our interests. One could have a fun weekend in Manchester with these folks!
It seems the Subgraph talk captured attendees’ interest and ended up morphing into a security discussion that took the rest of the afternoon.
While I initially thought we wouldn’t have enough content to fill more than a day, we ended up not having time to do newcomers workshops (short & sweet, as I would say). There was a shared interest among participants to try creating a GNOME community in Montréal to have such meetings more often. So, for starters, I have now created the #gnome-montreal IRC channel on irc.gnome.org. Get your butt over there if you’re in the area.
I would like to thank ÉTS and ML2 for providing us with a nice meeting space for the event, and the Subgraph folks for sponsoring dinner on Saturday night (a very nice touch), where lively debates were facilitated by beer and food.
Visualizers are almost ready to land. This week I got smooth scrolling, zoom, and selections working. The handy part about selections is that it allows you to update the callgraph to limit stack samples to those falling within a given time range.
October 08, 2016
But why automake? Cargo is nice.
Yes it is. But it is also limited to building the Rust crate. It does one thing, very well, and easily.
But I'm writing a GNOME application, and that needs more than just building the code. So I decided to wrap the build process into automake.
Let's start with Autoconf for Rust Project. This post is a great introduction to solving the problem and gives an actual example of doing it, even though the author only uses autoconf. I need automake too, but this is a good start.
We'll basically write a configure.ac and a Makefile.am in the top level Rust crate directory.
AC_INIT([gpsami], m4_esyscmd([grep ^version Cargo.toml | awk '{print $3}' | tr -d '"' | tr -d "\n"]), [hub@figuiere.net])
AM_INIT_AUTOMAKE([1.11 foreign no-dependencies no-dist-gzip dist-xz subdir-objects])
Let's init autoconf and automake. We use the options: foreign to not require all the GNU files, no-dependencies because we don't have dependency tracking done by make (cargo does that for us) and subdir-objects because we have one Makefile.am and don't want recursive mode.
The m4_esyscmd macro is a shell command to extract the version out of the Cargo.toml.
VERSION=$(grep ^version Cargo.toml | awk '{print $3}' | tr -d '"' | tr -d "\n")
This does the same as above, but puts it into VERSION.
This shell command was adapted from Autoconf for Rust Project but fixed as it was being greedy and also captured the "version" strings from the dependencies.
AC_CHECK_PROG(CARGO, [cargo], [yes], [no])
AS_IF(test x$CARGO = xno,
AC_MSG_ERROR([cargo is required])
)
AC_CHECK_PROG(RUSTC, [rustc], [yes], [no])
AS_IF(test x$RUSTC = xno,
AC_MSG_ERROR([rustc is required])
)
Check for cargo and rustc. I'm pretty sure without rustc you don't have cargo, but better be safe than sorry. Note that this is considered a fatal error at configure time.
dnl Release build we do.
CARGO_TARGET_DIR=release
AC_SUBST(CARGO_TARGET_DIR)
This is a trick: we need the cargo target directory. We hardcode to release as that's what we want to build.
The end is pretty much standard.
So far just a few tricks.
desktop_files = data/gpsami.desktop
desktopdir = $(datadir)/applications
desktop_DATA = $(desktop_files)

ui_files = src/mgwindow.ui \
	$(null)
Just some basic declarations in the Makefile.am. The desktop file with installation target and the ui_files. Note that at the moment the ui files are not installed because we inline them in Rust.
EXTRA_DIST = Cargo.toml \
	src/devices.json \
	src/devices.rs \
	src/drivers.rs \
	src/gpsbabel.rs \
	src/main.rs \
	src/mgapplication.rs \
	src/utils.rs \
	$(ui_files) \
	$(desktop_in_files) \
	$(null)
We want to distribute the source files and the desktop files. This will get more complex when the crate grows as we'll need to add more files to here.
all-local:
	cargo build --release

clean-local:
	-cargo clean
Drive build and clean targets with cargo.
install-exec-local:
	$(MKDIR_P) $(DESTDIR)$(bindir)
	$(INSTALL) -c -m 755 target/@CARGO_TARGET_DIR@/gpsami $(DESTDIR)$(bindir)
We have to install the binary by hand. That's one of the drawbacks of cargo.
With this, we do:
$ autoreconf -si
$ ./configure
$ make
# make install
This builds in release mode and installs into the prefix. You can even run make dist, which is another reason why I wanted to do this.
Caveats: I know this will not work if we build in a different directory than the source directory. make distcheck fails for that reason.
I'm sure there are ways to improve this, and I will probably, but I wanted to give a recipe for something I wanted to do.
October 06, 2016
Now that my time as an intern is over, I want to take a moment to thank Outreachy for giving me the opportunity to be a part of this amazing experience. Also a big thank you to my mentor Jim Hall and the GNOME design team (Allan and Jakub) for the guidance and encouragements they provided throughout these months. And finally, a thank you to GNOME community for being awesome ^_^
For future applicants
Lately I have been receiving some emails with questions regarding the application, so I thought I’d answer them here by explaining the process in few short steps.
Step 1: Am I eligible?
So, the first step is to check out if you are eligible. You can do that by meeting these requirements.
Step 2: Where can I find the participating organizations?
Outreachy provides a wide range of organizations and projects to choose from. Most likely you will be able to find something that suits you. You can find which organizations are offering internships this round here.
They are divided into three categories: Organizations That Joined Most Recently, Organizations Looking for Applicants and Organizations That Already Have Many Applicants.
If you have just learned about the program and don’t have a strong preference about which organization to work with, consider the “Joined Most Recently” first.
I also suggest you go through Planet Outreach. That is a great way to explore the projects and get an idea of how these projects work and what your responsibilities will be, by reading past interns' blogs where they described their work weekly.
Step 3: First contribution?
The next step is making a small contribution to the project you want to apply for. Contact the project’s mentor and ask for a suitable contribution. Make sure to follow the deadlines. Also keep working on more contributions after that, if you have time. That will help strengthen your application.
Step 4: Where to apply?
You can find the application system here, but you should submit your application only after you have selected a project and already made the initial contribution.

What makes a good application?
I think this boils down to some key things. Good communication, respecting deadlines, and most importantly first contribution. My mentor published a post recently where he stated that ”If you plan to apply for a future cycle in Outreachy, don’t forget the initial contribution. Take it seriously. Reach out to the project’s mentors and discuss the initial contribution and how to approach it. I know I took into consideration each applicant’s engagement and relative success in the initial contribution, and that mattered when selecting interns for the program.”
Useful links:
1. Jim’s advice for future interns: http://opensource-usability.blogspot.com/2016/09/wrap-up-from-this-cycle-of-outreachy.html
2. Advice from other interns:
[Outreachy] Tips for the kernel newbies: https://vthakkar1994.wordpress.com/2016/09/13/outreachy-tips-for-the-kernel-newbies/
Outreachy: What? How? Why?: http://vakila.github.io/blog/outreachy-what-how-why/
3. Past usability interns (if you are interested in applying for usability testing):
Sanskriti: https://sanskritidawle.wordpress.com/
Gina: https://ginadobrescu.wordpress.com/
Ciarrai: http://www.nonerdsallowed.com/
Diana: https://dkripak.wordpress.com/

No more “which is now the index of this modem…?”
DBus object path and index
When modems are detected by ModemManager and exposed in DBus, they are assigned a unique DBus object path, with a common prefix and a unique index number, e.g.:
/org/freedesktop/ModemManager1/Modem/0
This path is the one used by the mmcli command line tool to operate on a modem, so users can identify the device by the full path or just by the index, e.g. these two calls are totally equivalent:
$ mmcli -m /org/freedesktop/ModemManager1/Modem/0
$ mmcli -m 0
This logic looks good, except for the fact that there isn’t a fixed DBus object path for each modem detected: i.e. the index given to a device is the next one available, and if the device is power cycled or unplugged and replugged, a different index will be given to it.
EquipmentIdentifier
Systems like NetworkManager handle this index change gracefully, just by assuming that the exposed device isn’t the same one as the one exposed earlier with a different index. If settings need to be applied to a specific device, they will be stored associated with the EquipmentIdentifier property of the modem, which is the same across reboots (i.e. the IMEI for GSM/UMTS/LTE devices).
User-provided names
The 1.8 stable release of ModemManager will come with support for user-provided names assigned to devices. A use case of this new feature is for example those custom systems where the user would like to assign a name to a device based on the USB port in which it is connected (e.g. assuming the USB hardware layout doesn’t change across reboots).
The user can specify the names (UIDs, unique IDs) just by tagging in udev the physical device that owns all ports of a modem with the new ID_MM_PHYSDEV_UID property. These tags need to be applied before the ID_MM_CANDIDATE properties, and therefore the rules file should be named so that it sorts before the 80-mm-candidate.rules one, for example like this:
$ cat /lib/udev/rules.d/78-mm-naming.rules
ACTION!="add|change|move", GOTO="mm_naming_rules_end"
DEVPATH=="/devices/pci0000:00/0000:00:1d.0/usb4/4-1/4-1.5/4-1.5.5",ENV{ID_MM_PHYSDEV_UID}="USB1"
DEVPATH=="/devices/pci0000:00/0000:00:1d.0/usb4/4-1/4-1.5/4-1.5.2",ENV{ID_MM_PHYSDEV_UID}="USB2"
DEVPATH=="/devices/pci0000:00/0000:00:1d.0/usb4/4-1/4-1.5/4-1.5.3",ENV{ID_MM_PHYSDEV_UID}="USB3"
DEVPATH=="/devices/pci0000:00/0000:00:1d.0/usb4/4-1/4-1.5/4-1.5.4",ENV{ID_MM_PHYSDEV_UID}="USB4"
LABEL="mm_naming_rules_end"
The value of the new ID_MM_PHYSDEV_UID property will be used in the Device property exposed in the DBus object, and can also be used directly in mmcli calls instead of the path or index, e.g.:
$ mmcli -m USB4
...
-------------------------
System | device: 'USB4'
| drivers: 'qmi_wwan, qcserial'
| plugin: 'Sierra'
| primary port: 'cdc-wdm2'
...
Given that the same property value will always be set for the modem in a specific device path, these user-provided names may unambiguously identify a specific modem even when the device is power-cycled, unplugged and replugged, or even when the whole system is rebooted.
Binding the property to the device path is just an example of what could be done. There is no restriction on what the logic is to apply the ID_MM_PHYSDEV_UID property, so users may also choose other different approaches.
This support is already in ModemManager git master, and as already said, will be included in the stable 1.8 release, whenever that is.
TL;DR? ModemManager now supports assigning unique names to devices, names that persist even across full system reboots.
Filed under: Development, FreeDesktop Planet, GNOME Planet, Planets Tagged: gnome, gnu/linux, ModemManager, udev
October 05, 2016
In this last week, the master branch of GTK+ has seen 33 commits, with 9362 lines added and 8025 lines removed.
Planning and status
- Matthias released GTK+ 3.21.6, the last snapshot before the 3.22.0 release of September 21st.
- The GTK+ road map is available on the wiki.
Notable changes
- This week we had mostly translation updates for property descriptions and user-visible messages, in preparation for the 3.22 release.
- Ruslan fixed the GdkEventKey.is_modifier structure field on Windows to report Ctrl, Alt, and Shift keys as modifiers.
- Jonas Ådahl landed last minute fixes for combo boxes and other popups under Wayland.
Bugs fixed
- 771117 – gtk3 3.21.5 broke displaying drop-down lists, need to scroll to see contents
- 771349 – gdk_screen_get_monitor_scale_factor on X11 always returns 1 with GTK 3.21+
- 771463 – variable may be used uninitialized in gtk_widget_render
- 602773 – GdkEventKey.is_modifier is 0 for Shift, Ctrl, Alt keys
Getting involved
Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.
systemd.conf 2016 is Over Now!
A few days ago systemd.conf 2016 ended, our second conference of this kind. I personally enjoyed this conference a lot: the talks, the atmosphere, the audience, the organization, the location, they all were excellent!
I'd like to take the opportunity to thank everybody involved. In particular I'd like to thank Chris, Daniel, Sandra and Henrike for organizing the conference, your work was stellar!
I'd also like to thank our sponsors, without whom the conference couldn't take place like this, of course. In particular I'd like to thank our gold sponsor, Red Hat, our organizing sponsor Kinvolk, as well as our silver sponsors CoreOS and Facebook. I'd also like to thank our bronze sponsors Collabora, OpenSUSE, Pantheon, Pengutronix, our supporting sponsor Codethink and last but not least our media sponsor Linux Magazin. Thank you all!
I'd also like to thank the Video Operation Center ("VOC") for their amazing work on live-streaming the conference and making all talks available on YouTube. It's amazing how efficient the VOC is, it's simply stunning! Thank you guys!
In case you missed this year's iteration of the conference, please have a look at our YouTube Channel. You'll find all of this year's talks there, as well as the ones from last year. (For example, my welcome talk is available here.) Enjoy!
We hope to see you again next year, for systemd.conf 2017 in Berlin!
The gspell fundraising has reached its initial goal! So thanks a lot for your support!
Expect GtkEntry support in the next version of gspell, which is planned for March 2017.
I’ve added a second milestone for the gspell fundraising, because there are a lot of other possible things to do.
The LaTeXila fundraising is going well too, it is currently at 80%. So thanks to the people who donated so far!
If you write documents in LaTeX, and care about having a good LaTeX editor, well maintained and well integrated with GNOME, then consider donating to LaTeXila ;) There will hopefully be other milestones in the future, for example to improve the auto-completion (e.g. completing the label in \ref commands) or to implement a full-screen mode.
Of course, just a few hours after releasing frogr 1.1, I’ve noticed that there was actually no good reason to depend on gettext 0.19.8 just for the purpose of removing the intltool dependency, since 0.19.7 is enough.
So, as raising that requirement to 0.19.8 was causing trouble packaging frogr for some distros still on 0.19.7 (e.g. Ubuntu 16.04 LTS), I’ve decided to do a quick new release, and frogr 1.2 is now out with that only change.
One direct consequence is that you can now install the packages for Ubuntu from my PPA if you have Ubuntu Xenial 16.04 LTS or newer, instead of having to wait for Ubuntu Yakkety Yak (yet to be released). Other than that, 1.2 is exactly the same as 1.1, so you probably don’t want to package it for your distro if you already did that for 1.1 without trouble. Sorry for the noise.
I had a great time last week at the web engines hackfest! It was the 7th web hackfest hosted by Igalia and the 7th hackfest I attended. I’m almost a local Galician already; Brazilian Portuguese being so close to Galician certainly helps! Collabora co-sponsored the event, and it was great that two colleagues of mine managed to join me in attendance.
It had great talks that will eventually end up in videos uploaded to the web site. We were amazed at the progress being made to Servo, including some performance results that blew our minds. We also discussed the next steps for WebKitGTK+, WebKit for Wayland (or WPE), our own Clutter wrapper to WebKitGTK+ which is used for the Apertis project, and much more.
Zan giving his talk on WPE (formerly WebKitForWayland)
One thing that drew my attention was how many Dell laptops there were. Many collaborans (myself included) and igalians are now using Dells, it seems. Sure, there were thinkpads and macbooks, but there were plenty of inspirons and xpses as well. It’s interesting how the brand makeup has shifted over the years since 2009, when the hackfest could easily have been mistaken for a thinkpad shop.
Back to the actual hackfest: with the recent release of GNOME 3.22 (and Fedora 25 nearing release), my main focus was on dealing with some regressions users experienced after a change in how the final rendering, composited by the nested Wayland compositor we have inside WebKitGTK+, is put onto the GTK+ widget so it is shown on the screen.
One of the main problems people reported was applications that use WebKitGTK+ not showing anything where the content was supposed to appear. It turns out the problem was caused by GTK+ not being able to create a GL context. If the system was simply not able to use GL there would be no problem: WebKit would then just disable accelerated compositing and things would work, albeit slower.
The problem was WebKit being able to use an older GL version than the minimum required by GTK+. We fixed it by testing that GTK+ is able to create GL contexts before using the fast path, falling back to the slow glReadPixels codepath if not. This way we keep accelerated compositing working inside WebKit, which gives us nice 3D transforms and less repainting, but take the performance hit in the final “blit”.
Introducing “WebKitClutterGTK+”
Another issue we hit was GTK+ not properly updating its knowledge of the window’s opaque region when painting a frame with GL, which led to some really interesting issues, like a shadow appearing when you tried to shrink the window. There was also a likely related issue where the window would not use all of the screen when fullscreened. Both were fixed.
André Magalhães also worked on a couple of patches we wrote for customer projects and are now pushing upstream. One enables the use of more than one frontend to connect to a remote web inspector server at once. This can be used to, for instance, show the regular web inspector on a browser window and also use IDE integration for setting breakpoints and so on.
The other patch was cooked by Philip Withnall and helped us deal with some performance bottlenecks we were hitting. It improves the performance of painting scroll bars. WebKitGTK+ does its own painting of scrollbars (we do not use the GTK+ widgets for various reasons). It turns out painting scrollbars can be quite a hit when the page is being scrolled fast, if not done efficiently.
Emanuele Aina had a great time learning more about meson to figure out a build issue we had when a more recent GStreamer was added to our jhbuild environment. He came out of the experience rather sane, which makes me think meson might indeed be much better than autotools.
Igalia 15 years cake
It was a great hackfest, great seeing everyone face to face. We were happy to celebrate Igalia’s 15 years with them. Hope to see everyone again next year =)
After almost one year, I’ve finally released another small iteration of frogr with a few updates and improvements.
Not many things, to be honest, but just a few, as I said:
- Added support for flatpak: it’s now possible to authenticate frogr from inside the sandbox, as well as open pictures/videos in the appropriate viewer, thanks to the OpenURI portal.
- Updated translations: as it was noted in the past when I released 1.0, several translations were left out incomplete back then. Hopefully the new version will be much better in that regard.
- Dropped the build dependency on intltool (requires gettext >= 0.19.8).
- A few bugfixes too and other maintenance tasks, as usual.
Besides, another significant difference compared to previous releases is related to the way I’m distributing it: in the past, if you used Ubuntu, you could configure my PPA and install it from there even in fairly old versions of the distro. However, this time that’s only possible if you have Ubuntu 16.10 “Yakkety Yak”, as that’s the one that ships gettext >= 0.19.8, which is required now that I removed all trace of intltool (more info in this post).
However, this is also the first time I’m using flatpak to distribute frogr so, regardless of which distribution you have, you can now install and run it as long as you have the org.gnome.Platform/x86_64/3.22 stable runtime installed locally. Not too bad! :-). See more detailed instructions on its web site.
That said, it’s important that you also have the portal frontend service and a backend implementation, so that you can authorize your Flickr account using the browser outside the sandbox, via the OpenURI portal. If you don’t have that at hand, you can still use the sandboxed version of frogr, but you’d need to copy your configuration files from a non-sandboxed frogr (under ~/.config/frogr) first, right into ~/.var/app/org.gnome.Frogr/config, and then it should be usable again (opening files in external viewers would not work yet, though!).
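As a sketch, the copy step above looks like this. It is run here against a scratch HOME so it is safe to try anywhere, and `accounts.xml` is only a stand-in file name, since the actual contents of ~/.config/frogr are not described here:

```shell
# Use a scratch HOME so this sketch is safe to run anywhere.
HOME="$(mktemp -d)"

# Stand-in for an existing non-sandboxed frogr configuration.
mkdir -p "$HOME/.config/frogr"
echo "example" > "$HOME/.config/frogr/accounts.xml"

# The migration step: copy the config into the flatpak app's
# private configuration directory.
mkdir -p "$HOME/.var/app/org.gnome.Frogr/config"
cp -r "$HOME/.config/frogr" "$HOME/.var/app/org.gnome.Frogr/config/"

ls "$HOME/.var/app/org.gnome.Frogr/config/frogr"
```

On a real system you would of course drop the scratch-HOME lines and just run the `mkdir -p` and `cp -r` against your own home directory.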
So this is all; hope it works well and is helpful to you. I finished uploading a few hundred pictures just a couple of days ago and it seemed to work fine, but you never know… the devil is in the details!
October 03, 2016
The 3.24 cycle is just getting started, and I have a few plans for Sysprof to give us a more polished profiling experience in Builder. The details can be found on the mailing list.
In particular, I’d love to land support for visualizers. I expect this to happen soon, since there is just a little bit more to work through to make that viable. This will enable us to get a more holistic view of performance and allow us to drill into callgraphs during a certain problematic period of the profile.
Once we have visualizer support, we can start doing really cool things like extracting GPU counters, gdk/clutter frame-clock timing, dbus/cpu/network monitors and whatever else you come up with.
A CPU visualizer displayed above the callgraph to provide additional context to the profiled execution.
Additionally, we have some work to do around getting access to symbols when we are running in binary-stripped environments. This means we can upload a stripped binary to your IoT/low-power device to profile, but have the instruction-pointer-to-symbol resolution happen on the developer's workstation.
As I just alluded to, I’d love to see remote profiling happen too. There is some plumbing that needs to occur here, but in general it shouldn’t be terribly complicated.
This isn't limited to individual relationships. Something that distinguishes good customer service from bad customer service is getting the details right. There are many industries where significant failures happen infrequently, but minor ones happen a lot. Would you prefer to give your business to a company that handles those small details well (even if they're not overly annoying) or one that just tells you to deal with them?
And the same is true of software communities. A strong and considerate response to minor bug reports makes it more likely that users will be patient with you when dealing with significant ones. Handling small patch contributions quickly makes it more likely that a submitter will be willing to do the work of making more significant contributions. These things are well understood, and most successful projects have actively worked to reduce barriers to entry and to be responsive to user requests in order to encourage participation and foster a feeling that they care.
But what's often ignored is that this applies to other aspects of communities as well. Failing to use inclusive language may not seem like a big thing in itself, but it leaves people with the feeling that you're less likely to do anything about more egregious exclusionary behaviour. Allowing a baseline level of sexist humour gives the impression that you won't act if there are blatant displays of misogyny. The more examples of these "insignificant" issues people see, the more likely they are to choose to spend their time somewhere else, somewhere they can have faith that major issues will be handled appropriately.
There's a more insidious aspect to this. Sometimes we can believe that we are handling minor issues appropriately, that we're acting in a way that handles people's concerns, while actually failing to do so. If someone raises a concern about an aspect of the community, it's important to discuss solutions with them. Putting effort into "solving" a problem without ensuring that the solution has the desired outcome is not only a waste of time, it alienates those affected even more - they're now not only left with the feeling that they can't trust you to respond appropriately, but that you will actively ignore their feelings in the process.
It's not always possible to satisfy everybody's concerns. Sometimes you'll be left in situations where you have conflicting requests. In that case the best thing you can do is to explain the conflict and why you've made the choice you have, and demonstrate that you took this issue seriously rather than ignoring it. Depending on the issue, you may still alienate some number of participants, but it'll be fewer than if you just pretend that it's not actually a problem.
One warning, though: while building trust in this way enhances people's willingness to join your community, it also builds expectations. If a significant issue does arise, and if you fail to handle it well, you'll burn a lot of that trust in the process. The fact that you've built that trust in the first place may be what saves your community from disintegrating completely, but people will feel even more betrayed if you don't actively work to rebuild it. And if there's a pattern of mishandling major problems, no amount of getting the details right will matter.
Communities that ignore these issues are, long term, likely to end up weaker than communities that pay attention to them. Making sure you get this right in the first place, and setting expectations that you will pay attention to your contributors, is a vital part of building a meaningful relationship between your community and its members.
For some time I worked at Igalia to enable WebRTC on WebKitForWayland or WPE for the Raspberry Pi 2.
The goal was to have the WebKit WebRTC tests working for a demo. My fellow Igalian Alex was working on the platform itself in WebKit and assisting with some tuning for the Pi on WebKit but the main work needed to be done in OpenWebRTC.
My other fellow Igalian Phil had begun a branch for this work that was halfway there, with some workarounds. My first task was getting into combat/workaround mode and making OpenWebRTC work with compressed streams from gst-rpicamsrc. OpenWebRTC supported only raw video streams, and the Raspberry Pi Cam module GStreamer element provides only H264-encoded ones. I moved some encoders and parsers around, made some caps modifications, removed some elements that didn’t work on the Pi, and eventually made it work. You can see the result at:
To make this work yourself, you needed a custom branch of Buildroot where you could build with the proper plugins enabled, and you also had to select the appropriate branches of WPE and OpenWebRTC.
Unfortunately the work was far from finished, so I continued the effort to bring the architectural changes in OpenWebRTC to production quality, and that meant doing some tasks step by step:
- Rework the video orientation code: the workaround had deactivated it, as so far it was being done in GStreamer. In the case of rpicamsrc this can be done by the hardware itself, so I cooked up a GStreamer interface to enable rotation the same way it was done for the [gl]videoflip elements. The idea is to deprecate the original ones and use the new interface. These landed in both videoflip and glvideoflip. Of course I also implemented it in gst-rpicamsrc here and here, and eventually in the OpenWebRTC sources.
- Rework video flip: Once OpenWebRTC sources got orientation support, I could rework the flip both for local and remote feeds.
- Add gl{down|up}load elements back: There were some issues with the gl elements to upload and download textures, which we had removed. I readded them again.
- Reworked bin linking: in OpenWebRTC there are bins created to perform certain tasks, and depending on the circumstances you add some elements or not. I reworked the way those elements are linked so that we don’t have to take all the use cases into account when linking them. Now this is easier, as the elements are linked as they are added to the bin.
- Reworked the renderer_disabled: as in the orientation case, some elements such as gst-rpicamsrc are able to change color and balance, so I added support for that to avoid having it done by GStreamer elements when not necessary. In this case the proper interfaces were already there in GStreamer.
- Moved the decoding/parsing from the source to the renderer: before our changes, the source was parsing/decoding the remote feeds; local sources were not decoded, as only raw was supported. Our workarounds made the local sources decode too, but this did not work for all cases. So why decode at the sources, when GStreamer has caps and you can just chain all that to the renderers? So eventually I moved the parsing/decoding to the renderers. This required fixing all the caps negotiation from sources to renderers. Unfortunately I think I broke audio on the way, but surely nothing difficult to fix.
This is still a work in progress, and now I am changing tasks and handing this back over to my fellow Igalian Phil, who I am sure will do an awesome job together with Alex.
And again, thanks to Igalia for letting me work on this and to Metrological that is sponsoring this work.
Hey guys, last Saturday we had our [first?] GNOME release party here in São Paulo, Brazil. It was very, very productive and, most importantly, fun!

Georges already blogged about it, check it out!
Also, we had a feeling that another event with a more hacking-oriented focus would be interesting. So I set up a meetup to see if there are enough people interested in such a hands-on event.
October 02, 2016
Interested in a preview of the course? Here's a quick breakdown of the syllabus:
Introduction to usability studies and how users interact with systems using open source software as an example. Students learn usability methods, then explore and contribute to open source software by performing usability tests, presenting their analysis of these tests, and making suggestions or changes that may improve the usability.
Course objectives:
- To understand what usability is and apply basic principles for how to test usability
- To design and develop personas, scenarios, and other artifacts for usability testing
- To create and execute a usability test against an actual product
- To identify and reflect on the value and presentation of usability test results
Each student will work with the professor and other students to choose an individual project to complete during the second half of the term.
Requirements:
- Class engagement (discussions, presentations, feedback)
- Projects (small group project, and larger individual project)
- Final paper to document your individual project
Each discussion will be worth 5 points, graded on a scale: no points for no discussion posted, and 1 to 5 points based on the quality of your discussion. For example: a well-thought-out discussion will be given 5 points; a sketched-out discussion post will earn 1 point.
Course outline:
- Introduction
- What is usability?
- How do we test usability?
- Personas
- Scenarios
- Scenario tasks
- User interfaces
- Mini project (two weeks)
- Final project (four weeks)
- Final paper
I will also change the points. Last time, I had a 60/40 split for discussion points and final paper. I totaled your discussion, and that was 60% of your grade; your final paper was the other 40%. This year, I plan to make the points cumulative. If you assume 20 discussions at 5 points each, that's 100 points for discussion. The paper is an additional 50 points. That makes it clear you cannot skip the discussion and hope for a strong paper; neither can you punt the paper and rely on your discussion points. You need to participate every week and you need to make an effort on the final paper to get a good grade in the class.
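The cumulative scheme above works out to simple arithmetic (the 20-discussion count is the assumption stated in the paragraph):

```python
# Sketch of the cumulative grading scheme described above.
discussions = 20           # assumed number of graded discussions
points_per_discussion = 5  # each discussion is worth up to 5 points
paper_points = 50          # the final paper adds 50 points

discussion_total = discussions * points_per_discussion
course_total = discussion_total + paper_points

print(discussion_total)                           # 100
print(course_total)                               # 150
print(round(discussion_total / course_total, 2))  # 0.67 (was 0.60 last year)
```

So the paper's weight drops slightly (from 40% to a third of the total), but skipping either component still makes a good grade impossible.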
Today, we had the first GNOME release party I’ve ever attended. All thanks to the organizing efforts of Jonh Wendell – really, we owe you one!
The Faria Lima avenue, famous here in São Paulo, and where the Red Hat office is located.
While this release party was small, I was surprised how well the conversation flowed. I thought that 1 hour would be enough, but I was plain wrong! I stayed for 3 hours, and only left because I really had to.

I gave a simple talk about Flatpak, an overview of how it works and why it is important. The feedback was great, really – the attendees apparently were interested in learning about it, and we had a really productive discussion about the topic (after almost 2 hours of side talks before I started presenting it).
Jonh took care of bringing a GNOME cake and, while I couldn’t eat it, it’s officially the first GNOME cake I’ve ever seen
The infamous GNOME cake, with some coxinhas and vegan appetizers.
I’m happy I found some time to go there (an almost 1h30min journey to get there, but worth every single second). GNOME community in São Paulo, do show up at the next release party, and let’s have a great time again. Thanks folks!

October 01, 2016
Short version: the master branch of Mono now has support for TLS 1.2 out of the box. This means that SslStream now uses TLS 1.2, and uses of HttpWebRequest for HTTPS endpoints also use TLS 1.2 on the desktop.
This brings TLS 1.2 to Mono on Unix/Linux in addition to Xamarin.{Mac,iOS,tvOS} which were already enabled to use TLS 1.2 via the native Apple TLS stack.
To use it, install your fresh version of Mono, and then run the btls-cert-sync command, which will convert your existing list of trusted certificates to the new format (if you used cert-sync or mozroots in the past).
In Detail
The new version of Mono now embeds Google's BoringSSL as the TLS implementation to use.
Last year, you might remember that we completed a C# implementation of TLS 1.2. But we were afraid of releasing a TLS stack that had not been audited, that might contain exploitable holes, and that we did not have the cryptographic chops to ensure that the implementation was bullet proof.
So we decided that rather than ship a brand new TLS implementation we would use a TLS implementation that had been audited and was under active development.
So we picked BoringSSL, Google's fork of OpenSSL. This is the stack that powers Android and Google Chrome, so we felt more comfortable using it than a brand new implementation.
Linux Distributions
We are considering adding a --with-openssl-cert-directory= option to the configure script so that Linux distributions that package Mono could pass a directory that contains trusted root certificates in the format expected by OpenSSL.
Let us discuss the details on mono-devel-list@lists.dot.net.
September 30, 2016
Richard Hughes, commenting on my recent post, has pointed out his interest in rewriting GXml in C in order to avoid the Vala dependency. Quoting his words:
[…] Being honest; I’m not going to be depending on a vala library any time soon as I have to maintain all my stuff for the next decade and Vala isn’t an ideal long-term support option at all. […]
Is Vala development a waste of time? Is Vala suitable for long-term-support libraries?
In GNOME, core technologies have been written in C, with high-level bindings in languages like Python and, as an intermediate solution, Vala; both approaches rely heavily on GObject Introspection.
Vala has a sweet syntax that makes writing a C API and GLib-based C objects really productive, while avoiding any overhead.
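As a minimal sketch of that syntax (a made-up example class, not taken from GXml), a GObject-derived class with a property takes only a few lines of Vala, while the compiler emits all of the usual GObject C boilerplate:

```vala
// A made-up example; valac expands this into plain GObject C:
// type registration, property installation, and accessor functions.
public class Person : GLib.Object {
    public string name { get; construct set; }

    public Person (string name) {
        Object (name: name);
    }
}
```

The equivalent hand-written C would need a `G_DECLARE`/`G_DEFINE` pair, a `set_property`/`get_property` vfunc implementation, and explicit `g_object_class_install_property` calls.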
While GLib provides an object-oriented approach through GObject, it is still C: you have to write C statements all the time, even for the most common tasks. On the other hand, code written directly in C can be heavily optimized for memory and speed.
GObject Introspection makes bindings to different languages available on release day for C libraries, and likewise for Vala ones: Vala generates the required GIR files directly at compilation time.
Vala generates C code. That code is, of course, neither perfect nor reproducible, and may never be. This is a hard issue for libraries providing interfaces: the compiler's algorithm, I think, uses hash tables to store some parsed code representation, making it impossible to generate reproducible code each time, because a function's position in the generated C can change, producing ABI-incompatible sources. This never changes the C behavior or security, though.
A set of fixes was recently introduced in Vala to avoid these interface issues; they are used by libgee, for example. This is the path for GXml to follow if it stays written in Vala.
Reproducible C code generation may require changing a lot of things inside the Vala compiler.
GLib and GObject use macros to reduce hand-written C code; Vala coders use the Vala compiler because C development can be slow and error-prone, and because in C you have to manage memory by hand too.
If Vala is not going to be maintained, because a set of core developers prefer to use C, and GNOME Builder is not going to take the time to improve its Vala support, because C is the way to go for long-term-support projects, then maybe we should stop using Vala for libraries. Should we?
Improve Vala C code generation.
While C has macros to reduce the burden of writing code by hand, there is still room for macros written in Vala to do the same. This is a really futuristic, wish-list item.
Vala syntax made it possible to write GXml with the features it has today. The serialization framework was possible once a GObject-based API (in Vala) was available, and DOM4 was possible because we were able to copy and paste the specification's declarations, making just a few changes for Vala syntax.
Vala provides a highly productive C code generator with a lot of sweet syntax for the most common activities, like string manipulation, along with automatic memory management using GLib's safe methods, and so on.
Is it possible to improve the Vala compiler to create more readable C code, suitable for switching from Vala to C?
Is it possible to embed Vala code in C, or C in Vala code?
We need to explore how C code generated from Vala can be imported *as is* into GLib and improved over time, removing temporary variables and rewriting some code into a more C-ish style according to the target project's code standard. This task may be a little hard for GXml, because it depends on libgee, another Vala library; but it's not impossible.
C limits expanded with Vala
C has a lot of applications, and Vala is pushing C's limits with generics: libgee uses generics along with a set of interfaces and implementations, allowing us to use different kinds of collections for our objects. This powerful combination of GInterface and Vala generics is a feature that is hard to reproduce (in the easy and convenient way Vala syntax offers) in a plain C API.
GObject and GInterface have a set of limits that Vala expands too. It is now possible to define a set of interfaces and implementations very quickly, and to figure out their relationships really fast, thanks to Vala syntax.
These advantages, among others out of scope (and out of my mind), make me ask whether we can embed Vala code in C or C in Vala. Once a feature is mature enough in Vala, maybe someone could (with optimized C code generation) import it into other core libraries like GLib. Again, this is on my wish list.
Vala as a Long Term supported language
Vala syntax provides a lot of advantages over hand-written GObject classes, in the spirit of productivity.
GXml provides both C and Vala APIs. Vala is easier to use and allows implementing object-oriented specifications (maybe ones based on the idea of Java classes) while still providing a C API in return. Vala code has a more object-oriented syntax than C/GObject.
So how can we continue to contribute Vala libraries usable by anyone, suitable even for projects written in C?
How can we make a Vala library, like GXml, a GNOME core component with long-term support?
There are plenty of bugs, and room for improvement, in the Vala compiler's C code generation; we, its users, should care about this and maybe pay for improvements. I would really like someone to add Jürg Billeter, Rico Tzschichholz or Matthias Berndt to the GNOME Foundation's “Adopt a Hacker” list, in order to push at least one of these ideas to improve GNOME's infrastructure.
Every year, I introduce Fedora to new students at Brno Technical University. There are approx. 500 of them, and a sizable number of them then install Fedora. We also organize a sort of installfest one week after the presentation, where anyone who has had any difficulties with Fedora can come and ask for help. It's a great opportunity to observe what things new users struggle with the most, especially when you have such a high number of new users. What are my observations this year?
- I always ask how many people have experience with Linux (I narrow it down to GNU/Linux distributions excluding things like Android). A couple of years ago, only 25-30% of students raised their hands. This year, it was roughly 75% which is a significant increase. It seems like high school students interested in IT are more familiar with Linux than ever before.
- Linux users tend to have strong opinions about desktops (too thick or thin title bars, too light or dark theme, no minimize button, etc.), but new users coming from Windows and macOS don't care that much. We give students Fedora Workstation with GNOME and receive almost no complaints about the desktop from them, and literally zero questions about how to switch to another desktop.
- The most frequent question we receive is why they have multiple Fedora entries in GRUB. Like many other distributions, Fedora keeps the last three kernels and lets you boot any of them via entries in GRUB. When you install Fedora, there is just one entry, but with kernel updates you get a second and then a third. And new users are completely puzzled by that. One guy came and told us: “I’ve got two Fedora entries in the menu, I’m afraid I’ve installed the OS twice accidentally, can you help me remove the second instance?” Hiding the menu is not a solution, because most students dual-boot with Windows and switching between OSes is a common use case for them. But we should definitely compress the Fedora entries into one somehow.
- The evergreen hardware-support issue is discrete graphics cards. They're still not natively supported by Linux, and you can find them in most laptops these days; students' laptops are no exception. So this is currently the most frequent hardware-support problem we hit installing Fedora. Someone brought a Dell Inspiron 15 7000 series where Fedora didn't boot at all (other distributions fail on this model, too).
- Another common problem is Broadcom wireless cards. It's easy to solve if you know what to do and have a wired connection, but some laptops don't even have Ethernet sockets any more. With one laptop, we ended up connecting to WiFi via a phone and tethering it to the laptop via a microUSB-USB cable.
- Installation of Fedora is simple: a couple of clicks, several minutes, and you're done. But only if everything goes ideally. Anaconda handles the typical “installing Fedora next to Windows” scenario well, but there was a student who had a relatively new Lenovo laptop with MBR and 4 primary partitions (MBR in 2016?!), which effectively prevents you from installing anything on the disk unless you want to lose a Windows recovery partition, because MBR can't handle more than 4 primary partitions. Someone else had a dual boot of Windows and Hackintosh, which is also in “not-so-easy” waters. It shows how difficult a life Linux installer developers have: you can cover the most common scenarios, but you can't cover all the possible combinations laptop vendors or, later, users can create on disks.
- We've also come to the conclusion that it's OK to admit that the Linux hardware support for a laptop model is not good enough and offer the student an installation in a virtual machine on Windows. You can sometimes manage to get it working natively, but you know it's likely to fall apart with the next update of the kernel or whatever. Then it's more responsible to recommend virtualization to the student.
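The GRUB point above can at least be mitigated through configuration. As a hedged sketch (a workaround, not the real fix of merging the entries): grub2-mkconfig honors a GRUB_DISABLE_SUBMENU setting in /etc/default/grub, and when it is not forced to true, the older kernels are tucked into a single “Advanced options” submenu, so only one Fedora entry shows at the top level:

```
# /etc/default/grub (excerpt): group older kernel entries in a submenu
# instead of listing every installed kernel at the top level.
GRUB_DISABLE_SUBMENU=false
# Then regenerate the menu, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```

The number of kernels kept (and thus the number of entries that accumulate) is itself controlled by the `installonly_limit` option in /etc/dnf/dnf.conf, which defaults to 3 on Fedora.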
Due to unexpectedly low international attendance numbers this year, we have decided to scale down the 2016 GNOME Summit from three whole days to one day, most likely the afternoon of Saturday the 8th. I have updated the wiki page accordingly; some small changes might still occur on that page as plans settle down, but that should be the gist of it. With a change of attendees comes a change in the nature of the event: instead of being an extremely technical, “deep end of the pool” event as it has been in the past, this edition will be focused on newcomers and potential contributors, with presentations and workshops targeted for this purpose. We might even speak French or Python!
September 29, 2016
A couple of weeks ago, I released gudev-rs, Rust wrappers for gudev. The goal was to be able to receive events from udev into a Gtk application written in Rust. I had a need for it, so I made it and shared it.
It is mostly auto-generated using gir-rs from the gtk-rs project. The license is MIT.
If you have problems, suggestions, or patches, please feel free to submit them.
Today, the Calendar Team had its first meeting ever. Isaque, Lapo, Renata, Vamsi and I attended, and the meeting was extremely productive! In fact, we were able to sketch out the general direction that GNOME Calendar will head in.
What’ll be added
Let’s start by talking about the additions that Calendar will receive.
A New Sidebar
Our design samurai came up with this idea, and it was immediately welcomed. We want to do a simpler, less cluttered version of this:
The search would behave just like in the redesigned GNOME Control Center, and in fact, using a sidebar fixes many issues we currently have with GNOME Calendar, like the search popover and random glitches all around.
The sidebar will be hideable, just like Nautilus’ sidebar, so people can have their old Calendar again, behaving just like it used to. Stay tuned.
Week View
Vamsi has been working on the Week view for the past few months. It’s harder than it looks! Still, he’s working on it, and we plan to have the Week view by the end of this year. The current state is described in his blog post, and here’s how it actually looks (be aware – IT’S NOT FINISHED):
There’s still a lot of work to do, but we’ll get there eventually.
Attendees
This is something I myself use frequently, but cannot do with Calendar. Obviously, this will be integrated with GNOME Contacts and will support mailing the attendees.
What’ll change
There are some pain points that we want to change in GNOME Calendar. Thanks to Renata, we have a much clearer picture about the big bottlenecks of Calendar now: adding new calendars and promoting Online Accounts.
Calendar Management Workflow
Based on Renata’s usability testing results, we’ll improve the way people manage calendars by changing the workflow. We’ll use this opportunity to finally implement the initial setup wizard, for which we have had mockups for quite some time now:

We’ll also turn the calendar management dialog into a wizard-like dialog, and make it more pleasant to work with.
Year View
The current Year View is not in the greatest shape possible. The single biggest issue right now is that the months in Year view are different from the months in Calendar (and GNOME Shell):

Fortunately, Isaque has a deep understanding of the Year view, and we’ll turn Year view into a GtkFlowBox-based widget, and share the same month widget (and behavior) all around. Exciting times ahead!
What’ll be removed
Nothing (just a bait for the trolls :P)
Conclusion
Of course, I didn’t cover every single thing discussed in the meeting. Obvious things like recurrences, more reminders, natural language support, jump-to-date and many others aren’t worth detailing here – they’ll be added. Period.
In general, I’m very happy that this meeting happened. There is a team of contributors growing around Calendar, and this is awesome! Remember that the very first post on this blog was about the Calendar revival? It has come a long way from a dead app to a serious core project. A big thanks to this awesome team that is putting time and effort into making Calendar a better application – without you guys, there wouldn’t be a Calendar!
Excited? Join us in making Calendar great. Get involved, ping us on the #gnome-calendar IRC channel, file bugs, and/or document things. Every contribution is endlessly appreciated.
Peace!
September 28, 2016
I can try to do more coding, more code reviewing, revive design discussions… that’s cool, yet never enough. GIMP needs more people: developers, designers, community people, writers for the website or the documentation, tutorial makers… everyone is welcome in my grand scheme!
Many of my actions lately have been towards gathering more people, so when I heard about the GNOME newcomers initiative during GUADEC, I thought that could be a good fit. Thus a few days ago, I had GIMP added to the list of newcomer-friendly GNOME projects, with me as the newcomers mentor. I’ll take this occasion to remind you of all the ways you can contribute to GIMP, and not necessarily as a developer.
Coding for GIMP
GIMP is not your random small project. It is a huge project, with too much code for any sane person to know it all. It is used by dozens of thousands of people: Linux users of course, but also on Windows, OSX, BSDs… A flagship for Free Software, some would say. So clearly, coding for GIMP can be scary and exciting at the same time. It won’t be the same as contributing to most smaller programs. But we are lucky: GIMP has very sane, good-quality code. Now let’s be clear: we have a lot of crappy pieces of code here and there, some untouched for years, some we hate to touch but have to sometimes. That will happen with any project this size. But overall, I really enjoy the quality of the code, and it makes coding in GIMP a lot more enjoyable than in some less-cared-for projects I have had to hack on in my life. This is also thanks to the maintainer, Mitch, who will bore you with syntax, spaces and tabs, but also with his deep knowledge of GIMP’s architecture. And I love this.
On the other hand, it also means that getting your patch into GIMP can be a little more complicated than in some other projects. I have seen a lot of projects which would accept patches in any state as long as they do more or less what they say they do. But nope, not in GIMP. Your patch has to work, of course, but it also has to meet strict code-quality standards, syntax-wise but also architecture-wise. And if your code touches the public API or the GUI, be ready for some lengthy discussions. But it is all worth it. Whether you are looking to improve an already awesome piece of software, add lines to your résumé, deepen your knowledge or experience of programming, or simply learn, you will get something meaningful out of it. GIMP is not your random project, and you will have reasons to be proud to be part of it.
How to choose a first bug?
Interested already? Have a look at the bugs that we think are a good fit for newcomers! But don’t feel obligated to start there. If you use GIMP and are annoyed by specific bugs or issues, those may well be a much better entry point. Personally, I never fixed a random bug as a first patch: every single first patch I made for Free Software was for an issue I had experienced myself. And that’s even more rewarding!
Oh, and if you happen to be a Windows or OSX developer, you will have an even bigger collection of bugs to look into. We are in even greater need of developers on non-Linux platforms, which means we have a lot more bugs there, but it also means a good half of them are probably easy to handle even for new developers.
Finally, crashes and bugs which output warnings are often pretty easy to tackle, since you can usually investigate them directly in a debugger (gdb, for instance), which is also a good tool to learn if you have never used one. Bugs related to a graphical element, especially one with text, are a good fit for new developers too, since you can simply grep for the text to find your way through the code.
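To make the grep approach concrete, here is a hypothetical sketch; the directory layout, file name, and UI string below are invented for illustration and merely stand in for a real GIMP source checkout:

```shell
# Build a tiny stand-in source tree (illustrative only, not real GIMP files).
mkdir -p demo-src/app/dialogs
cat > demo-src/app/dialogs/about-dialog.c <<'EOF'
/* stub standing in for a real source file */
label = gtk_label_new ("About GIMP");
EOF

# -r recurses through the tree, -n prints line numbers so you can jump
# straight to the matching spot in your editor.
grep -rn "About GIMP" demo-src/
```

In a real checkout you would run the same grep from the top of the GIMP source tree, then open the reported file at the reported line and work outwards from there.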
Infrastructure
Now there are whole other areas where you could contribute. These are unloved, less visible areas, which is sad, and I wish to change this. One of them is infrastructure! GIMP, like many big projects, has a website, build and continuous integration servers, wikis, mailing lists… These are time-consuming to run and have few contributors.
So we definitely welcome administrators. Our continuous integration regularly encounters issues. As we speak, the build fails, not because of GIMP, but because the minimum requirements for our development environment are not met. At times, we have had a failing continuous integration for months. The problem is simple: we need more contributors to share the workload. Currently Sam Gleske is our only server administrator, and as a volunteer, he has only limited time. We want to step up to the next level with new people to co-administrate the servers!
Writers
While we recently got a new website (thanks especially to Patrick David!) and more frequent news (here I feel we have to cite Alexandre Prokoudine too), we’d still welcome new hands. They could be yours!
We need documentation for the coming GIMP 2.10 release, but also really good-quality tutorials under Free/Libre licenses. The state of our tutorials on gimp.org was pretty sad before the new website, to say the least. Well, now that section is pretty much empty.
Of course, translations are a constant need too. GIMP is not doing too badly here, but if that’s what you like, we could do even better! For this, you will want to contact the GNOME translation team for your target language directly.
Designers
And finally, my pet project: I repeat this often, but I think a lot of GIMP’s workflow would benefit from a designer’s eye. If you are a UX designer and interested, you are welcome on the team too!
So here it is: all the things you could do with us. Don’t be scared. Don’t be a stranger. Instead of being this awesome project you use, it could be your awesome project. Make GIMP!
So our latest and greatest Endless OS is out with the new 3.0 version series!
The shiny new things include the use of Flatpak to manage applications, a new app center (GNOME Software), a new icon set, a new Windows installer that lets you install Endless OS in a dual-boot setup, and many bug fixes.
Apps, apps, apps!
One of the big changes is the replacement of our old (and in-house) App Store by GNOME Software — the GNOME app center. Most of my time over the past months has been spent adapting this project to our needs. GNOME Software is surely a complex beast, but I have had the invaluable help of its maintainer — Richard Hughes — to whom I now owe many Weißbiere.
Last week I gave a talk at the first edition of the Libre Application Summit in Portland about the work we’re doing on the applications story in Endless OS: the evolution of applications in the OS, the motivation behind some decisions, the changes we made to GNOME Software, etc. A video and slides should be up on the internetz soon if you want to know about that in more detail.
Join the future
The changes in this new 3.0 version may not seem like such a big deal on the surface, but everybody worked really hard to make them happen, and they open up a lot of possibilities for our users and developers. We’re betting big on Flatpak, and we want to see it succeed: not only Endless but pretty much every Linux desktop user would benefit from it. So if you’re an app developer, check it out and talk to the community if you need some help. We’re also still hiring, in case you are looking for new challenges.
Be sure to try the Endless OS and drop your thoughts or questions in our Community Forum.
September 27, 2016
And as they say about dog food, you should eat your own…
Tomorrow I'm going to a conference, and what better way to plan the trip than using your own stuff :-)
Here I have entered the latest time at which I want to arrive, along with the start and destination, so I've gotten some alternatives. Notice how the trips are now ordered by arrival time, in reverse chronological order (so to speak). This was a behavior that one of Andreas' test panel persons commented on: “This is different from Google, but I actually like it better this way!”
I have now selected a trip that is among the fastest (and has fewer changes).
Clicking on the little expand revealer button on a leg of the trip shows the intermediate stops the train makes on the way. This is especially useful when you're a bit unsure where to get off, so you can make a note of the stop just before yours (though in this case I already know this well).
And clicking on the last section of walking to the destination will zoom in to this part of the map.
Now I used the export functionality to save the map view down to my phone (currently there's a little bug in libchamplain where the path layers aren't exported along with it, but thankfully Marius has made a patch for it).
Speaking of exporting, since last time I have also made the print button only show up when selecting a particular trip.
However, currently the print layout implementation for this mode is only stubbed out and just shows a header with the start and destination names.
Hopefully Andreas will get around to cooking up some nice mockups for that soon, with the same good quality as the ones used for rendering the sidebar itinerary views.
This will also be useful, either to make an actual paper printout, or to export to e.g. PDF to view on a mobile device.
And that's pretty much everything that comes to mind of what has happened since the last blog post on this subject back in June.