Steven Deobald

@steven

Donate Less

We have a new donation page. But before you go there, I would like to impress upon you this idea:

We would vastly prefer you donate $10/mo for one year ($120 total) than $200 in one lump sum. That’s counter-intuitive, so let me explain.

First of all, cash flow matters just as much to a non-profit as it does to a corporation. If a business only saw revenue once or twice a year — say, in the form of $300,000 cheques — it would need to be very careful with expenses, for fear of one of those cheques disappearing.

And so it is with non-profits. A non-profit built on chasing grants and begging for large cheques is inherently fragile. Financial planning that is based on big, irregular revenue sources is bound to fail sooner or later. Conversely, financial planning based on monthly recurring revenue trends close to reality. The organization is more stable as a result.

Second, if your monthly donation is negligible, you probably won’t worry too much about whether you keep your donation going or not. Maybe you decide $10/mo is such a small number (the price of two coffees in any country where I’ve lived over the past 15 years) that you’re happy to keep on donating at the end of one year? Great! If not? No hard feelings. The consistency still made the $120 figure extremely valuable to us.

We want your donation to represent a number that is very comfortable for you. Personally, I make two larger (for me) donations every month. Out of my bank account in India, I donate $50/mo to a charity in Sikkim. Out of my Canadian bank account, I donate $100/mo to a charity in Nova Scotia. I have enough money available in both countries that these donations will not run out before I die. These are the two most important charities I donate to… but I’m not putting myself at risk by donating to them.

If you value GNOME, we would appreciate your support. But your comfort is essential. $50/mo is too much? Don’t stretch yourself! $25/mo or $15/mo still makes a massive difference. We’re asking all GNOME users, developers, and fans to consider supporting us in this way. (If you’re not sure if you run GNOME, it’s the default desktop on Ubuntu, Fedora, Debian, and Red Hat.)

GNOME brings incredible value to the world. This is how you ensure that it continues to exist.

 

Donate less.

Arun Raghavan

@arunsr

The Unbearable Anger of Broken Audio

It should be surprising to absolutely nobody that the Linux audio stack is often the subject of varying levels of negative feedback, ranging from drive-by meme snark to apoplectic rage[1].

A lot of what computers are used for today involves audiovisual media in some form or the other, and having that not work can throw a wrench in just going about our day. So it is completely understandable for a person to get frustrated when audio on their device doesn’t work (or maybe worse, stops working for no perceivable reason).

It is also then completely understandable for this person to turn up on Matrix/IRC/Gitlab and make their displeasure known to us in the PipeWire (and previously PulseAudio) community. After all, we’re the maintainers of the part of the audio stack most visible to you.

To add to this, we have two and a half decades’ worth of history in building the modern Linux desktop audio stack, which means there are historical artifacts in the stack (OSS -> ALSA -> ESD/aRTs -> PulseAudio/JACK -> PipeWire). And a lot of historical animus that apparently still needs venting.

In large centralised organisations, there is a support function whose (thankless) job it is to absorb some of that impact before passing it on to the people who are responsible for fixing the problem. In the F/OSS community, sometimes we’re lucky to have folks who step up to help users and triage issues. Usually though, it’s just maintainers managing this.

This has a number of … interesting … impacts for those of us who work in the space. For me this includes:

  1. Developing thick skin
  2. Trying to maintain equanimity while being screamed at
  3. Knowing to step away from the keyboard when that doesn’t work
  4. Repeated reminders that things do work for millions of users every day

So while the causes for the animosity are often sympathetic, this is not a recipe for a healthy community. I try to be judicious while invoking the fd.o Code of Conduct, but thick skin or not, abusive behaviour only results in a toxic community, so there are limits to that.

While I paint a picture of doom and gloom, most recent user feedback and issue reporting in the PipeWire community has been refreshingly positive. Even the trigger for this post is an issue from an extremely belligerent user (who I do sympathise with), who was quickly supplanted by someone else who has been extremely courteous in the face of what is definitely a frustrating experience.

So if I had to ask something of you, dear reader – the next time you’re angry with the maintainers of some free software you depend on, please get some of the venting out of your system in private (tell your friends how terrible we are, or go for a walk maybe), so we can have a reasonable conversation and make things better.

Thank you for reading!


  1. I’m not linking to examples, because that’s not the point of this post. ↩

Michael Meeks

@michael

2025-06-25 Wednesday

  • Catch up with H. with some great degree news, poke at M's data-sets briefly, sync with Dave, Pedro & Asja. Lunch.
  • Published the next strip around the excitement of setting up your own non-profit structure:
    The Open Road to Freedom - strip#23 - A solid foundation
  • Partner sales call.

Michael Meeks

@michael

2025-06-24 Tuesday

  • Tech planning call, sync with Laser, Stephan, catch up with Andras, partner call in the evening. Out for a walk with J. on the race-course in the sun. Catch up with M. now returned home.

Why is my Raspberry Pi 4 too slow as a server?

I self-host services on a beefy server in a datacenter. Every night, Kopia performs a backup of my volumes and sends the result to an s3 bucket in Scaleway's Parisian datacenter.

The VPS is expensive, and I want to move my services to a Raspberry Pi at home. Before actually moving the services, I wanted to see how the Raspberry Pi would handle them with real-life data. To do so, I downloaded kopia on the Raspberry Pi, connected it to my s3 bucket in Scaleway's datacenter, and attempted to restore the data from a snapshot of a 2.8 GB volume.

thib@tinykube:~ $ kopia restore k1669883ce6d009e53352fddeb004a73a
Restoring to local filesystem (/tmp/snapshot-mount/k1669883ce6d009e53352fddeb004a73a) with parallelism=8...
Processed 395567 (3.6 KB) of 401786 (284.4 MB) 13.2 B/s (0.0%) remaining 6000h36m1s.

A restore speed in bytes per second? It would take 6000 hours, that is 250 days, to transfer 2.8 GB from an s3 bucket to the Raspberry Pi in my living room? Put differently, it means I can't restore backups to my Raspberry Pi, making it unfit for production as a homelab server in its current state.

Let's try to understand what happens, and if I can do anything about it.

The set-up

Let's list all the ingredients we have:

  • A beefy VPS (16 vCPU, 48 GB of RAM, 1 TB SSD) in a German datacenter
  • A Raspberry Pi 4 (8 GB of RAM) in my living room, booting from an encrypted NVMe drive to avoid data leaks in case of burglary. That disk is connected to the Raspberry Pi via a USB 3 enclosure.
  • An s3 bucket that the VPS pushes to, and that the Raspberry Pi pulls from
  • A fiber Internet connection for the Raspberry Pi to download data

Where the problem can come from

Two computers and a cloud s3 bucket look like a fairly simple setup, but plenty of things can fail or be slow already! Let's list them and check whether the problem could come from there.

Network could be slow

I have a fiber plan, but maybe my ISP lied to me, or maybe I'm using a poor quality ethernet cable to connect my Raspberry Pi to my router. Let's do a simple test by installing Ookla's speedtest CLI on the Pi.

I can list the nearest servers

thib@tinykube:~ $ speedtest -L
Closest servers:

    ID  Name                           Location             Country
==============================================================================
 67843  Syxpi                          Les Mureaux          France
 67628  LaNetCie                       Paris                France
 63829  EUTELSAT COMMUNICATIONS SA     Paris                France
 62493  ORANGE FRANCE                  Paris                France
 61933  Scaleway                       Paris                France
 27961  KEYYO                          Paris                France
 24130  Sewan                          Paris                France
 28308  Axione                         Paris                France
 52534  Virtual Technologies and Solutions Paris                France
 62035  moji                           Paris                France
 41840  Telerys Communication          Paris                France

Happy surprise, Scaleway, my s3 bucket provider, is among the test servers! Let's give it a go

thib@tinykube:~ $ speedtest -s 61933
[...]
   Speedtest by Ookla

      Server: Scaleway - Paris (id: 61933)
         ISP: Free SAS
Idle Latency:    12.51 ms   (jitter: 0.47ms, low: 12.09ms, high: 12.82ms)
    Download:   932.47 Mbps (data used: 947.9 MB)                                                   
                 34.24 ms   (jitter: 4.57ms, low: 12.09ms, high: 286.97ms)
      Upload:   907.77 Mbps (data used: 869.0 MB)                                                   
                 25.42 ms   (jitter: 1.85ms, low: 12.33ms, high: 40.68ms)
 Packet Loss:     0.0%

With a download speed of 900 Mb/s ≈ 112 MB/s between Scaleway and my Raspberry Pi, it looks like the network is not the core issue.

The s3 provider could have an incident

The speedtest shows that the network itself is not to blame, but I don't know exactly what is being downloaded and from which server. Maybe Scaleway's s3 platform itself has an issue and is slow?

Let's use aws-cli to just pull the data from the bucket without performing any kind of operation on it. Scaleway provides detailed instructions on how to use aws-cli with their services. After following them, I can download a copy of my s3 bucket onto the encrypted disk attached to my Raspberry Pi with

thib@tinykube:~ $ aws s3 sync s3://ergaster-backup/ /tmp/s3 \
    --endpoint-url https://s3.fr-par.scw.cloud 

It downloads at a speed of 1 to 2 MB/s. Very far from what I would expect. It could be tempting to stop here and think Scaleway is unjustly throttling my specific bucket. But more things could actually be happening.

Like most providers, Scaleway has egress fees. In other words, they bill customers who pull data out of their s3 buckets. It means that if I'm going to do extensive testing, I will end up with a significant bill. I've let the sync command finish overnight so I could have a local copy of my bucket on my Raspberry Pi's encrypted disk.

After it's done, I can disconnect kopia from my s3 bucket with

thib@tinykube:~ $ kopia repository disconnect

And I can connect it to the local copy of my bucket with

thib@tinykube:~ $ kopia repository connect filesystem \
    --path=/tmp/s3

Attempting to restore a snapshot gives me the same terrible speed as earlier. Something is up with the restore operation specifically. Let's try to understand what happens.

Kopia could be slow to extract data

Kopia performs incremental, encrypted, compressed backups to a repository. There's a lot of information packed into this single sentence, so let's break it down.

How kopia does backups

When performing a first snapshot of a directory, Kopia doesn't just upload files as it finds them. Instead it splits the files into small chunks, all roughly the same size on average. It computes a hash for each of them, which serves as a unique identifier. It records in an index table which chunk (identified by its hash) belongs to which file in which snapshot. And finally, it compresses, encrypts, and uploads the chunks to the repository.

When performing a second snapshot, instead of just uploading all the files again, kopia performs the same file-splitting operation. It hashes each chunk again and looks up in the index table whether the hash is already present. If it is, the corresponding chunk has already been backed up and doesn't need to be re-uploaded. If not, kopia writes the hash to the table, compresses and encrypts the new chunk, and sends it to the repository.

Splitting the files and computing a hash for the chunks allows kopia to only send the data that has changed, even in large files, instead of uploading whole directories.

The algorithm to split the files in small chunks is called a splitter. The algorithm to compute a hash for each chunk is called... a hash.

Kopia supports several splitters, hash algorithms, encryption algorithms, and compression algorithms. Different processors have different optimizations and will perform better or worse with each of them, which is why kopia lets you pick.
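Those choices can be benchmarked too. As a rough sketch (the subcommand names and flags can vary between kopia versions, and the data file path is a placeholder), one could compare splitters and compression codecs directly on the Pi:

kopia benchmark splitter
kopia benchmark compression --data-file=/path/to/a/representative/file

What matters is running these on the machine that will actually do the heavy lifting, as the rest of this post illustrates.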

The splitter, hash and encryption algorithms are defined per repository, when the repository is created, and cannot be changed afterwards. After connecting to a repository, the splitter and hash can be determined with

thib@tinykube:~ $ kopia repository status
Config file:         /home/thib/.config/kopia/repository.config

Description:         Repository in Filesystem: /tmp/kopia
Hostname:            tinykube
Username:            thib
Read-only:           false
Format blob cache:   15m0s

Storage type:        filesystem
Storage capacity:    1 TB
Storage available:   687.5 GB
Storage config:      {
                       "path": "/tmp/kopia",
                       "fileMode": 384,
                       "dirMode": 448,
                       "dirShards": null
                     }

Unique ID:           e1cf6b0c746b932a0d9b7398744968a14456073c857e7c2f2ca12b3ea036d33e
Hash:                BLAKE2B-256-128
Encryption:          AES256-GCM-HMAC-SHA256
Splitter:            DYNAMIC-4M-BUZHASH
Format version:      2
Content compression: true
Password changes:    true
Max pack length:     21 MB
Index Format:        v2

Epoch Manager:       enabled
Current Epoch: 465

Epoch refresh frequency: 20m0s
Epoch advance on:        20 blobs or 10.5 MB, minimum 24h0m0s
Epoch cleanup margin:    4h0m0s
Epoch checkpoint every:  7 epochs

The compression algorithm is defined by a kopia policy. By default kopia doesn't apply any compression.
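For reference, compression can be enabled per source through a policy. A hedged example (the exact algorithm names depend on the kopia version, and /data is a placeholder path):

kopia policy set /data --compression=zstd

Since compression adds yet another CPU-bound step during backup and restore, leaving it off is probably the right call for a Raspberry Pi anyway.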

How kopia restores data

When kopia is instructed to restore data from a snapshot, it looks up the index table to figure out what chunks it must retrieve. It decrypts them, then decompresses them if they were compressed, and appends the relevant chunks together to reconstruct the files.

Kopia doesn't rely on the splitter and hash algorithms when performing a restore, but it relies on the encryption and compression ones.

Figuring out the theoretical speed

Kopia has built-in benchmarks to let you figure out the best hash and encryption algorithms to use for your machine. I'm trying to understand why the restore operation is slow, so I only need to know what I can expect from the encryption algorithms.

thib@tinykube:~ $ kopia benchmark encryption
Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1)
Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1)
     Encryption                     Throughput
-----------------------------------------------------------------
  0. CHACHA20-POLY1305-HMAC-SHA256  173.3 MB / second
  1. AES256-GCM-HMAC-SHA256         27.6 MB / second
-----------------------------------------------------------------
Fastest option for this machine is: --encryption=CHACHA20-POLY1305-HMAC-SHA256

The Raspberry Pi is notorious for not being great with encryption algorithms. The kopia repository was created from my VPS, a machine that does much better with AES. Running the same benchmark on my VPS gives very different results.

[thib@ergaster ~]$ kopia benchmark encryption
Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1)
Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1)
     Encryption                     Throughput
-----------------------------------------------------------------
  0. AES256-GCM-HMAC-SHA256         2.1 GB / second
  1. CHACHA20-POLY1305-HMAC-SHA256  699.1 MB / second
-----------------------------------------------------------------
Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256

Given that the repository I'm trying to restore from does not use compression and that it uses the AES256 encryption algorithm, I should expect a restore speed of 27.6 MB/s on the Raspberry Pi. So why is the restore so slow? Let's keep chasing the performance bottleneck.

The disk could be slow

The hardware

The Raspberry Pi is a brave little machine, but it was obviously not designed as a home lab server. The SD cards it usually boots from are notorious for being fragile and not supporting I/O-intensive operations.

A common solution is to make the Raspberry Pi boot from an SSD. But to connect this kind of disk to the Raspberry Pi 4, you need a USB enclosure. I bought a Kingston SNV3S/1000G NVMe drive, which can supposedly read and write at 6 GB/s and 5 GB/s respectively. I put that drive in an ICY BOX IB-1817M-C31 enclosure, with a maximum theoretical speed of 1000 MB/s.

According to this thread on the Raspberry Pi forums, the USB controller of the Pi has a bandwidth of 4Gb/s ≈ 512 MB/s (and not 4 GB/s as I initially wrote. Thanks baobun on hackernews for pointing out my mistake!) shared across all 4 ports. Since I only plug my disk there, it should get all the bandwidth.

So the limiting factor is the USB controller of the Raspberry Pi, which should still give me about 512 MB/s, although baobun on hackernews also pointed out that the USB controller on the Pi might share a bus with the network card.
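One quick sanity check (not part of my original test run, just a suggestion) is to confirm the enclosure actually negotiated a USB 3 link rather than falling back to USB 2:

lsusb -t    # the disk's line should report 5000M (USB 3) rather than 480M (USB 2)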

Let's see how close to the reality that is.

Disk sequential read speed

First, let's try with a gentle sequential read test to see how well it performs in ideal conditions.

thib@tinykube:~ $ fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][11.5%][r=144MiB/s][r=144 IOPS][eta 00m:54s]
Jobs: 1 (f=1): [R(1)][19.7%][r=127MiB/s][r=126 IOPS][eta 00m:49s] 
Jobs: 1 (f=1): [R(1)][27.9%][r=151MiB/s][r=151 IOPS][eta 00m:44s] 
Jobs: 1 (f=1): [R(1)][36.1%][r=100MiB/s][r=100 IOPS][eta 00m:39s] 
Jobs: 1 (f=1): [R(1)][44.3%][r=111MiB/s][r=111 IOPS][eta 00m:34s] 
Jobs: 1 (f=1): [R(1)][53.3%][r=106MiB/s][r=105 IOPS][eta 00m:28s] 
Jobs: 1 (f=1): [R(1)][61.7%][r=87.1MiB/s][r=87 IOPS][eta 00m:23s] 
Jobs: 1 (f=1): [R(1)][70.0%][r=99.9MiB/s][r=99 IOPS][eta 00m:18s] 
Jobs: 1 (f=1): [R(1)][78.3%][r=121MiB/s][r=121 IOPS][eta 00m:13s] 
Jobs: 1 (f=1): [R(1)][86.7%][r=96.0MiB/s][r=96 IOPS][eta 00m:08s] 
Jobs: 1 (f=1): [R(1)][95.0%][r=67.1MiB/s][r=67 IOPS][eta 00m:03s] 
Jobs: 1 (f=1): [R(1)][65.6%][r=60.8MiB/s][r=60 IOPS][eta 00m:32s] 
TEST: (groupid=0, jobs=1): err= 0: pid=3666160: Thu Jun 12 20:14:33 2025
  read: IOPS=111, BW=112MiB/s (117MB/s)(6739MiB/60411msec)
    slat (usec): min=133, max=41797, avg=3396.01, stdev=3580.27
    clat (msec): min=12, max=1061, avg=281.85, stdev=140.49
     lat (msec): min=14, max=1065, avg=285.25, stdev=140.68
    clat percentiles (msec):
     |  1.00th=[   41],  5.00th=[   86], 10.00th=[  130], 20.00th=[  171],
     | 30.00th=[  218], 40.00th=[  245], 50.00th=[  271], 60.00th=[  296],
     | 70.00th=[  317], 80.00th=[  355], 90.00th=[  435], 95.00th=[  550],
     | 99.00th=[  793], 99.50th=[  835], 99.90th=[  969], 99.95th=[ 1020],
     | 99.99th=[ 1062]
   bw (  KiB/s): min=44521, max=253445, per=99.92%, avg=114140.83, stdev=31674.32, samples=120
   iops        : min=   43, max=  247, avg=111.18, stdev=30.90, samples=120
  lat (msec)   : 20=0.07%, 50=1.69%, 100=4.94%, 250=35.79%, 500=50.73%
  lat (msec)   : 750=5.24%, 1000=1.45%, 2000=0.07%
  cpu          : usr=0.66%, sys=21.39%, ctx=7650, majf=0, minf=8218
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=98.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=6739,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=6739MiB (7066MB), run=60411-60411msec

Disk stats (read/write):
    dm-0: ios=53805/810, merge=0/0, ticks=14465332/135892, in_queue=14601224, util=100.00%, aggrios=13485/943, aggrmerge=40434/93, aggrticks=84110/2140, aggrin_queue=86349, aggrutil=36.32%
  sda: ios=13485/943, merge=40434/93, ticks=84110/2140, in_queue=86349, util=36.32%

So I can read from my disk at 117 MB/s. We're far from the theoretical 1000 MB/s. One thing is interesting here: the read performance seems to decrease over time. Running the same test again with htop to monitor what happens, I see something even more surprising. Not only does the speed remain low, but all four CPUs are pegged.

So when performing a disk read test, the CPU is at maximum capacity, with a wait metric of about 0%. So the CPU is not waiting for the disk. Why would my CPU go crazy when just reading from disk? Oh. Oh no. The Raspberry Pi performs poorly with encryption, and I am trying to read from an encrypted drive. This is why, even with this simple read test, my CPU is the bottleneck.
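A quick way to put numbers on that hypothesis, assuming the drive is encrypted with LUKS/dm-crypt (the usual setup for an encrypted boot drive, though I won't swear that's what every reader uses), is cryptsetup's built-in benchmark, which reports the raw cipher throughput the CPU can sustain in memory:

sudo cryptsetup benchmark    # look at the aes-xts lines: on a Pi 4, which lacks AES instructions, they are low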

Disk random read/write speed

Let's run the test that this wiki describes as "will show the absolute worst I/O performance you can expect."

thib@tinykube:~ $ fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting 
[...]
Run status group 0 (all jobs):
   READ: bw=6167KiB/s (6315kB/s), 6167KiB/s-6167KiB/s (6315kB/s-6315kB/s), io=361MiB (379MB), run=60010-60010msec
  WRITE: bw=6167KiB/s (6315kB/s), 6167KiB/s-6167KiB/s (6315kB/s-6315kB/s), io=361MiB (379MB), run=60010-60010msec

Disk stats (read/write):
    dm-0: ios=92343/185391, merge=0/0, ticks=90656/620960, in_queue=711616, util=95.25%, aggrios=92527/182570, aggrmerge=0/3625, aggrticks=65580/207873, aggrin_queue=319891, aggrutil=55.65%
  sda: ios=92527/182570, merge=0/3625, ticks=65580/207873, in_queue=319891, util=55.65%

In the worst conditions, I can expect a read and write speed of 6 MB/s each.

The situation must be even worse when trying to restore my backups with kopia: I read an encrypted repository from an encrypted disk and try to write data on the same encrypted disk. Let's open htop and perform a kopia restore to confirm that the CPU is blocking, and that I'm not waiting for my disk.

htop seems to confirm that intuition: it looks like the bottleneck when trying to restore a kopia backup on my Raspberry Pi is its CPU.

Let's test with an unencrypted disk to see if that hypothesis holds. I should expect higher restore speeds because the CPU will not be busy decrypting/encrypting data to disk, but it will still be busy decrypting data from the kopia repository.

Testing it all

I've flashed a clean Raspberry Pi OS Lite image onto an SD card, and booted from it. Using fdisk and mkfs.ext4, I can format the encrypted drive the Raspberry Pi was previously booting from into a clean, unencrypted drive.
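Roughly, and assuming the drive shows up as /dev/sda as it does in the fio output earlier, the reformatting boils down to something like this (wipefs is my addition to clear the old signatures; the post only relies on fdisk and mkfs.ext4):

sudo wipefs -a /dev/sda      # wipe the old partition table and LUKS signatures
sudo fdisk /dev/sda          # interactively create a single new Linux partition
sudo mkfs.ext4 /dev/sda1     # format it as plain, unencrypted ext4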

I then create a mount point for the disk, mount it, and change the ownership to my user thib.

thib@tinykube:~ $ sudo mkdir /mnt/icy
thib@tinykube:~ $ sudo mount /dev/sda1 /mnt/icy
thib@tinykube:~ $ sudo chown -R thib:thib /mnt/icy

I can now perform my tests, not forgetting to change the --filename parameter to /mnt/icy/temp.file so the benchmark is performed on the disk and not on the SD card.

Unencrypted disk performance

Sequential read speed

I can then run the sequential read test from the mounted disk

thib@tinykube:~ $ fio --name TEST --eta-newline=5s --filename=/mnt/icy/temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [R(1)][19.4%][r=333MiB/s][r=333 IOPS][eta 00m:29s]
Jobs: 1 (f=1): [R(1)][36.4%][r=333MiB/s][r=332 IOPS][eta 00m:21s] 
Jobs: 1 (f=1): [R(1)][53.1%][r=333MiB/s][r=332 IOPS][eta 00m:15s] 
Jobs: 1 (f=1): [R(1)][68.8%][r=333MiB/s][r=332 IOPS][eta 00m:10s] 
Jobs: 1 (f=1): [R(1)][87.1%][r=332MiB/s][r=332 IOPS][eta 00m:04s] 
Jobs: 1 (f=1): [R(1)][100.0%][r=334MiB/s][r=333 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=14807: Sun Jun 15 11:58:14 2025
  read: IOPS=333, BW=333MiB/s (349MB/s)(10.0GiB/30733msec)
    slat (usec): min=83, max=56105, avg=2967.97, stdev=10294.97
    clat (msec): min=28, max=144, avg=92.78, stdev=16.27
     lat (msec): min=30, max=180, avg=95.75, stdev=18.44
    clat percentiles (msec):
     |  1.00th=[   71],  5.00th=[   78], 10.00th=[   80], 20.00th=[   83],
     | 30.00th=[   86], 40.00th=[   88], 50.00th=[   88], 60.00th=[   90],
     | 70.00th=[   93], 80.00th=[   97], 90.00th=[  126], 95.00th=[  131],
     | 99.00th=[  140], 99.50th=[  142], 99.90th=[  144], 99.95th=[  144],
     | 99.99th=[  144]
   bw (  KiB/s): min=321536, max=363816, per=99.96%, avg=341063.31, stdev=14666.91, samples=61
   iops        : min=  314, max=  355, avg=333.02, stdev=14.31, samples=61
  lat (msec)   : 50=0.61%, 100=83.42%, 250=15.98%
  cpu          : usr=0.31%, sys=18.80%, ctx=1173, majf=0, minf=8218
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=333MiB/s (349MB/s), 333MiB/s-333MiB/s (349MB/s-349MB/s), io=10.0GiB (10.7GB), run=30733-30733msec

Disk stats (read/write):
  sda: ios=20359/2, merge=0/1, ticks=1622783/170, in_queue=1622998, util=82.13%

I can read from that disk at a speed of about 350 MB/s. Looking at htop while the read test is being performed paints a very different picture compared to when the drive was encrypted

I can see that the CPU is not very busy, and the wait time is well above 10%. Unsurprisingly this time, when testing the maximum read capacity of the disk, the bottleneck is the disk.

Sequential write speed

thib@tinykube:~ $ fio --name TEST --eta-newline=5s --filename=/mnt/icy/temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [W(1)][12.5%][w=319MiB/s][w=318 IOPS][eta 00m:49s]
Jobs: 1 (f=1): [W(1)][28.6%][w=319MiB/s][w=318 IOPS][eta 00m:30s] 
Jobs: 1 (f=1): [W(1)][44.7%][w=319MiB/s][w=318 IOPS][eta 00m:21s] 
Jobs: 1 (f=1): [W(1)][59.5%][w=319MiB/s][w=318 IOPS][eta 00m:15s] 
Jobs: 1 (f=1): [W(1)][75.0%][w=318MiB/s][w=318 IOPS][eta 00m:09s] 
Jobs: 1 (f=1): [W(1)][91.4%][w=320MiB/s][w=319 IOPS][eta 00m:03s] 
Jobs: 1 (f=1): [W(1)][100.0%][w=312MiB/s][w=311 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=15551: Sun Jun 15 12:19:37 2025
  write: IOPS=300, BW=300MiB/s (315MB/s)(10.0GiB/34116msec); 0 zone resets
    slat (usec): min=156, max=1970.0k, avg=3244.94, stdev=19525.85
    clat (msec): min=18, max=2063, avg=102.64, stdev=103.41
     lat (msec): min=19, max=2066, avg=105.89, stdev=105.10
    clat percentiles (msec):
     |  1.00th=[   36],  5.00th=[   96], 10.00th=[   97], 20.00th=[   97],
     | 30.00th=[   97], 40.00th=[   97], 50.00th=[   97], 60.00th=[   97],
     | 70.00th=[   97], 80.00th=[   97], 90.00th=[  101], 95.00th=[  101],
     | 99.00th=[  169], 99.50th=[  182], 99.90th=[ 2039], 99.95th=[ 2056],
     | 99.99th=[ 2056]
   bw (  KiB/s): min= 6144, max=329728, per=100.00%, avg=321631.80, stdev=39791.66, samples=65
   iops        : min=    6, max=  322, avg=314.08, stdev=38.86, samples=65
  lat (msec)   : 20=0.05%, 50=1.89%, 100=88.33%, 250=9.44%, 2000=0.04%
  lat (msec)   : >=2000=0.24%
  fsync/fdatasync/sync_file_range:
    sync (nsec): min=189719k, max=189719k, avg=189718833.00, stdev= 0.00
    sync percentiles (msec):
     |  1.00th=[  190],  5.00th=[  190], 10.00th=[  190], 20.00th=[  190],
     | 30.00th=[  190], 40.00th=[  190], 50.00th=[  190], 60.00th=[  190],
     | 70.00th=[  190], 80.00th=[  190], 90.00th=[  190], 95.00th=[  190],
     | 99.00th=[  190], 99.50th=[  190], 99.90th=[  190], 99.95th=[  190],
     | 99.99th=[  190]
  cpu          : usr=7.25%, sys=11.37%, ctx=22027, majf=0, minf=26
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=300MiB/s (315MB/s), 300MiB/s-300MiB/s (315MB/s-315MB/s), io=10.0GiB (10.7GB), run=34116-34116msec

Disk stats (read/write):
  sda: ios=0/20481, merge=0/47, ticks=0/1934829, in_queue=1935035, util=88.80%

I now know I can write at about 300 MB/s on that unencrypted disk. Looking at htop while the test was running, I also know that the disk is the bottleneck and not the CPU.

Random read/write speed

Let's run the "worst performance test" again from the unencrypted disk.

thib@tinykube:~ $ fio --name TEST --eta-newline=5s --filename=/mnt/icy/temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting 
TEST: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-3.33
Starting 32 processes
TEST: Laying out IO file (1 file / 2048MiB)
Jobs: 32 (f=32): [m(32)][13.1%][r=10.5MiB/s,w=10.8MiB/s][r=2677,w=2773 IOPS][eta 00m:53s]
Jobs: 32 (f=32): [m(32)][23.0%][r=11.0MiB/s,w=11.0MiB/s][r=2826,w=2819 IOPS][eta 00m:47s] 
Jobs: 32 (f=32): [m(32)][32.8%][r=10.9MiB/s,w=11.5MiB/s][r=2780,w=2937 IOPS][eta 00m:41s] 
Jobs: 32 (f=32): [m(32)][42.6%][r=10.8MiB/s,w=11.0MiB/s][r=2775,w=2826 IOPS][eta 00m:35s] 
Jobs: 32 (f=32): [m(32)][52.5%][r=10.9MiB/s,w=11.3MiB/s][r=2787,w=2886 IOPS][eta 00m:29s] 
Jobs: 32 (f=32): [m(32)][62.3%][r=11.3MiB/s,w=11.6MiB/s][r=2901,w=2967 IOPS][eta 00m:23s] 
Jobs: 32 (f=32): [m(32)][72.1%][r=11.4MiB/s,w=11.5MiB/s][r=2908,w=2942 IOPS][eta 00m:17s] 
Jobs: 32 (f=32): [m(32)][82.0%][r=11.6MiB/s,w=11.7MiB/s][r=2960,w=3004 IOPS][eta 00m:11s] 
Jobs: 32 (f=32): [m(32)][91.8%][r=11.0MiB/s,w=11.2MiB/s][r=2815,w=2861 IOPS][eta 00m:05s] 
Jobs: 32 (f=32): [m(32)][100.0%][r=11.0MiB/s,w=10.5MiB/s][r=2809,w=2700 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=32): err= 0: pid=14830: Sun Jun 15 12:05:54 2025
  read: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(656MiB/60004msec)
    slat (usec): min=14, max=1824, avg=88.06, stdev=104.92
    clat (usec): min=2, max=7373, avg=939.12, stdev=375.40
     lat (usec): min=130, max=7479, avg=1027.18, stdev=360.39
    clat percentiles (usec):
     |  1.00th=[    6],  5.00th=[  180], 10.00th=[  285], 20.00th=[  644],
     | 30.00th=[  889], 40.00th=[  971], 50.00th=[ 1037], 60.00th=[ 1090],
     | 70.00th=[ 1156], 80.00th=[ 1221], 90.00th=[ 1319], 95.00th=[ 1385],
     | 99.00th=[ 1532], 99.50th=[ 1614], 99.90th=[ 1811], 99.95th=[ 1926],
     | 99.99th=[ 6587]
   bw (  KiB/s): min= 8062, max=14560, per=100.00%, avg=11198.39, stdev=39.34, samples=3808
   iops        : min= 2009, max= 3640, avg=2793.55, stdev= 9.87, samples=3808
  write: IOPS=2806, BW=11.0MiB/s (11.5MB/s)(658MiB/60004msec); 0 zone resets
    slat (usec): min=15, max=2183, avg=92.95, stdev=108.34
    clat (usec): min=2, max=7118, avg=850.19, stdev=310.22
     lat (usec): min=110, max=8127, avg=943.13, stdev=312.58
    clat percentiles (usec):
     |  1.00th=[    6],  5.00th=[  174], 10.00th=[  302], 20.00th=[  668],
     | 30.00th=[  832], 40.00th=[  889], 50.00th=[  938], 60.00th=[  988],
     | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1172],
     | 99.00th=[ 1401], 99.50th=[ 1532], 99.90th=[ 1745], 99.95th=[ 1844],
     | 99.99th=[ 2147]
   bw (  KiB/s): min= 8052, max=14548, per=100.00%, avg=11234.02, stdev=40.18, samples=3808
   iops        : min= 2004, max= 3634, avg=2802.45, stdev=10.08, samples=3808
  lat (usec)   : 4=0.26%, 10=1.50%, 20=0.08%, 50=0.14%, 100=0.42%
  lat (usec)   : 250=5.89%, 500=7.98%, 750=6.66%, 1000=31.11%
  lat (msec)   : 2=45.93%, 4=0.01%, 10=0.01%
  fsync/fdatasync/sync_file_range:
    sync (usec): min=1323, max=17158, avg=5610.76, stdev=1148.23
    sync percentiles (usec):
     |  1.00th=[ 3195],  5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4686],
     | 30.00th=[ 4883], 40.00th=[ 5080], 50.00th=[ 5342], 60.00th=[ 5604],
     | 70.00th=[ 6128], 80.00th=[ 6718], 90.00th=[ 7177], 95.00th=[ 7570],
     | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[ 9896], 99.95th=[10552],
     | 99.99th=[15401]
  cpu          : usr=0.51%, sys=2.25%, ctx=1006148, majf=0, minf=977
  IO depths    : 1=200.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=167837,168384,0,336200 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=10.9MiB/s (11.5MB/s), 10.9MiB/s-10.9MiB/s (11.5MB/s-11.5MB/s), io=656MiB (687MB), run=60004-60004msec
  WRITE: bw=11.0MiB/s (11.5MB/s), 11.0MiB/s-11.0MiB/s (11.5MB/s-11.5MB/s), io=658MiB (690MB), run=60004-60004msec

Disk stats (read/write):
  sda: ios=167422/311772, merge=0/14760, ticks=153900/409024, in_queue=615762, util=81.63%

The read and write performance is much worse than I expected, only a few MB/s above the same test on the encrypted drive. But here again, htop tells us that the disk is the bottleneck, and not the CPU.

Copying the bucket

I now know that my disk can read or write at a maximum speed of about 300 MB/s. Let's sync the repository again from Scaleway s3.

thib@tinykube:~ $ aws s3 sync s3://ergaster-backup/ /mnt/icy/s3 \
    --endpoint-url https://s3.fr-par.scw.cloud 

The aws CLI reports download speeds between 45 and 65 MB/s, much higher than the initial tests! Having a look at htop while the sync happens, I can see that the CPUs are not at full capacity, and that the i/o wait time is at 0%.

The metric that has gone up, though, is si, which stands for softirqs. This paper and this StackOverflow answer explain what softirqs are. I understand the si metric from (h)top as "time the CPU spends making the system's devices work." In this case, I believe this is time the CPU spends helping the network chip. If I'm wrong and you have a better explanation, please reach out at thib@ergaster.org!
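To see where that softirq time goes, a rough approach (assuming the sysstat package is installed for mpstat) is to watch the per-CPU softirq counters and the %soft column while the sync runs:

watch -d cat /proc/softirqs    # NET_RX climbing fast points at network receive processing
mpstat -P ALL 1                # the %soft column shows softirq time per CPU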

Testing kopia's performance

Now for the final tests, let's first try to perform a restore from the AES-encrypted repository directly from the s3 bucket. Then, let's change the encryption algorithm of the repository and perform a restore.

Restoring from the s3 bucket

After connecting kopia to the repository in my s3 bucket, I perform a tentative restore and...

thib@tinykube:/mnt/icy $ kopia restore k5a270ab7f4acf72d4c3830a58edd7106
Restoring to local filesystem (/mnt/icy/k5a270ab7f4acf72d4c3830a58edd7106) with parallelism=8...
Processed 94929 (102.9 GB) of 118004 (233.3 GB) 19.8 MB/s (44.1%) remaining 1h49m45s.        

I'm reaching much higher speeds, closer to the theoretical 27.6 MB/s I got in my encryption benchmark! Looking at htop, I can see that the CPU remains the bottleneck when restoring. Those are decent speeds for a small and power efficient device like the Raspberry Pi, but this is not enough for me to use it in production.

The CPU is the limiting factor, and the Pi is busy exclusively doing a restore. If it was serving services in addition to that, the performance of the restore and of the services would degrade. We should be able to achieve better results by changing the encryption algorithm of the repository.

Re-encrypting the repository

Since the encryption algorithm can only be set when the repository is created, I need to create a new repository with the Chacha algorithm and ask kopia to decrypt the current repository encrypted with AES and re-encrypt it using Chacha.

The Pi performs so poorly with AES that it would take days to do so. I can do this operation on my beefy VPS and then transfer the repository data onto my Pi.

So on my VPS, I then connect to the s3 repo, passing it an option to dump the config in a special place

[thib@ergaster ~]$ 
Enter password to open repository: 

Connected to repository.

NOTICE: Kopia will check for updates on GitHub every 7 days, starting 24 hours after first use.
To disable this behavior, set environment variable KOPIA_CHECK_FOR_UPDATES=false
Alternatively you can remove the file "/home/thib/old.config.update-info.json".

I then create a filesystem repo on my VPS, with the new encryption algorithm that is faster on the Pi

[thib@ergaster ~]$ kopia repo create filesystem \
    --block-hash=BLAKE2B-256-128 \
    --encryption=CHACHA20-POLY1305-HMAC-SHA256 \
    --path=/home/thib/kopia_chacha
Enter password to create new repository: 
Re-enter password for verification: 
Initializing repository with:
  block hash:          BLAKE2B-256-128
  encryption:          CHACHA20-POLY1305-HMAC-SHA256
  key derivation:      scrypt-65536-8-1
  splitter:            DYNAMIC-4M-BUZHASH
Connected to repository.

And I can finally launch the migration to retrieve data from the s3 provider and migrate it locally.

[thib@ergaster ~]$ kopia snapshot migrate \
    --all \
    --source-config=/home/thib/old.config \
    --parallel 16

I check that the repository is using the right encryption with

[thib@ergaster ~]$ kopia repo status
Config file:         /home/thib/.config/kopia/repository.config

Description:         Repository in Filesystem: /home/thib/kopia_chacha
Hostname:            ergaster
Username:            thib
Read-only:           false
Format blob cache:   15m0s

Storage type:        filesystem
Storage capacity:    1.3 TB
Storage available:   646.3 GB
Storage config:      {
                       "path": "/home/thib/kopia_chacha",
                       "fileMode": 384,
                       "dirMode": 448,
                       "dirShards": null
                     }

Unique ID:           eaa6041f654c5e926aa65442b5e80f6e8cf35c1db93573b596babf7cff8641d5
Hash:                BLAKE2B-256-128
Encryption:          AES256-GCM-HMAC-SHA256
Splitter:            DYNAMIC-4M-BUZHASH
Format version:      3
Content compression: true
Password changes:    true
Max pack length:     21 MB
Index Format:        v2

Epoch Manager:       enabled
Current Epoch: 0

Epoch refresh frequency: 20m0s
Epoch advance on:        20 blobs or 10.5 MB, minimum 24h0m0s
Epoch cleanup margin:    4h0m0s
Epoch checkpoint every:  7 epochs

I could scp that repository to my Raspberry Pi, but I want to evaluate the restore performance in the same conditions as before, so I create a new s3 bucket and sync the Chacha-encrypted repository to it. My repository weighs about 200 GB. Pushing it to a new bucket and pulling it from the Pi will only cost me a handful of euros.

[thib@ergaster ~]$ kopia repository sync-to s3 \
    --bucket=chacha \
    --access-key=REDACTED \
    --secret-access-key=REDACTED \
    --endpoint="s3.fr-par.scw.cloud" \
    --parallel 16

After it's done, I can disconnect kopia on the Raspberry Pi from the former AES-encrypted repo and connect it to the new Chacha-encrypted repo in that bucket

thib@tinykube:~ $ kopia repo disconnect
thib@tinykube:~ $ kopia repository connect s3 \
    --bucket=chacha \
    --access-key=REDACTED \
    --secret-access-key=REDACTED \
    --endpoint="s3.fr-par.scw.cloud"
Enter password to open repository: 

Connected to repository.

I can then check that the repo indeed uses the Chacha encryption algorithm

thib@tinykube:~ $ kopia repo status
Config file:         /home/thib/.config/kopia/repository.config

Description:         Repository in S3: s3.fr-par.scw.cloud chacha
Hostname:            tinykube
Username:            thib
Read-only:           false
Format blob cache:   15m0s

Storage type:        s3
Storage capacity:    unbounded
Storage config:      {
                       "bucket": "chacha",
                       "endpoint": "s3.fr-par.scw.cloud",
                       "accessKeyID": "SCWW3H0VJTP98ZJXJJ8V",
                       "secretAccessKey": "************************************",
                       "sessionToken": "",
                       "roleARN": "",
                       "sessionName": "",
                       "duration": "0s",
                       "roleEndpoint": "",
                       "roleRegion": ""
                     }

Unique ID:           632d3c3999fa2ca3b1e7e79b9ebb5b498ef25438b732762589537020977dc35c
Hash:                BLAKE2B-256-128
Encryption:          CHACHA20-POLY1305-HMAC-SHA256
Splitter:            DYNAMIC-4M-BUZHASH
Format version:      3
Content compression: true
Password changes:    true
Max pack length:     21 MB
Index Format:        v2

Epoch Manager:       enabled
Current Epoch: 0

Epoch refresh frequency: 20m0s
Epoch advance on:        20 blobs or 10.5 MB, minimum 24h0m0s
Epoch cleanup margin:    4h0m0s
Epoch checkpoint every:  7 epochs

I can now do a test restore

thib@tinykube:~ $ kopia restore k6303a292f182dcabab119b4d0e13b7d1 /mnt/icy/nextcloud-chacha
Restoring to local filesystem (/mnt/icy/nextcloud-chacha) with parallelism=8...
Processed 82254 (67 GB) of 118011 (233.4 GB) 23.1 MB/s (28.7%) remaining 1h59m50s

After a minute or two of restoring at 1.5 MB/s with the CPUs mostly idle, the Pi starts restoring faster and faster. The restore speed displayed by kopia very slowly rises up to 23.1 MB/s. I expected it to reach at least 70 or 80 MB/s.

The CPU doesn't look like it's going at full capacity. While the wait time remained mostly below 10%, I could see bumps where the wa metric went above 80% for some of the CPUs, and sometimes for all of them at the same time.

With the Chacha encryption algorithm, it looks like the bottleneck is not the CPU anymore but the disk. Unfortunately, I can only attach an NVMe drive via a USB enclosure on my Raspberry Pi 4, so I won't be able to remove that bottleneck.

Conclusion

It was a fun journey figuring out why my Raspberry Pi 4 was too slow to restore data backed up from my VPS. I now know the value of htop when chasing bottlenecks. I also understand better how Kopia works, and the importance of using encryption and hash algorithms that work well on the machine that will perform the backups and restores.

When doing a restore, the Raspberry Pi had to pull the repository data from Scaleway, decrypt the chunks from the repository, and encrypt the data to write it to disk. The CPU of the Raspberry Pi is not optimized for encryption and favors power efficiency over computing power. It was completely saturated by all the decryption and encryption work it had to do.

My only regret here is that I couldn't test a Chacha-encrypted kopia repository on an encrypted disk, since my Raspberry Pi refused to boot from the encrypted drive shortly after testing the random read/write speed. I went from a restore speed in bytes per second to a restore speed in dozens of megabytes per second. But even without the disk encryption overhead, the Pi is too slow at restoring backups for me to use it in production.

Since I intend to run quite a few services on my server (k3s, Flux, Prometheus, kube-state-metrics, Grafana, velero, and a flurry of actual user-facing services) I need a much beefier machine. I purchased a Minisforum UM880 Plus to host it all, and now I know the importance of configuring velero and how it uses kopia for maximum efficiency on my machine.

A massive thank you to my friends and colleagues, Olivier Reivilibre, Ben Banfield-Zanin, and Guillaume Villemont for their suggestions when chasing the bottleneck.

Why is there no consistent single signon API flow?

Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from it being integrated with a more general app service platform (eg, Microsoft or Google) to a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users present whatever set of authentication mechanisms you've configured.

This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing submitting the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.

But it works poorly for CLI-based setups. There's basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.

There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.

Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.

I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.

There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.
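Purely as an illustration of the kind of flow I mean - every endpoint and field below is made up, this is the hypothetical spec rather than any real provider's API - the whole thing could be as simple as:

# 1. Primary factor: send username/password, get back a transaction ID and the list of MFA factors
curl -s https://idp.example.com/authn \
    -H 'Content-Type: application/json' \
    -d '{"username": "user@example.com", "password": "hunter2"}'
# -> {"txn": "abc123", "factors": [{"id": "f0e1", "type": "webauthn"}]}

# 2. Second factor: fetch the webauthn challenge, sign it with a local security token, post it back
curl -s https://idp.example.com/authn/abc123/factors/f0e1/challenge
curl -s https://idp.example.com/authn/abc123/factors/f0e1/verify \
    -H 'Content-Type: application/json' \
    -d '{"assertion": "<signed webauthn assertion>"}'
# -> {"status": "success", "token": "<token to hand to the OIDC/SAML layer>"}

Nothing in there needs to be provider-specific, which is exactly the point.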

Someone, please, write a spec for this. Please don't make it be me.


Nirbheek Chauhan

@nirbheek

A strange game. The only winning move is not to play.

That's a reference to the 1983 film “WarGames”. A film that has had incredible influence on not just the social milieu, but also cyber security and defence. It has a lot of lessons that need re-learning every couple of generations, and I think the time for that has come again.

Human beings are very interesting creatures. Tribalism and warfare are wired into our minds in such a visceral way that we lose the ability to think more than one or two steps forward when we're trying to defend our tribe in anger.

Most people get that this is what makes warfare conducted with nuclear weapons particularly dangerous, but I think not enough words have been written about how this same tendency also makes warfare conducted with Social Media dangerous.

You cannot win a war on Social Media. You can only mire yourself in it more and more deeply, harming yourself, the people around you, and the potential of what you could've been doing instead of fighting that war. The more you throw yourself in it, the more catharsis you will feel, followed by more attacks, more retaliation, and more catharsis.

A Just War is addictive, and a Just War without loss of life is the most addictive of all.

The only winning move is not to play.

The Internet in general and Social Media in particular are very good at bringing close to you all kinds of strange and messed-up people. For a project like GNOME, it is almost unavoidable that the project and hence the people in it will encounter such people. Many of these people live for hate, and wish to see the GNOME project fail.

Engaging them and hence spending your energy on them is the easiest way to help them achieve their goals. You cannot bully them off the internet. Your angry-posting and epic blogs slamming them into the ground aren't going to make them stop. The best outcome is that they get bored and go annoy someone else.

The only winning move is not to play.

When dealing with abusive ex-partners or ex-family members, a critical piece of advice is given to victims: all they want is a reaction. Everything they're doing is in pursuit of control, and once you leave them, the only control they have left is over your emotional state.

When they throw a stone at you, don't lob it back at them. Catch it and drop it on the ground, as if it doesn't matter. In the beginning, they will intensify their attack, saying increasingly mean and cutting things in an attempt to evoke a response. You have to not care. Eventually they will get bored and leave you alone.

This is really REALLY hard to do, because the other person knows all your trigger points. They know you inside out. But it's the only way out.

The only winning move is not to play.

Wars that cannot be won, should not be fought. Simply because war has costs, and for the people working on GNOME, the cost is time and energy that they could've spent on creating the future that they want to see.

In my 20s and early 30s I made this same youthful mistake, and what got me out of it was my drive to always distil decisions through two questions: What is my core purpose? Is this helping me achieve my purpose?

This is such a powerful guiding and enabling force that I would urge all readers to adopt it. It will change your life.

Jiri Eischmann

@jeischma

Linux Desktop Migration Tool 1.5

After almost a year I made another release of the Linux Desktop Migration Tool. In this release I focused on the network settings migration, specifically NetworkManager because it’s what virtually all desktop distributions use.

The result isn’t a lot of added code, but it certainly took some time to experiment with how NetworkManager behaves. It doesn’t officially support network settings migration, but it’s possible with small limitations. I’ve tested it with all kinds of network connections (wired, Wi-Fi, VPNs…) and it worked for me very well, but I’m pretty sure there are scenarios that may not work with the way I implemented the migration. I’m interested in learning about them. What is currently not fully handled are scenarios where the network connection requires a certificate. It’s either located in ~/.pki and thus already handled by the migration tool, or you have to migrate it manually.

The Linux Desktop Migration Tool now covers everything I originally planned to cover, and the number of choices has grown quite a lot. So I'll focus on dialogs and UX in general instead of adding new features. I'll also look at optimizations. E.g. migrating files using rsync takes a lot of time if you have a lot of small files in your home directory. It can certainly be sped up.

Hans de Goede

@hansdg

Is Copilot useful for kernel patch review?

Patch review is an important and useful part of the kernel development process, but it is also a time-consuming part. To see if I could save some human reviewer time, I've been pushing kernel patch-series to a branch on github, creating a pull-request for the branch and then assigning it to Copilot for review. The idea being that I would fix any issues Copilot catches before posting the series upstream, saving a human reviewer from having to catch the issues.

I've done this for 5 patch-series: one, two, three, four, five, totalling 53 patches. Click the number to see the pull-request and Copilot's reviews.

Unfortunately the results are not great. On 53 patches Copilot had 4 low-confidence comments, which were not useful, and 3 normal comments. 2 of the normal comments were on the power-supply fwnode series: one was about spelling degrees Celcius as degrees Celsius instead, which is the single valid remark. The other remark was about re-assigning a variable without freeing it first, but Copilot missed that the re-assignment was to another variable, since this happened in a different scope. The third normal comment (here) was about as useless as they can come.

To be fair these were all patch-series written by me and then already self-reviewed and deemed ready for upstream posting before I asked Copilot to review them.

As another experiment I did one final pull-request with a couple of WIP patches to add USBIO support from Intel. Copilot generated 3 normal comments here, all 3 of which are valid, and one of them catches a real bug. Still, given the WIP state of this series and the fact that my own review has found a whole lot more than just this, including the need for a bunch of refactoring, the results of this Copilot review are also disappointing IMHO.

Copilot also automatically generates summaries of the changes in the pull-requests. At first glance these look useful for e.g. a cover-letter for a patch-set, but they are often full of half-truths, so at a minimum they need some very careful editing / correcting before they can be used.

My personal conclusion is that running patch-sets through Copilot before posting them on the list is not worth the effort.


Casilda 0.9.0 Development Release!

Native rendering Release!

I am pleased to announce a new development release of Casilda, a simple Wayland compositor widget for Gtk 4 which can be used to embed other processes' windows in your Gtk 4 application.

The main feature of this release is dmabuf support, brought to you by Val Packett, which allows clients to use hardware-accelerated libraries for their rendering!

You can see all her cool work here.

This allowed me to stop relying on the wlroots scene compositor and render client windows directly in the widget snapshot method, which is not only faster but also integrates better with Gtk, since the background is no longer handled by wlroots and can be set with CSS like with any other widget. This is why I decided to deprecate the bg-color property.

Other improvements include transient window support and better initial window placement.

Release Notes

    • Fix rendering glitch on resize
    • Do not use wlr scene layout
    • Render windows and popups directly in snapshot()
    • Position windows on center of widget
    • Position transient windows on center of parent
    • Fix unmaximize
    • Add dmabuf support (Val Packett)
    • Added vapi generation (PaladinDev)
    • Add library soname (Benson Muite)

Fixed Issues

    • “Resource leak causing crash with dmabuf”
    • “Unmaximize not working properly”
    • “Add dmabuff support” (Val Packett)
    • “Bad performance”
    • “Add a soname to shared library” (Benson Muite)

Where to get it?

Source code lives on GNOME gitlab here

git clone https://gitlab.gnome.org/jpu/casilda.git

Matrix channel

Have any questions? Come chat with us at #cambalache:gnome.org

Mastodon

Follow me on Mastodon (@xjuan) to get news related to Casilda and Cambalache development.

Happy coding!

Steven Deobald

@steven

2025-06-20 Foundation Report

Welcome to the mid-June Foundation Report! I’m in an airport! My back hurts! This one might be short! haha

 

## AWS OSS

Before the UN Open Source Week, Andrea Veri and I had a chance to meet Mila Zhou, Tom (Spot) Callaway, and Hannah Aubry from AWS OSS. We thanked them for their huge contribution to GNOME’s infrastructure but, more importantly, discussed other ways we can partner with them to make GNOME more sustainable and secure.

I’ll be perfectly honest: I didn’t know what to expect from a meeting with AWS. And, as it turns out, it was such a lovely conversation that we chatted nonstop for nearly 5 hours and then continued the conversation over supper. At a… vegan Chinese food place, of all things? (Very considerate of them to find some vegetarian food for me!) Lovely folks and I can’t wait for our next conversation.

 

## United Nations Open Source Week

The big news for me this week is that I attended the United Nations Open Source Week in Manhattan. The Foundation isn’t in a great financial position, so I crashed with friends-of-friends (now also friends!) on an air mattress in Queens. Free (as in ginger beer) is a very reasonable price but my spine will also appreciate sleeping in my own bed tonight. 😉

I met too many people to mention, but I was pleasantly surprised by the variety of organizations and different folks in attendance. Indie hackers, humanitarian workers, education specialists, Digital Public Infrastructure Aficionados, policy wonks, OSPO leaders, and a bit of Big Tech. I came to New York to beg for money (and I did do a bit of that) but it was the conversations about the f/oss community that I really enjoyed.

We did do a “five Executive Directors” photo, because 4 previous GNOME Foundation EDs happened to be there. One of them was Richard! I got to hang out with him in person and he gave me a hug. So did Karen. It was nice. The history matters (recent history and ancient history) … and GNOME has a lot of history.

Special shout-out to Sumana Harihareswara (it’s hard for me to spell that without an “sh”) who organized an extremely cool, low-key gathering in an outdoor public space near the UN. She couldn’t make the conf herself but she managed to create the best hallway track I attended. (By dragging a very heavy bag of snacks and drinks all the way from Queens.) More of that, please. The unconf part, not the dragging snacks across the city part.

All in all, a really exciting and exhausting week.

 

## Donation Page

As I mentioned above, the GNOME Foundation’s financial situation could use help. We’ll be starting a donation drive soon to encourage GNOME users to donate, using the new donation page:

https://donate.gnome.org

This blog post is as good a time as any to say this isn’t just a cash grab. The flip side of finding money for the Foundation is finding ways to grow the project with it. I’m of the opinion that this needs to include more than running infrastructure and conferences. Those things are extremely important — nothing in recent memory has reminded me of the value of in-person interactions like meeting a bunch of new friends here in New York — but the real key to the GNOME project is the project itself. And the core of the project is development.

As usual: No Promises. But if you want to hear a version of what I was saying all week, you can bug Adrian Vovk for his opinion about my opinions. 😉

The donation page would not have been possible without the help of Bart Piotrowski, Sam Hewitt, Jakub Steiner, Shivam Singhal, and Yogiraj Hendre. Thanks everyone for putting in the hard work to get this over the line, to test it with your own credit cards, and to fix bugs as they cropped up.

We will keep iterating on this as we learn more about what corporate sponsors want in exchange for their sponsorship and as we figure out how best to support Causes (campaigns), such as development.

 

## Elections

Voting has closed! Thank you to all the candidates who ran this year. I know that running for election on the Board is intimidating but I’m glad folks overcame that fear and made the effort to run campaigns. It was very important to have you all in the race and I look forward to working with my new bosses once they take their seats. That’s when you get to learn about governance and demonstrate that you’re willing to put in the work. You might be my bosses… but I’m going to push you. 😉

Until next week!

Jussi Pakkanen

@jpakkane

Book creation using almost entirely open source tools

Some years ago I wrote a book. Now I have written a second one, but because no publisher wanted to publish it I chose to self-publish a small print run and hand it out to friends (and whoever actually wants one) as a very-much-not-for-profit art project.

This meant that I had to create every PDF used in the printing myself. I received the shipment from the printing house yesterday.  The end result turned out very nice indeed.

The red parts in this dust jacket are not ink but instead are done with foil stamping (it has a metallic reflective surface). This was done with Scribus. The cover image was painted by hand using watercolor paints. The illustrator used a proprietary image processing app to create the final TIFF version used here. She has since told me that she wants to eventually shift to an open source app due to ongoing actions of the company making the proprietary app.

The cover itself is cloth with a debossed emblem. The figure was drawn in Inkscape and then copypasted to Scribus.

Every fantasy book needs to have a map. This has two and they are printed in the end papers. The original picture was drawn with a nib pen and black ink and processed with Gimp. The printed version is brownish to give it that "old timey" look. Despite its apparent simplicity this PDF was the most problematic. The image itself is monochrome and printed with a Pantone spot ink. Trying to print this with CMYK inks would just not have worked. Because the PDF drawing model for spot inks in images behaves, let's say, in an unexpected way, I had to write a custom script to create the PDF with CapyPDF. As far as I know no other open source tool can do this correctly, not even Scribus. The relevant bug can be found here. It was somewhat nerve-wracking to send this out to the print shop with zero practical experience and a theoretical basis of "according to my interpretation of the PDF spec, this should be correct". As this is the first ever commercial print job using CapyPDF, it's quite fortunate that it succeeded pretty much perfectly.

The inner pages were created with the same Chapterizer tool as the previous book. It uses Pango and Cairo to generate PDFs. Illustrations in the text were drawn with Krita. As Cairo only produces RGB PDFs, the last step was to convert the output to grayscale using Ghostscript.

My a11y journey

23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.

The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.

This was, uh, something of an on the job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.

But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.

I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.

But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.

That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.

When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.


This Week in GNOME

@thisweek

#205 Loading Films

Update on what happened across the GNOME project in the week from June 13 to June 20.

GNOME Core Apps and Libraries

Maps

Maps gives you quick access to maps all across the world.

mlundblad announces

Maps now shows localized metro/railway station icons in some locations

Settings

Configure various aspects of your GNOME desktop.

Matthijs Velsink announces

We ported the GNOME Settings app to Blueprint! UI definition files are much easier to read and write in Blueprint compared to the standard XML syntax that GTK uses. Hopefully this makes UI contributions more approachable to newcomers. In any case, reviewing UI changes has gotten quite enjoyable already! Settings is one of the first large core apps to make the switch (together with Calendar), and Blueprint is still considered experimental, but the experience has been great so far. Small missing features in Blueprint have not been dealbreakers.

Many thanks to Jamie Gravendeel who did most of the work and together with Hari Rana motivated us to consider the port in the first place! We’d like to thank James Westman as well for creating Blueprint and making the whole porting process so straightforward.

Calendar

A simple calendar application.

Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️‍⚧️ announces

GNOME Calendar received a nice visual overhaul, thanks to the code contributed by Markus Göllnitz, with the design led by Philipp Sauberz and Jeff Fortin. You can find the really long discussion on GitLab. This should hopefully make Calendar work better on smaller monitors, thanks to the collapsible sidebar.

Afterwards, Jamie Gravendeel ported the entirety of GNOME Calendar to Blueprint. This should hopefully make it easier for everyone to contribute to Calendar’s UI.

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Ignacy Kuchciński (ignapk) says

There was recently an interesting improvement in GLib that makes sure your Trash is really empty, by fixing a bug resulting in leftover files in ~/.local/share/Trash/expunged/. For more information, check out https://ignapk.blogspot.com/2025/06/taking-out-trash-or-just-sweeping-it.html

Third Party Projects

bjawebos reports

In my spare time I like to take photographs. I use different cameras with different characteristics and therefore different purposes. Most of these cameras use film as the image carrier medium. It happened a few weeks ago that I wanted to use a camera and wondered whether it had film in it or not. I was of the opinion that there was no film inserted and I opened the back of the camera. What can I say, of course there was film inside. It wasn’t much damage, I lost about 3-5 pictures. Nevertheless, I had to find a solution and since I wanted to learn more about GTK4/libadwaita and Rust anyway, I combined these two topics.

So here is the application for photographers who no longer know whether a film is inserted. The application is called Filmbook and is divided into 4 sections. The first tab “Current” shows a list of cameras with inserted films. The “History” tab shows which cameras were loaded with which films. In addition, the camera-film pairs can be marked as developed. The third and fourth tabs show the cameras and films.

The application is currently in a sufficiently stable state and I would like to test it extensively on my Pinephone Pro under Phosh to explore the weaknesses of the current design. In addition, my goal is to get in touch with other photographers to gather their ideas and needs.

So, if you feel addressed, get in touch with me. Here are a few important links:

  • Flathub: https://flathub.org/apps/page.codeberg.bjawebos.Filmbook
  • Issues: https://codeberg.org/bjawebos/filmbook/issues
  • Fediverse:

johannes_b reports

This week I released a new version of BMI Calculator. Now it includes German, Italian and Dutch translations. The app remembers the last entries and you can choose the color scheme. You can install the app from Flathub: https://flathub.org/apps/io.github.johannesboehler2.BmiCalculator

Pipeline

Follow your favorite video creators.

schmiddi says

Version 2.5.0 of Pipeline has now been released. Pipeline now displays a random splash text when reloading the feed. These splash texts tell users random facts about Pipeline, showcase some features, and also advertise some other great alternative YouTube clients. Examples include:

  • Did you know? The first commit of Pipeline was 1566 days ago.
  • Feature Spotlight: Seeing something you don’t like? You can hide videos from your feed based on the title and uploader of the video.
  • Also try: NewPipe.

A useless feature? Pretty much. But I enjoyed coding it and maybe some people will enjoy reading the splash texts I came up with.

This release also adds debug information to the About window, which will possibly help me debug issues by knowing your versions of dependencies and the most important settings. This release also fixes minor bugs, like some buttons being hidden in a narrow layout on the video page, the description of YouTube videos containing escaped characters, and a video not being added to the watched list if Pipeline is closed while it is still displayed.

Fractal

Matrix messaging app for GNOME written in Rust.

Kévin Commaille reports

We released Fractal 11.2 which updates the matrix-sdk-crypto dependency to include a fix for a high severity security issue. It is available right now on Flathub.

GNOME Foundation

steven says

A week late to TWIG, but almost on time for the blog, it’s this week’s Foundation Report: Elections, GUADEC, ops, infra, fundraising, some fun meetings, and the ED gets another feedback session.

https://blogs.gnome.org/steven/2025/06/14/2025-06-14-foundation-report/

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

libinput and tablet tool eraser buttons

This is, to some degree, a followup to this 2014 post. The TLDR of that is that, many a moon ago, the corporate overlords at Microsoft that decide all PC hardware behaviour decreed that the best way to handle an eraser emulation on a stylus is by having a button that is hardcoded in the firmware to, upon press, send a proximity out event for the pen followed by a proximity in event for the eraser tool. Upon release, they dogma'd, said eraser button shall virtually move the eraser out of proximity followed by the pen coming back into proximity. Or, in other words, the pen simulates being inverted to use the eraser, at the push of a button. Truly the future, back in the happy times of the mid 20-teens.

In a world where you don't want to update your software for a new hardware feature, this of course makes perfect sense. In a world where you write software to handle such hardware features, significantly less so.

Anyway, it is now 11 years later, the happy 2010s are over, and Benjamin and I have fixed this very issue in a few udev-hid-bpf programs but I wanted something that's a) more generic and b) configurable by the user. Somehow I am still convinced that disabling the eraser button at the udev-hid-bpf level will make users that use said button angry and, dear $deity, we can't have angry users, can we? So many angry people out there anyway, let's not add to that.

To get there, libinput's guts had to be changed. Previously libinput would read the kernel events, update the tablet state struct and then generate events based on various state changes. This of course works great when you e.g. get a button toggle; it doesn't work quite as great when your state change was one or two event frames ago (because prox-out of one tool and prox-in of another tool are at least 2 events). Extracting that older state change was like swapping the type of meatballs from an IKEA meal after it's been served - doable in theory, but very messy.

Long story short, libinput now has an internal plugin system that can modify the evdev event stream as it comes in. It works like a pipeline: the events are passed from the kernel to the first plugin, modified, passed to the next plugin, etc. Eventually the last plugin is our actual tablet backend, which will update tablet state, generate libinput events, and generally be grateful about having fewer quirks to worry about. With this architecture we can hold back the proximity events and filter them (if the eraser comes into proximity) or replay them (if the eraser does not come into proximity). The tablet backend is none the wiser: it either sees proximity events when those are valid or it sees a button event (depending on configuration).
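
To make the idea concrete, here is a purely illustrative sketch in Python (not libinput's actual C plugin API; the class and event names are made up) of how such a hold-back plugin behaves:

class EraserButtonPlugin:
    """Illustrative only: hold back a pen prox-out; if an eraser prox-in
    follows, swallow the pair and emit a synthetic button event instead,
    otherwise replay the held event unchanged."""

    def __init__(self, next_stage):
        self.next_stage = next_stage  # the next plugin, or the tablet backend
        self.held = None

    def process(self, event):
        if event == "pen prox-out":
            self.held = event                       # decision pending
        elif event == "eraser prox-in" and self.held:
            self.held = None
            self.next_stage("eraser button press")  # filter and substitute
        else:
            if self.held:
                self.next_stage(self.held)          # no eraser swap: replay
                self.held = None
            self.next_stage(event)

# The tablet backend (here just print) only ever sees a clean stream:
pipeline = EraserButtonPlugin(next_stage=print)
for ev in ["pen prox-in", "pen prox-out", "eraser prox-in"]:
    pipeline.process(ev)
# prints "pen prox-in", then "eraser button press"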

This architectural approach is so successful that I have now switched a bunch of other internal features over to that internal infrastructure (proximity timers, button debouncing, etc.). And of course it laid the groundwork for the (presumably highly) anticipated Lua plugin support. Either way, happy times. For a bit. Because for those not needing the eraser feature, we've just increased your available tool button count by 100%[2] - now there's a headline for tech journalists that just blindly copy claims from blog posts.

[1] Since this is a bit wordy, the libinput API call is just libinput_tablet_tool_config_eraser_button_set_button()
[2] A very small number of styli have two buttons and an eraser button so those only get what, 50% increase? Anyway, that would make for a less clickbaity headline so let's handwave those away.

Marcus Lundblad

@mlundblad

Midsommer Maps

 As tradition has it, it's about time for the (Northern Hemisphere) summer update on the happenings around Maps!

About dialog for GNOME Maps 49.alpha development 


Bug Fixes 

Since the GNOME 48 release in March, there have been some bug fixes, such as correctly handling daylight saving time in public transit itineraries retrieved from Transitous. Also, James Westman fixed a regression where the search result popover wasn't showing on small screen devices (phones) because of sizing issues.

 

More Clickable Stuff

More map elements can now be selected directly in the map view by clicking/tapping on their symbols, like roads and house numbers (and then, like any other POI, be marked as favorites).
 
Showing place information for the AVUS motorway in Berlin

 And related to traffic and driving, exit numbers are now shown for highway junctions (exits) when available.
 
Showing information for a highway exit in a driving-on-the-right locality

Showing information for a highway exit in a driving-on-the-left locality

Note how the direction the arrow is pointing depends on the side of the road vehicle traffic drives on in the country/territory of the place…
Also, the icon for the “Directions” button now shows a mirrored “turn off left” icon for places in drives-on-the-left countries, as an additional attention to detail.
 

Furigana Names in Japanese

For some time (around when we redesigned the place information “bubbles”) we have shown the native name of a place under the name translated into the user's locale (when they are different).
There is an established OpenStreetMap tag for phonetic names in Japanese (using Hiragana), name:ja-Hira, akin to Furigana (https://en.wikipedia.org/wiki/Furigana), used to aid with the pronunciation of place names. I had been thinking it might be a good idea to show this, when available, as the dimmed supplemental text in the cases where the displayed name and the native name are identical and the Hiragana name is available, e.g. when the user's locale is Japanese and they are looking at Japanese names. For other locales the displayed name would in these cases typically be the Romaji name, with the full Japanese (Kanji) name displayed under it as the native name.
So I took the opportunity to discuss this with my colleague Daniel Markstedt, who speaks fluent Japanese and has lived many years in Japan. As he liked the idea, and a demo of it, I decided to go ahead with this!
 
Showing a place in Japanese with supplemental Hiragana name

 

Configurable Measurement Systems

Since more or less the start of time, Maps has shown distances in feet and miles when using a United States locale (or more precisely, when measurements use such a locale: LC_MEASUREMENT when speaking about the environment variables), and standard metric measurements for other locales.
Despite this, we have several times received bug reports about Maps not using the correct units. The issue here is that many users tend to prefer to have their computers speak American English.
So, I finally caved in and added an option to override the system default.
 
Hamburger menu

 
Hamburger menu showing measurement unit selection

Station Symbols

One feature I had been wanting to implement since we moved to vector tiles and integrated the customized highway shields from OpenStreetMap Americana is showing localized symbols for e.g. metro stations, such as the classic “roundel” symbol used in London, and the “T” in Stockholm.
 
After adding the network:wikidata tag to the pre-generated vector tiles, this has been possible to implement. We chose to rely on the Wikidata tag instead of the network name/abbreviations as this is more stable, and names could risk collisions with unrelated networks having the same (short) name.
 
U-Bahn station in Hamburg

Metro stations in Copenhagen

Subway stations in Boston

S-Bahn station in Berlin  

 
This requires the stations to be tagged consistently to work out. I did some mass tagging of metro stations in Stockholm, Oslo, and Copenhagen. Other than that, I mainly chose places where there's at least partial coverage already.
 
If you'd like to contribute and update a network with the network Wikidata tag, I prepared some quick steps to do such an edit with the JOSM OpenStreetMap desktop editor.
 
Download a set of objects to update using an Overpass query; as an example, selecting the stations of the Washington DC Metro:
 
[out:xml][timeout:90][bbox:{{bbox}}];
(
  nwr["network"="Washington Metro"]["railway"="station"];
);
(._;>;);
out meta;

 

JOSM Overpass download query editor  

 Select the region to download from

Select region in JOSM

 

Select to only show the data layer (not showing the background map) to make it easier to see the raw data.

Toggle data layers in JOSM

 Select the nodes.

Show raw datapoints in JOSM

 

Edit the field in the tag edit panel to update the value for all selected objects

Showing tags for selected objects

Note that this sample assumed the relevant station nodes were already tagged with network names (the network tag). Other queries to limit the selection might be needed.

It could also be a good idea to reach out to local OSM communities before making bulk edits like this (e.g. if there is no such tagging at all in a specific region) to make sure it would be aligned with expectations and such.

Then it will also potentially take a while before it gets included in our monthly vector tile update.

When this has been done, given a suitable icon is available, e.g. as public domain or Creative Commons on Wikimedia Commons, it could be bundled in data/icons/stations and a definition added to the data mapping in src/mapStyle/stations.js.

 

And More…

One feature that has been long wanted is the ability to download maps for offline usage. Lately, this is precisely what James Westman has been working on.

It's still an early draft, so we'll see when it is ready, but it already looks pretty promising.

 

Showing the new Preferences option

Preference dialog with downloads

Selecting region to download

Entering a name for a downloaded region

Dialog showing downloaded areas

And that's it for now!

Alley Chaggar

@AlleyChaggar

Demystifying The Codegen Phase Part 1

Intro

I want to start off by saying I’m really glad that my last blog was helpful to many wanting to understand Vala’s compiler. I hope this blog will be just as informative and helpful. I want to talk a little about the basics of the compiler again, but this time catering to the codegen phase: the phase that I’m actually working on, but which has the least information in the Vala Docs.

In the last blog, I briefly mentioned the directories codegen and ccode being part of the codegen phase. This blog will go more in depth about it. The codegen phase takes the AST and outputs the C code tree (ccode* objects), from which the C code can be generated more easily and then compiled, usually by GCC or another C compiler you have installed. When dealing with this phase, it’s really beneficial to know and understand at least a little bit of C.

ccode Directory

  • Many of the files in the ccode directory are derived from the class CCodeNode, valaccodenode.vala.
  • The files in this directory represent C constructs. For example, the valaccodefunction.vala file represents a C code function. Regular C functions have function names, parameters, return types, and bodies that add logic. Essentially, what this class does is provide the building blocks for building a function in C.

    //...
    writer.write_string (return_type);
    if (is_declaration) {
        writer.write_string (" ");
    } else {
        writer.write_newline ();
    }
    writer.write_string (name);
    writer.write_string (" (");
    int param_pos_begin = (is_declaration ? return_type.char_count () + 1 : 0 ) + name.char_count () + 2;

    bool has_args = (CCodeModifiers.PRINTF in modifiers || CCodeModifiers.SCANF in modifiers);
    //...
    

This code snippet is part of the ccodefunction file, and what it’s doing is overriding the ‘write’ function that is originally from ccodenode. It’s actually writing out the C function.

codegen Directory

  • The files in this directory are higher-level components responsible for taking the compiler’s internal representation, such as the AST, and transforming it into the C code model (ccode objects).
  • Going back to the example of the ccodefunction, codegen will take a function node from the abstract syntax tree (AST), and will create a new ccodefunction object. It then fills this object with information like the return type, function name, parameters, and body, which are all derived from the AST. Then the CCodeFunction.write() (the code above) will generate and write out the C function.

    //...
    private void add_get_property_function (Class cl) {
    		var get_prop = new CCodeFunction ("_vala_%s_get_property".printf (get_ccode_lower_case_name (cl, null)), "void");
    		get_prop.modifiers = CCodeModifiers.STATIC;
    		get_prop.add_parameter (new CCodeParameter ("object", "GObject *"));
    		get_prop.add_parameter (new CCodeParameter ("property_id", "guint"));
    		get_prop.add_parameter (new CCodeParameter ("value", "GValue *"));
    		get_prop.add_parameter (new CCodeParameter ("pspec", "GParamSpec *"));
      
    		push_function (get_prop);
    //...
    

This code snippet is from valagobjectmodule.vala and it’s calling CCodeFunction (again from the valaccodefunction.vala) and adding the parameters, which is calling valaccodeparameter.vala. What this would output is something that looks like this in C:

    void _vala_get_property (GObject *object, guint property_id, GValue *value, GParamSpec *pspec) {
       //... 
    }

Why do all this?

Now you might ask why? Why separate codegen and ccode?

  • We split things into codegen and ccode to keep the compiler organized, readable, and maintainable. It prevents us from having to constantly write C code representations from scratch all the time.
  • It also reinforces the idea of polymorphism and the ability that objects can behave differently depending on their subclass.
  • And it lets us do hidden generation by adding new helper functions, temporary variables, or inlined optimizations after the AST and before the C code output.

Jsonmodule

I’m happy to say that I am making a lot of progress with the JSON module I mentioned last blog. The JSON module follows very closely other modules in the codegen, specifically like the gtk module and the gobject module. It will be calling ccode functions to make ccode objects and creating helper methods so that the user doesn’t need to manually override certain JSON methods.

Jamie Gravendeel

@monster

UI-First Search With List Models

You can find the repository with the code here.

When managing large amounts of data, manual widget creation finds its limits. Not only because managing both data and UI separately is tedious, but also because performance will be a real concern.

Luckily, there are two solutions for this in GTK:

1. Gtk.ListView using a factory: more performant since it reuses widgets when the list gets long
2. Gtk.ListBox‘s bind_model(): less performant, but can use boxed list styling

This blog post provides an example of a Gtk.ListView containing my pets, which is sorted, can be searched, and is primarily made in Blueprint.

The app starts with a plain window:

from gi.repository import Adw, Gtk


@Gtk.Template.from_resource("/app/example/Pets/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"
using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Pets");
  default-width: 450;
  default-height: 450;

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}
  }
}

Data Object

The Gtk.ListView needs a data object to work with, which in this example is a pet with a name and species.

This requires a GObject.Object called Pet with those properties, and a GObject.GEnum called Species:

from gi.repository import Adw, GObject, Gtk


class Species(GObject.GEnum):
    """The species of an animal."""

    NONE = 0
    CAT = 1
    DOG = 2

[…]

class Pet(GObject.Object):
    """Data for a pet."""

    __gtype_name__ = "Pet"

    name = GObject.Property(type=str)
    species = GObject.Property(type=Species, default=Species.NONE)

List View

Now that there’s a data object to work with, the app needs a Gtk.ListView with a factory and model.

To start with, there’s a Gtk.ListView wrapped in a Gtk.ScrolledWindow to make it scrollable, using the .navigation-sidebar style class for padding:

content: Adw.ToolbarView {
  […]

  content: ScrolledWindow {
    child: ListView {
      styles [
        "navigation-sidebar",
      ]
    };
  };
};

Factory

The factory builds a Gtk.ListItem for each object in the model, and utilizes bindings to show the data in the Gtk.ListItem:

content: ListView {
  […]

  factory: BuilderListItemFactory {
    template ListItem {
      child: Label {
        halign: start;
        label: bind template.item as <$Pet>.name;
      };
    }
  };
};

Model

Models can be modified through nesting. The data itself can be in any Gio.ListModel, in this case a Gio.ListStore works well.

The Gtk.ListView expects a Gtk.SelectionModel because that’s how it manages its selection, so the Gio.ListStore is wrapped in a Gtk.NoSelection:

using Gtk 4.0;
using Adw 1;
using Gio 2.0;

[…]

content: ListView {
  […]

  model: NoSelection {
    model: Gio.ListStore {
      item-type: typeof<$Pet>;

      $Pet {
        name: "Herman";
        species: cat;
      }

      $Pet {
        name: "Saartje";
        species: dog;
      }

      $Pet {
        name: "Sofie";
        species: dog;
      }

      $Pet {
        name: "Rex";
        species: dog;
      }

      $Pet {
        name: "Lady";
        species: dog;
      }

      $Pet {
        name: "Lieke";
        species: dog;
      }

      $Pet {
        name: "Grumpy";
        species: cat;
      }
    };
  };
};

Sorting

To easily parse the list, the pets should be sorted by both name and species.

To implement this, the Gio.ListStore has to be wrapped in a Gtk.SortListModel which has a Gtk.MultiSorter with two sorters, a Gtk.NumericSorter and a Gtk.StringSorter.

Both of these need an expression: the property that needs to be compared.

The Gtk.NumericSorter expects an integer, not a Species, so the app needs a helper method to convert it:

class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _species_to_int(self, _obj: Any, species: Species) -> int:
        return int(species)
model: NoSelection {
  model: SortListModel {
    sorter: MultiSorter {
      NumericSorter {
        expression: expr $_species_to_int(item as <$Pet>.species) as <int>;
      }

      StringSorter {
        expression: expr item as <$Pet>.name;
      }
    };

    model: Gio.ListStore { […] };
  };
};

To learn more about closures, such as the one used in the Gtk.NumericSorter, consider reading my previous blog post.

Search

To look up pets even faster, the user should be able to search for them by both their name and species.

Filtering

First, the Gtk.ListView‘s model needs the logic to filter the list by name or species.

This can be done with a Gtk.FilterListModel which has a Gtk.AnyFilter with two Gtk.StringFilters.

One of the Gtk.StringFilters expects a string, not a Species, so the app needs another helper method to convert it:

class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _species_to_string(self, _obj: Any, species: Species) -> str:
        return species.value_nick
model: NoSelection {
  model: FilterListModel {
    filter: AnyFilter {
      StringFilter {
        expression: expr item as <$Pet>.name;
      }

      StringFilter {
        expression: expr $_species_to_string(item as <$Pet>.species) as <string>;
      }
    };

    model: SortListModel { […] };
  };
};

Entry

To actually search with the filters, the app needs a Gtk.SearchBar with a Gtk.SearchEntry.

The Gtk.SearchEntry‘s text property needs to be bound to the Gtk.StringFilters’ search properties to filter the list on demand.

To be able to start searching by typing from anywhere in the window, the Gtk.SearchEntry‘s key-capture-widget has to be set to the window, in this case the template itself:

content: Adw.ToolbarView {
  […]

  [top]
  SearchBar {
    key-capture-widget: template;

    child: SearchEntry search_entry {
      hexpand: true;
      placeholder-text: _("Search pets");
    };
  }

  content: ScrolledWindow {
    child: ListView {
      […]

      model: NoSelection {
        model: FilterListModel {
          filter: AnyFilter {
            StringFilter {
              search: bind search_entry.text;
              […]
            }

            StringFilter {
              search: bind search_entry.text;
              […]
            }
          };

          model: SortListModel { […] };
        };
      };
    };
  };
};

Toggle Button

The Gtk.SearchBar should also be toggleable with a Gtk.ToggleButton.

To do so, the Gtk.SearchEntry‘s search-mode-enabled property should be bidirectionally bound to the Gtk.ToggleButton‘s active property:

content: Adw.ToolbarView {
  [top]
  Adw.HeaderBar {
    [start]
    ToggleButton search_button {
      icon-name: "edit-find-symbolic";
      tooltip-text: _("Search");
    }
  }

  [top]
  SearchBar {
    search-mode-enabled: bind search_button.active bidirectional;
    […]
  }

  […]
};

The search_button should also be toggleable with a shortcut, which can be added with a Gtk.ShortcutController:

[start]
ToggleButton search_button {
  […]

  ShortcutController {
    scope: managed;

    Shortcut {
      trigger: "<Control>f";
      action: "activate";
    }
  }
}

Empty State

Last but not least, the view should fall back to an Adw.StatusPage if there are no search results.

This can be done with a closure for the visible-child-name property in an Adw.ViewStack or Gtk.Stack. I generally prefer an Adw.ViewStack due to its animation curve.

The closure takes the amount of items in the Gtk.NoSelection as input, and returns the correct Adw.ViewStackPage name:

class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, items: int) -> str:
        return "content" if items else "empty"
content: Adw.ToolbarView {
  […]

  content: Adw.ViewStack {
    visible-child-name: bind $_get_visible_child_name(selection_model.n-items) as <string>;
    enable-transitions: true;

    Adw.ViewStackPage {
      name: "content";

      child: ScrolledWindow {
        child: ListView {
          […]

          model: NoSelection selection_model { […] };
        };
      };
    }

    Adw.ViewStackPage {
      name: "empty";

      child: Adw.StatusPage {
        icon-name: "edit-find-symbolic";
        title: _("No Results Found");
        description: _("Try a different search");
      };
    }
  };
};

End Result

from typing import Any

from gi.repository import Adw, GObject, Gtk


class Species(GObject.GEnum):
    """The species of an animal."""

    NONE = 0
    CAT = 1
    DOG = 2


@Gtk.Template.from_resource("/org/example/Pets/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, items: int) -> str:
        return "content" if items else "empty"

    @Gtk.Template.Callback()
    def _species_to_string(self, _obj: Any, species: Species) -> str:
        return species.value_nick

    @Gtk.Template.Callback()
    def _species_to_int(self, _obj: Any, species: Species) -> int:
        return int(species)


class Pet(GObject.Object):
    """Data about a pet."""

    __gtype_name__ = "Pet"

    name = GObject.Property(type=str)
    species = GObject.Property(type=Species, default=Species.NONE)
using Gtk 4.0;
using Adw 1;
using Gio 2.0;

template $Window: Adw.ApplicationWindow {
  title: _("Pets");
  default-width: 450;
  default-height: 450;

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {
      [start]
      ToggleButton search_button {
        icon-name: "edit-find-symbolic";
        tooltip-text: _("Search");

        ShortcutController {
          scope: managed;

          Shortcut {
            trigger: "<Control>f";
            action: "activate";
          }
        }
      }
    }

    [top]
    SearchBar {
      key-capture-widget: template;
      search-mode-enabled: bind search_button.active bidirectional;

      child: SearchEntry search_entry {
        hexpand: true;
        placeholder-text: _("Search pets");
      };
    }

    content: Adw.ViewStack {
      visible-child-name: bind $_get_visible_child_name(selection_model.n-items) as <string>;
      enable-transitions: true;

      Adw.ViewStackPage {
        name: "content";

        child: ScrolledWindow {
          child: ListView {
            styles [
              "navigation-sidebar",
            ]

            factory: BuilderListItemFactory {
              template ListItem {
                child: Label {
                  halign: start;
                  label: bind template.item as <$Pet>.name;
                };
              }
            };

            model: NoSelection selection_model {
              model: FilterListModel {
                filter: AnyFilter {
                  StringFilter {
                    expression: expr item as <$Pet>.name;
                    search: bind search_entry.text;
                  }

                  StringFilter {
                    expression: expr $_species_to_string(item as <$Pet>.species) as <string>;
                    search: bind search_entry.text;
                  }
                };

                model: SortListModel {
                  sorter: MultiSorter {
                    NumericSorter {
                      expression: expr $_species_to_int(item as <$Pet>.species) as <int>;
                    }

                    StringSorter {
                      expression: expr item as <$Pet>.name;
                    }
                  };

                  model: Gio.ListStore {
                    item-type: typeof<$Pet>;

                    $Pet {
                      name: "Herman";
                      species: cat;
                    }

                    $Pet {
                      name: "Saartje";
                      species: dog;
                    }

                    $Pet {
                      name: "Sofie";
                      species: dog;
                    }

                    $Pet {
                      name: "Rex";
                      species: dog;
                    }

                    $Pet {
                      name: "Lady";
                      species: dog;
                    }

                    $Pet {
                      name: "Lieke";
                      species: dog;
                    }

                    $Pet {
                      name: "Grumpy";
                      species: cat;
                    }
                  };
                };
              };
            };
          };
        };
      }

      Adw.ViewStackPage {
        name: "empty";

        child: Adw.StatusPage {
          icon-name: "edit-find-symbolic";
          title: _("No Results Found");
          description: _("Try a different search");
        };
      }
    };
  };
}

List models are pretty complicated, but I hope that this example provides a good idea of what’s possible from Blueprint, and is a good stepping stone to learn more.

Thanks for reading!

PS: a shout out to Markus for guessing what I’d write about next ;)

Hari Rana

@theevilskeleton

It’s True, “We” Don’t Care About Accessibility on Linux

Introduction

What do concern trolls and privileged people without visible or invisible disabilities who share or make content about accessibility on Linux being trash without contributing anything to projects have in common? They don’t actually really care about the group they’re defending; they just exploit these victims’ unfortunate situation to fuel hate against groups and projects actually trying to make the world a better place.

I never thought I’d be this upset to a point I’d be writing an article about something this sensitive with a clickbait-y title. It’s simultaneously demotivating, unproductive, and infuriating. I’m here writing this post fully knowing that I could have been working on accessibility in GNOME, but really, I’m so tired of having my mood ruined because of privileged people spending at most 5 minutes to write erroneous posts and then pretending to be oblivious when confronted while it takes us 5 months of unpaid work to get a quarter of recognition, let alone acknowledgment, without accounting for the time “wasted” addressing these accusations. This is far from the first time, and it will certainly not be the last.

I’m Not Angry

I’m not mad. I’m absolutely furious and disappointed in the Linux Desktop community for being quiet in regards to any kind of celebration to advancing accessibility, while proceeding to share content and cheer for random privileged people from big-name websites or social media who have literally put a negative amount of effort into advancing accessibility on Linux. I’m explicitly stating a negative amount because they actually make it significantly more stressful for us.

None of this is fair. If you’re the kind of person who stays quiet when we celebrate huge accessibility milestones, yet shares (or even makes) content that trash talks the people directly or indirectly contributing to the fucking software you use for free, you are the reason why accessibility on Linux is shit.

No one in their right mind wants to volunteer in a toxic environment where their efforts are hardly recognized by the public and they are blamed for “not doing enough”, especially when they are expected to take in all kinds of harassment, nonconstructive criticism, and slander for a salary of 0$.

There’s only one thing I am shamefully confident about: I am not okay in the head. I shouldn’t be working on accessibility anymore. The recognition-to-smearing ratio is unbearably low and arguably unhealthy, but leaving people in unfortunate situations behind is also not in accordance with my values.

I’ve been putting so much effort, quite literally hundreds of hours, into:

  1. thinking of ways to come up with inclusive designs and experiences;
  2. imagining how I’d use something if I had a certain disability or condition;
  3. asking for advice and feedback from people with disabilities;
  4. not getting paid from any company or organization; and
  5. making sure that all the accessibility-related work is in the public, and stays in the public.

Number 5 is especially important to me. I personally go as far as to refuse to contribute to projects under a permissive license, and/or that utilize a contributor license agreement, and/or that utilize anything riskily similar to these two, because I am of the opinion that no amount of code for accessibility should either be put under a paywall or be obscured and proprietary.

Permissive licenses make it painlessly easy for abusers to fork, build an ecosystem on top of it which may include accessibility-related improvements, slap a price tag alongside it, all without publishing any of these additions/changes. Corporations have been doing that for decades, and they’ll keep doing it until there’s heavy push back. The only time I would contribute to a project under a permissive license is when the tool is the accessibility infrastructure itself. Contributor license agreements are significantly worse in that regard, so I prefer to avoid them completely.

The Truth Nobody Is Telling You

KDE hired a legally blind contractor to work on accessibility throughout the KDE ecosystem, including complying with the EU Directive to allow selling hardware with Plasma.

GNOME’s new executive director, Steven Deobald, is partially blind.

The GNOME Foundation has been investing a lot of money to improve accessibility on Linux, for example funding Newton, a Wayland accessibility project and AccessKit integration into GNOME technologies. Around 250,000€ (1/4) of the STF budget was spent solely on accessibility. And get this: literally everybody managing these contracts and communication with funders are volunteers; they’re ensuring people with disabilities earn a living, but aren’t receiving anything in return. These are the real heroes who deserve endless praise.

The Culprits

Do you want to know who we should be blaming? Profiteers who are profiting from the community’s effort while investing very little to nothing into accessibility.

This includes a significant portion of the companies sponsoring GNOME and even companies that employ developers to work on GNOME. These companies are the ones making hundreds of millions, if not billions, in net profit indirectly from GNOME (and other free and open-source projects), and investing little to nothing into them. However, the worst offenders are the companies actively using GNOME without ever donating anything to fund the projects.

Some companies actually do put in an effort, like Red Hat and Igalia. Red Hat employs people with disabilities to work on accessibility in GNOME, one of whom I actually rely on when making accessibility-related contributions in GNOME. Igalia funds Orca, the screen reader that is part of GNOME, which is something the Linux community should be thankful for. However, companies have historically invested what’s necessary to comply with governments’ accessibility requirements, and then never invested in it again.

The privileged people who keep sharing and making content around accessibility on Linux being bad without contributing anything to it are, in my opinion, significantly worse than the companies profiting off of GNOME. Companies are and stay quiet, but these privileged people add an additional burden to contributors by either trash talking or sharing trash talkers. Once again, no volunteer deserves to be in the position of being shamed and ridiculed for “not doing enough”, since no one is entitled to their free time, but themselves.

My Work Is Free but the Worth Is Not

Earlier in this article, I mentioned, and I quote: “I’ve been putting so much effort, quite literally hundreds of hours […]”. Let’s put an emphasis on “hundreds”. Here’s a list of most accessibility-related merge requests that have been incorporated into GNOME:

GNOME Calendar’s !559 addresses an issue where event widgets were unable to be focused and activated by the keyboard. That had been present since the very beginning of GNOME Calendar’s existence, to be specific: for more than a decade. This alone was a two-week effort. Despite it being less than 100 lines of code, nobody truly knew what to do to have them working properly before. This was followed up by !576, which made the event buttons usable in the month view with a keyboard, and then !587, which properly conveys the states of the widgets. Both combined are another two-week effort.

Then, at the time of writing this article, !564 adds 640 lines of code, which is something I’ve been volunteering on for more than a month, excluding the time before I opened the merge request.

Let’s do a little bit of math together with ‘only’ !559, !576, and !587. Just as a reminder: these three merge requests are a four-week effort in total, which I volunteered full-time—8 hours a day, or 160 hours a month. I compiled a small table that illustrates its worth:

| Country | Average Wage for Professionals Working on Digital Accessibility (WebAIM) | Total in Local Currency (160 hours) | Exchange Rate | Total (CAD) |
|---|---|---|---|---|
| Canada | 58.71$ CAD/hour | 9,393.60$ CAD | N/A | 9,393.60$ |
| United Kingdom | 48.20£ GBP/hour | 7,712£ GBP | 1.8502 | 14,268.74$ |
| United States of America | 73.08$ USD/hour | 11,692.80$ USD | 1.3603 | 15,905.72$ |

To summarize the table: those three merge requests that I worked on for free were worth 9,393.60$ CAD (6,921.36$ USD) in total at a minimum.

Just a reminder:

  • these merge requests exclude the time spent to review the submitted code;
  • these merge requests exclude the time I spent testing the code;
  • these merge requests exclude the time we spent coordinating these milestones;
  • these calculations exclude the 30+ merge requests submitted to GNOME; and
  • these calculations exclude the merge requests I submitted to third-party GNOME-adjacent apps.

Now just imagine how I feel when I’m told I’m “not doing enough”, either directly or indirectly, by privileged people who don’t rely on any of these accessibility features. Whenever anybody says we’re “not doing enough”, I feel very much included, and I will absolutely take it personally.

It All Trickles Down to “GNOME Bad”

I fully expect everything I say in this article to be dismissed or be taken out of context on the basis of ad hominem, simply by the mere fact I’m a GNOME Foundation member / regular GNOME contributor. Either that, or be subject to whataboutism because another GNOME contributor made a comment that had nothing to do with mine but ‘is somewhat related to this topic and therefore should be pointed out just because it was maybe-probably-possibly-perhaps ableist’. I can’t speak for other regular contributors, but I presume that they don’t feel comfortable talking about this because they dared be a GNOME contributor. At least, that’s how I felt for the longest time.

Any content related to accessibility that doesn’t dunk on GNOME doesn’t see as much engagement, activity, and reaction as content that actively attacks GNOME, regardless of whether the criticism is fair. Many of these people don’t even use these accessibility features; they’re just looking for every opportunity to say “GNOME bad” and will 🪄 magically 🪄 start caring about accessibility.

Regular GNOME contributors like myself don’t always feel comfortable defending ourselves because dismissing GNOME developers just for being GNOME developers is apparently a trend…

Final Word

Dear people with disabilities,

I won’t insist that we’re either your allies or your enemies—I have no right to claim that whatsoever.

I wasn’t looking for recognition. I wasn’t looking for acknowledgment from the very beginning either. I thought I would be perfectly capable of quietly improving accessibility in GNOME, but because of the overall community’s persistence in smearing developers’ efforts without actually tackling the underlying issues within the stack, I think I’m justified in at least demanding acknowledgment from the wider community.

I highly doubt it will happen anyway, because the Linux community feeds off of drama and trash talking instead of being productive, without realizing that this demotivates active contributors while pushing away potential ones. And worst of all: people with disabilities are the ones affected the most, because they are misled into thinking that we don’t care.

It’s so unfair and infuriating that all the work I do and share online gains very little engagement compared to random posts and articles from privileged people without disabilities ranting about the Linux desktop’s accessibility being trash. It doesn’t help that I become severely anxious when sharing accessibility-related work, for fear of coming across as virtue signalling. The last thing I want is to (unintentionally) give the impression of pretending to care about accessibility.

I beg you, please keep writing banger posts like fireborn’s I Want to Love Linux. It Doesn’t Love Me Back series and their interlude post. We need more people with disabilities to keep reminding developers that you exist and that your conditions and disabilities are a spectrum, not absolutes.

We simultaneously need more interest from people with disabilities to contribute to free and open-source software, and the wider community to be significantly more intolerant of bullies who profit from smearing and demotivating people who are actively trying.

We should take inspiration from “Accessibility on Linux sucks, but GNOME and KDE are making progress” by OSNews. They acknowledge that accessibility on Linux is suboptimal while recognizing the efforts of GNOME and KDE. As a community, we should promote progress more often.

Jamie Gravendeel

@monster

Data Driven UI With Closures

It’s highly recommended to read my previous blog post first to understand some of the topics discussed here.

UI can be hard to keep track of when changed imperatively; preferably, it just follows the code’s state. Closures provide an intuitive way to do so by having data as input and the desired value as output. They couple data with UI, but decouple the specific piece of UI that’s changed, making closures very modular. The example in this post uses Python and Blueprint.

Technicalities

First, it’s good to be familiar with the technical details behind closures. To quote from Blueprint’s documentation:

Expressions are only reevaluated when their inputs change. Because Blueprint doesn’t manage a closure’s application code, it can’t tell what changes might affect the result. Therefore, closures must be pure, or deterministic. They may only calculate the result based on their immediate inputs, not properties of their inputs or outside variables.

To elaborate, expressions know when their inputs have changed due to the inputs being GObject properties, which emit the “notify” signal when modified.
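As a tiny standalone illustration (the Loader class here is made up and not part of the example later in this post), assigning a GObject property is exactly what emits that signal:

from gi.repository import GObject


class Loader(GObject.Object):
    """Toy object whose property could serve as a closure input."""

    loading = GObject.Property(type=bool, default=True)


loader = Loader()
# "notify::loading" fires on every assignment, which is how a bound
# expression knows that its input changed and must be reevaluated.
loader.connect("notify::loading", lambda obj, _pspec: print(obj.loading))
loader.loading = False  # prints: False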

Another thing to note is where casting is necessary. To again quote Blueprint’s documentation:

Blueprint doesn’t know the closure’s return type, so closure expressions must be cast to the correct return type using a cast expression.

Just like Blueprint doesn’t know about the return type, it also doesn’t know the type of ambiguous properties. To provide an example:

Button simple_button {
  label: _("Click");
}

Button complex_button {
  child: Adw.ButtonContent {
    label: _("Click");
  };
}

Getting the label of simple_button in a lookup does not require a cast, since label is a known property of Gtk.Button with a known type:

simple_button.label

While getting the label of complex_button does require a cast, since child is of type Gtk.Widget, which does not have the label property:

complex_button.child as <Adw.ButtonContent>.label

Example

To set the stage, there’s a window with a Gtk.Stack which has two Gtk.StackPages, one for the content and one for the loading view:

from gi.repository import Adw, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack {
      StackPage {
        name: "content";

        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";

        child: Adw.Spinner {};
      }
    };
  };
}

Switching Views Conventionally

One way to manage the views would be to rely on signals to communicate when another view should be shown:

from typing import Any

from gi.repository import Adw, GObject, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    stack: Gtk.Stack = Gtk.Template.Child()

    loading_finished = GObject.Signal()

    @Gtk.Template.Callback()
    def _show_content(self, *_args: Any) -> None:
        self.stack.set_visible_child_name("content")

A reference to the stack has been added, as well as a signal to communicate when loading has finished, and a callback to run when that signal is emitted.

using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");
  loading-finished => $_show_content();

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack stack {
      StackPage {
        name: "content";

        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";

        child: Adw.Spinner {};
      }
    };
  };
}

A signal handler has been added, as well as a name for the Gtk.Stack.

Only a couple of changes had to be made to switch the view when loading has finished, but all of them are sub-optimal:

  1. A reference in the code to the stack would be nice to avoid
  2. Imperatively changing the view makes following state harder
  3. This approach doesn’t scale well when the data can be reloaded; it would require yet another signal to be added

Switching Views With a Closure

To use a closure, the class needs data as input and a method to return the desired value:

from typing import Any

from gi.repository import Adw, GObject, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    loading = GObject.Property(type=bool, default=True)

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, loading: bool) -> str:
        return "loading" if loading else "content"

The signal has been replaced with the loading property, and the template callback has been replaced by a method that returns a view name depending on the value of that property. _obj here is the template class, which is unused.

using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack {
      visible-child-name: bind $_get_visible_child_name(template.loading) as <string>;

      StackPage {
        name: "content";

        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";

        child: Adw.Spinner {};
      }
    };
  };
}

In Blueprint, the signal handler has been removed, as well as the unnecessary name for the Gtk.Stack. The visible-child-name property is now bound to a closure, which takes in the loading property referenced with template.loading.
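The only imperative step left is flipping that property once loading finishes. A minimal sketch inside the same Window class, assuming some _on_data_loaded callback (that name is made up and not part of the example):

    def _on_data_loaded(self, *_args: Any) -> None:
        # Assigning the property emits "notify::loading", so the bound
        # closure reruns and the stack switches to the "content" page.
        self.loading = False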

This fixed the issues mentioned before:

  1. No reference in code is required
  2. State is bound to a single property
  3. If the data reloads, the view will also adapt

Closing Thoughts

Views are just one UI element that can be managed with closures, but there are plenty of other elements that should adapt to data: think of icons, tooltips, visibility, etc. Whenever you’re writing a widget with moving parts and data, think about how the two can be linked; your future self will thank you!

Victor Ma

@victorma

A strange bug

In the last two weeks, I’ve been trying to fix a strange bug that causes the word suggestions list to have the wrong order sometimes.

For example, suppose you have an empty 3x3 grid. Now suppose that you move your cursor to each of the cells of the 1-Across slot (labelled α, β, and γ).

+---+---+---+
| α | β | γ |
+---+---+---+
|   |   |   |
+---+---+---+
|   |   |   |
+---+---+---+

You should expect the word suggestions list for 1-Across to stay the same, regardless of which cell your cursor is on. After all, all three cells have the same information: that the 1-Across slot is empty, and the intersecting vertical slot of whatever cell we’re on (1-Down, 2-Down, or 3-Down) is also empty.

There are no restrictions whatsoever, so all three cells should show the same word suggestion list: one that includes every three-letter word.

But that’s not what actually happens. In reality, the word suggestions list changes quite dramatically. The order of the list definitely changes. And it looks like there may even be words in one list that don’t appear in another. What’s going on here?

Understanding the code

My first step was to understand how the code for the word suggestions list works. I took notes along the way, in order to solidify my understanding. I especially found it useful to create diagrams for the word list resource (a pre-compiled resource that the code uses):

Word list resource diagram

By the end of the first week, I had a good idea of how the word-suggestions-list code works. The next step was to figure out the cause of the bug and how to fix it.

Investigating the bug

After doing some testing, I realized that the seemingly random orderings of the lists are not so random after all! The lists are actually all in alphabetical order—but based on the letter that corresponds to the cell, not necessarily the first letter.

What I mean is this:

  • The word suggestions list for cell α is sorted alphabetically by the first letter of the words. (This is normal alphabetical order.) For example:
    ALE, AXE, BAY, BOA, CAB
    
  • The word suggestions list for cell β is sorted alphabetically by the second letter of the words. For example:
    CAB, BAY, ALE, BOA, AXE
    
  • The word suggestions list for cell γ is sorted alphabetically by the third letter of the words. For example:
    BOA, CAB, ALE, AXE, BAY
    

Fixing the bug

The cause of the bug is quite simple: The function that generates the word suggestions list does not sort the list before it returns it. So the order of the list is whatever order the function added the words in. And because of how our implementation works, that order happens to be alphabetical, based on the letter that corresponds to the cell.
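Purely as an illustration (this is not the Crosswords code), sorting on each word’s tail starting at the cell’s position reproduces the three orderings listed above:

words = ["ALE", "AXE", "BAY", "BOA", "CAB"]

for position, cell in enumerate("αβγ"):
    # Sort by the letters from the cell's position onwards.
    print(cell, sorted(words, key=lambda word: word[position:]))

# α ['ALE', 'AXE', 'BAY', 'BOA', 'CAB']
# β ['CAB', 'BAY', 'ALE', 'BOA', 'AXE']
# γ ['BOA', 'CAB', 'ALE', 'AXE', 'BAY']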

The fix for the bug is also quite simple—at least theoretically. All we need to do is sort the list before we return it. But in reality, this fix runs into some other problems that need to be addressed. Those problems are what I’m going to work on this week.

Jussi Pakkanen

@jpakkane

A custom C++ standard library part 4: using it for real

Writing your own standard library is all fun and games until someone (which is to say yourself) asks the important question: could this be actually used for real? Theories and opinions can be thrown about the issue pretty much forever, but the only way to actually know for sure is to do it.

Thus I converted CapyPDF, a fairly compact 15k LoC codebase, from the C++ standard library to Pystd, which is about 4k lines. All functionality is still the same, which is to say that the test suite passes; there are most likely new bugs that the tests do not catch. For those wanting to replicate the results themselves, clone the CapyPDF repo, switch to the pystdport branch and start building. Meson will automatically download and set up Pystd as a subproject. The code is fairly bleeding edge and only works on Linux with GCC 15.1.

Build times

One of the original reasons for starting Pystd was being annoyed at STL compile times. Let's see if we succeeded in improving on them. Build times when using only one core in debug look like this.

When optimizations are enabled the results look like this:

In both cases the Pystd version compiles in about a quarter of the time.

Binary size

C++ gets a lot of valid criticism for creating bloated code. How much of that is due to the language as opposed to the library?

That's quite unexpected. The debug info for STL types seems to take an extra 20 megabytes. But how about the executable code itself?

STL is still 200 kB bigger. Based on observations most of this seems to come from libstdc++'s implementation of variant. Note that if you try this yourself the Pystd version is probably 100 kB bigger, because by default the build setup links against libsupc++, which adds 100+ kB to binary sizes, whereas linking against the main C++ runtime library does not.

Performance

Ok, fine, so we can implement basic code to build faster and take less space. Fine. But what about performance? That is the main thing that matters after all, right? CapyPDF ships with a simple benchmark program. Let's look at its memory usage first.

Apologies for the Y-axis not starting at zero. I tried my best to make it happen, but LibreOffice Calc said no. In any case the outcome itself is expected. Pystd has not seen any performance optimization work, so the fact that it requires 10% more memory is tolerable. But what about the actual runtime itself?

This is unexpected to say the least. A reasonable result would have been to be only 2x slower than the standard library, but the code ended up being almost 25% faster. This is even stranger considering that Pystd's containers do bounds checks on all accesses, the UTF-8 parsing code sometimes validates its input twice, the hashing algorithm is a simple multiply-and-xor and so on. Pystd should be slower, and yet, in this case at least, it is not.

I have no explanation for this. It is expected that Pystd will start performing (much) worse as the data set size grows but that has not been tested.

Status update, 15/06/2025

This month I created a personal data map where I tried to list all my important digital identities.

(It’s actually now a spreadsheet, which I’ll show you later. I didn’t want to start the blog post with something as dry as a screenshot of a spreadsheet.)

Anyway, I made my personal data map for several reasons.

The first reason was to stay safe from cybercrime. In a world of increasing global unfairness and inequality, of course crime and scams are increasing too. Schools don’t teach how digital tech actually works, so it’s a great time to be a cyber criminal. Imagine being a house burglar in a town where nobody knows how doors work.

Lucky for me, I’m a professional door guy. So I don’t worry too much beyond having a really really good email password (it has numbers and letters). But it’s useful to double-check whether I have my credit card details on a site where the password is still “sam2003”.

The second reason is to help me migrate to services based in Europe. Democracy over here is what it is; there are good days and bad days, but unlike the USA we at least have more options than a repressive death cult and a fundraising business. (Shout out to @angusm@mastodon.social for that one.) You can’t completely own your digital identity and your data, but you can at least try to keep it close to home.

The third reason was to see who has the power to influence my online behaviour.

This was an insight from reading the book Technofeudalism. I’ve always been uneasy about websites tracking everything I do. Most of us are, to the point that we have made myths like “your phone microphone is always listening so Instagram can target adverts”. (As McSweeney’s Internet Tendency confirms, it’s not! It’s just tracking everything you type, every app you use, every website you visit, and everywhere you go in the physical world).

I used to struggle to explain why all that tracking feels bad. Technofeudalism frames a concept of cloud capital, saying this is now more powerful than other kinds of capital because cloud capitalists can do something Henry Ford, Walt Disney and The Monopoly Guy can only dream of: mine their data stockpile to produce precisely targeted recommendations, search bubbles and adverts which can influence your behaviour before you’ve even noticed.

This might sound paranoid when you first hear it, but consider how social media platforms reward you for expressing anger and outrage. Remember the first time you saw a post on Twitter from a stranger that you disagreed with? And your witty takedown attracted likes and praise? This stuff can be habit-forming.

In the 20th century, ad agencies changed people’s buying patterns and political views using billboards, TV channels and newspapers. But all that is like a primitive blunderbuss compared to recommendation algorithms, feedback loops and targeted ads on social media and video apps.

I lived through the days when a web search for “Who won the last election” would just return 10 pages that included the word “election”. (If you’re nostalgic for those days… you’ll be happy to know that GNOME’s desktop search engine still works like that today! :-) I can spot when apps try to ‘nudge’ me with dark patterns. But kids aren’t born with that skill, and they aren’t necessarily going to understand the nature of Tech Billionaire power unless we help them to see it. We need a framework to think critically and discuss the power that Meta, Amazon and Microsoft have over everyone’s lives. Schools don’t teach how digital tech actually works, but maybe a “personal data map” can be a useful teaching tool?

By the way, here’s what my cobbled-together “Personal data map” looks like, taking into account security, what data is stored and who controls it. (With some fake data… I don’t want this blog post to be a “How to steal my identity” guide.)

Name | Risks | Sensitivity rating | Ethical rating | Location | Controller | First factor | Second factor | Credentials cached? | Data stored
Bank account | Financial loss | 10 | 2 | Europe | Bank | Fingerprint | None | On phone | Money, transactions
Instagram | Identity theft | 5 | -10 | USA | Meta | Password | Email | On phone | Posts, likes, replies, friends, views, time spent, locations, searches.
Google Mail (sam@gmail.com) | Reset passwords | 9 | -5 | USA | Google | Password | None | Yes – cookies | Conversations, secrets
Github | Impersonation | 3 | 3 | USA | Microsoft | Password | OTP | Yes – cookies | Credit card, projects, searches.

How is it going migrating off USA based cloud services?

“The internet was always a project of US power”, says Paris Marx in a keynote at the PublicSpaces conference, which I had never heard of before.

Closing my Amazon account took an unnecessary number of steps, and it was sad to say goodbye to the list of 12 different addresses I called home at various times since 2006, but I don’t miss it; I’ve been avoiding Amazon for years anyway. When I need English-language books, I get them from an Irish online bookstore named Kenny’s. (Ireland, cleverly, did not leave the EU, so they can still ship books to Spain without incurring import taxes).

Dropbox took a while because I had years of important stuff in there. I actually don’t think they’re too bad of a company, and it was certainly quick to delete my account. (And my data… right? You guys did delete all my data?).

I was using Dropbox to sync notes with the Joplin notes app, and switched to the paid Joplin Cloud option, which seems a nice way to support a useful open source project.

I still needed a way to store sensitive data, and realized I have access to Protondrive. I can’t recommend that as a service because the parent company Proton AG don’t seem so serious about Linux support, but I got it to work thanks to some heroes who added a protondrive backend to rclone.

Instead of using Google cloud services to share photos, and to avoid anything so primitive as an actual cable, I learned that KDE Connect can transfer files from my Android phone to my laptop really neatly. KDE Connect is really good. On the desktop I use GSConnect, which integrates with GNOME Shell really well. I think I’ve not been so impressed by a volunteer-driven open source project in years. Thanks to everyone who worked on these great apps!

I also migrated my VPS from the US-based host Tornado VPS to one in Europe. Tornado VPS (formerly prgmr.com) are a great company, but storing data in the USA doesn’t seem like the way forwards.

That’s about it so far. Feels a bit better.

What’s next?

I’m not sure what’s next!

I can’t leave Github and Gitlab.com, but my days of “Write some interesting new code and push it straight to Github” are long gone. I didn’t sign up to train somebody else’s LLM for free, and neither should you. (I’m still interested in sharing interesting code with nice people, of course, but let’s not make it so easy for Corporate America to take our stuff without credit or compensation. Bring back the “sneakernet“!)

Leaving Meta platforms and dropping YouTube doesn’t feel directly useful. It’s like individually renouncing debit cards, or air travel: a lot of inconvenience for you, but the business owners don’t even notice. The important thing is to use the alternatives more. Hence why I still write a blog in 2025 and mostly read RSS feeds and the Fediverse. Gigs where I live are mostly only promoted on Instagram, but I’m sure that’s temporary.

In the first quarter of 2025, rich people put more money into AI startups than everything else put together (see: Pivot to AI). Investors love a good bubble, but there’s also an element of power here.

If programmers only know how to write code using Copilot, then whoever controls Microsoft has the power to decide what code we can and can’t write. (Currently this seems limited to not using the word ‘gender’, but I can imagine a future where it catches you reverse-engineering proprietary software, jailbreaking locked-down devices, or trying to write a new BitTorrent client.)

If everyone gets their facts from ChatGPT, then whoever controls OpenAI has the power to tweak everyone’s facts, an ability that is currently limited only to presidents of major world superpowers. If we let ourselves avoid critical thinking and rely on ChatGPT to generate answers to hard questions instead, which teachers say is very much exactly what’s happening in schools now… then what?

Toluwaleke Ogundipe

@toluwalekeog

Hello GNOME and GSoC!

I am delighted to announce that I am contributing to GNOME Crosswords as part of the Google Summer of Code 2025 program. My project primarily aims to add printing support to Crosswords, with some additional stretch goals. I am being mentored by Jonathan Blandford, Federico Mena Quintero, and Tanmay Patil.

The Days Ahead

During my internship, I will be refactoring the puzzle rendering code to support existing and printable use cases, adding clues to rendered puzzles, and integrating a print dialog into the game and editor with crossword-specific options. Additionally, I should implement an ipuz2pdf utility to render puzzles in the IPUZ format to PDF documents.

Beyond the internship, I am glad to be a member of the GNOME community and look forward to so much more. In the coming weeks, I will be sharing updates about my GSoC project and other contributions to GNOME. If you are interested in my journey with GNOME and/or how I got into GSoC, I implore you to watch out for a much longer post coming soon.

Appreciation

Many thanks to Hans Petter Jansson, Federico Mena Quintero and Jonathan Blandford, who have all played major roles in my journey with GNOME and GSoC. 🙏❤

Taking out the trash, or just sweeping it under the rug? A story of leftovers after removing files

There are many things that we take for granted in this world, and one of them is undoubtedly the ability to clean up your files - imagine a world where you can't just throw away all those disk-space-hungry things that you no longer find useful. Though that might sound impossible, it turns out some people have encountered a particularly interesting bug that resulted in the Trash being silently swept under the rug instead of emptied in Nautilus. Since I was blessed to run into that issue myself, I decided to fix it and shed some light on the fun.

Trash after emptying in Nautilus, are the files really gone?


It all started with a 2009 Ubuntu Launchpad ticket reported against Nautilus. The user found 70 GB worth of files in the ~/.local/share/Trash/expunged directory using a disk analyzer, even though they had emptied the trash with the graphical interface. They did realize the offending files belonged to another user; however, they couldn't reproduce it easily at first. After all, when you try to move to trash a file or a directory not belonging to you, you would usually be correctly informed that you don't have the necessary permissions, and perhaps even be offered the option to permanently delete them instead. So what was so special about this case?

First, let's get a better view of when we can and when we can't permanently delete files, something that is done at the end of a successful trash-emptying operation. We'll focus only on the owners of the relevant files, since other factors, such as file read/write/execute permissions, can be adjusted freely by their owners, and that's what trash implementations will do for you. Here are the cases where you CAN delete files:

- when a file is in a directory owned by you, you can always delete it
- when a directory is in a directory owned by you and it's owned by you, you can obviously delete it
- when a directory is in a directory owned by you but you don't own it, and it's empty, you can surprisingly delete it as well

So to summarize: no matter who the owner of a file or a directory is, if it's in a directory owned by you, you can get rid of it. There is one exception to this - a directory you don't own must be empty; otherwise, you will be able to remove neither it nor the files it contains. Which takes us to an analogous list of cases where you CANNOT delete files:

- when a directory is in a directory owned by you but you don't own it, and it's not empty, you can't delete it.
- when a file is in a directory NOT owned by you, you can't delete it
- when a directory is in a directory NOT owned by you, you can't delete it either

In contrast with removing files in a directory you own, when you are not the owner of the parent directory, you cannot delete any of the child files and directories, without exception. This is actually the reason for the one case where you can't remove something from a directory you own - to remove a non-empty directory, you first need to recursively delete all of the files and directories it contains, and you can't do that if the directory is not owned by you.
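Those rules can be sketched as a small recursive check (purely illustrative, not the Nautilus or GLib code; it assumes the starting path sits in a directory you own and that foreign directories are at least readable):

import os


def can_delete(path: str) -> bool:
    """Can `path`, which sits in a directory we own, be permanently deleted?"""
    if not os.path.isdir(path) or os.path.islink(path):
        return True                      # plain files and symlinks: always
    entries = os.listdir(path)
    if os.lstat(path).st_uid != os.getuid():
        return not entries               # foreign directory: only if empty
    # A directory we own: deletable only if everything inside is deletable too
    return all(can_delete(os.path.join(path, entry)) for entry in entries)


# For the reproducer created later in this post, can_delete("test") is False.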

Now let's look inside the trash can, or rather at how it functions. The reason for separating the permanent deletion and trashing operations is obvious - users are expected to change their minds and be able to get their files back on a whim, so there's a need for a middle step. That's where the Trash specification comes in, providing a common way in which all "Trash can" implementations should store, list, and restore trashed files, even across different filesystems - the Nautilus Trash feature is one of the possible implementations. Trashing actually works by moving files to the $XDG_DATA_HOME/Trash/files directory and setting up some metadata to track their original location, so they can be restored if needed. Only when the user empties the trash are they actually deleted. Since it's all about moving files, specifically outside their previous parent directory (i.e. to the Trash), let's look at the cases where you CAN move files:

- when a file is in a directory owned by you, you can move it
- when a directory is in a directory owned by you and you own it, you can obviously move it

We can see that the only exception when moving files in a directory you own is when the directory you're moving doesn't belong to you, in which case you will be correctly informed that you don't have permission. In the remaining cases, users are able to move files and therefore trash them. Now what about the cases where you CANNOT move files?

- when a directory is in a directory owned by you but you don't own it, you can't move it
- when a file is in a directory NOT owned by you, you can't move it either
- when a directory is in a directory NOT owned by you, you still can't move it

In those cases Nautilus will either not expose the ability to trash files or will tell the user about the error, and the system works well - even if moving them were possible, permanently deleting files in a directory not owned by you is not supported anyway.

So, where's the catch? What are we missing? We've got two different operations that can succeed or fail under different circumstances: moving (trashing) and deleting. We need to find a situation where moving a file is possible but permanently deleting it is not, and such an overlap exists, as you can see by chaining the following two rules:

- when a directory A is in a directory owned by you and it's owned by you, you can obviously move it
- when a directory B is in a directory A owned by you, but you don't own B, and B is not empty, you can't delete it.

So a simple way to reproduce the issue was found, precisely:

mkdir -p test/root               # "test" is owned by you and contains "root"
touch test/root/file             # make "root" non-empty
sudo chown root:root test/root   # now "root" belongs to another user

Afterwards, trashing and emptying in Nautilus or with the gio trash command will result in the files not being deleted, but left behind in ~/.local/share/Trash/expunged, which gvfsd-trash uses as an intermediary during the emptying operation. The situations where that can happen are very rare, but they do exist - personally, I encountered this when manually cleaning up container files created by podman in ~/.local/share/containers, which arguably I shouldn't be doing in the first place, and should rather leave up to podman itself. Nevertheless, it's still possible from the user's perspective, and should be handled and prevented correctly. That's exactly what was done: a ticket was submitted and moved to the appropriate place, which turned out to be GLib itself, and I submitted an MR that was merged - now both Nautilus and gio trash recursively check for this case and prevent you from doing it. You can expect it in the next GLib release, 2.85.1.

On an ending note, I want to thank GLib maintainer Philip Withnall, who walked me through the required changes and reviewed them, and to ask you one thing: is your ~/.local/share/Trash/expunged really empty? :)

TIL that htop can display more useful metrics

A program on my Raspberry Pi was reading data from disk, performing operations, and writing the result back to disk. It did so at an unusually slow speed. The problem could be either that the CPU was too underpowered to perform the operations it needed, or that the disk was too slow at reading, writing, or both.

I asked colleagues for opinions, and one of them mentioned that htop could orient me in the right direction. The time a CPU spends waiting for an I/O device such as a disk is known as the I/O wait. If that wait time is higher than 10%, then the CPU spends a lot of time waiting for data from the I/O device, so the disk is likely the bottleneck. If the wait time remains low, then the CPU is likely the bottleneck.
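For a rough sense of the same number without configuring anything, the aggregate “cpu” line in /proc/stat exposes the cumulative iowait time (field order per proc(5)). Unlike htop this is not a live percentage, just a since-boot average; a small sketch:

def iowait_share() -> float:
    with open("/proc/stat") as stat:
        # First line: "cpu  user nice system idle iowait irq softirq ..."
        fields = [int(value) for value in stat.readline().split()[1:]]
    return fields[4] / sum(fields)  # index 4 is iowait


print(f"{iowait_share():.1%} of CPU time since boot was spent in I/O wait")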

By default htop doesn't show the wait time. By pressing F2 I can access htop's configuration. There I can use the right arrow to move to the Display options, select Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest), and press Space to enable it.

I can then press the left arrow to get back to the options menu, and move to Meters. Using the right arrow I can go to the rightmost column, select CPUs (1/1): all CPUs by pressing Enter, move it to one of the two columns, and press Enter when I'm done. With it still selected, I can press Enter to alternate through the different visualisations. The most useful to me is the [Text] one.

I can do the same with Disk IO to track the global read / write speed, and Blank to make the whole set-up more readable.

With htop configured like this, I can trigger my slow program again and see that the CPU is not waiting for the disk: all CPUs have a wa of 0%.

If you know more useful tools I should know about when chasing bottlenecks, or if you think I got something wrong, please email me at thib@ergaster.org!

This Week in GNOME

@thisweek

#204 Sending Packets

Update on what happened across the GNOME project in the week from June 06 to June 13.

GNOME Releases

Adrian Vovk announces

The GNOME Release team is pleased to announce that we have decided to move forward with the removal of GNOME’s X11 session. To that end, we have disabled the X11 session by default at compile time, and have released an early GNOME 49.alpha.0 to get this change into distributions like Fedora Rawhide. The feedback we hear back will inform our next steps. Please check out Jordan’s blog post for more details.

GNOME Core Apps and Libraries

Adrian Vovk reports

Core components of the GNOME desktop, like GDM and gnome-session, are actively undergoing modernizations that will increase GNOME’s dependency on systemd. To ensure that our downstreams are aware of this change and have time to prepare, the GNOME release team has written a blog post explaining what is changing, why, and how to adapt. Please see Adrian’s blog for details.

Glycin

Sandboxed and extendable image loading and editing.

Sophie 🏳️‍🌈 🏳️‍⚧️ (she/her) says

Glycin, GNOME’s new image loading library that is already used by our Image Viewer (Loupe), can now also power the legacy image-loading library GdkPixbuf. This will significantly improve the safety of image handling and provide more features in the future. The article Making GNOME’s GdkPixbuf Image Loading Safer contains more details.

Third Party Projects

nozwock announces

Packet has received several updates since the last time. Recent improvements include:

  • Desktop notifications for incoming transfers
  • The ability to run in the background and auto-start at login
  • Nautilus integration with a “Send with Packet” context menu option

As always, you can get the latest version from Flathub!

justinrdonnelly reports

Hot on the heels of the debut release of Bouncer, I’ve released a new version. Critically, this version includes a fix for non-English language users where Bouncer wouldn’t start. And if your non-English language happens to be Dutch, you get an extra bonus because it now includes Dutch translations thanks to Vistaus! Bouncer is available on Flathub!

Alexander Vanhee reports

Gradia has received a major facelift this week, both in terms of features and design:

  • A new background image mode has been added, offering six presets to choose from, or you can bring your own image!
  • A new solid colour background mode is now available, most notably including a fully transparent option. This allows you to ignore the background feature entirely and use Gradia purely for annotations.
  • Introduced an auto-increasing number stamp tool, useful for creating quick guides around an image.
  • The app now also finally persists the selected annotation tool and its options across sessions.

You can grab the app on Flathub.

Semen Fomchenkov says

Hello everyone! This week, at ALT Gnome and the ALT Linux Team, we’re happy to announce that Tuner is now available on Flathub!

This process took us longer than expected, as the Flathub team had concerns about the minimal functionality of the base Tuner app. As a result, the Flathub build of Tuner also includes the TunerTweaks module, which provides basic GNOME customization features across different distributions.

New Features in Development

We are actively working on expanding the functionality of plugins and adapting Tuner to various environments. Here are some of the features we are currently finalizing or developing and plan to include in future releases:

  • The ability to manage installed plugins directly from within Tuner, such as hiding unused ones without uninstalling them, and viewing information about plugin authors.
  • Improved API for modules to simplify the creation of basic modules and allow for more extensible functionality (already used in the Flathub build and in the TunerTweaks module).
  • Support for complex page structures, enabling more advanced modules with custom menus and submenus in the interface (thanks to the GNOME Builder team for the inspiration).

All current changes are available on the project page in ALT Linux Space

Documentation and Community

We recently launched a dedicated Matrix room for Tuner, which you can join here: Tuner Matrix Room

Once we complete major API changes in Tuner, we plan to update the module development documentation and present it as a community-driven Wiki project. We’ll be sure to notify you once it’s ready!

Pipeline

Follow your favorite video creators.

schmiddi says

Pipeline version 2.4.0 was released, making it easier to curate your video feed. Adding filters to remove videos from your feed was simplified: videos now have a context menu for filtering out similar videos. Based on the uploader and title of the video, you will be prompted for which part of the title you want to filter on. You can now also hide videos from your feed which you have already watched. Your video history is of course stored locally, and you can turn off keeping the history if you want.

Shell Extensions

Just Perfection says

We’ve updated the EGO review guidelines for clipboard access. If your extension uses the clipboard, you need to update the metadata description and follow the new guidelines.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Lennart Poettering

@mezcalero

ASG! 2025 CfP Closes Tomorrow!

The All Systems Go! 2025 Call for Participation Closes Tomorrow!

The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on the 13th of June! We’d like to invite you to submit your proposals for consideration via the CFP submission site quickly!

Andy Wingo

@wingo

whippet in guile hacklog: evacuation

Good evening, hackfolk. A quick note this evening to record a waypoint in my efforts to improve Guile’s memory manager.

So, I got Guile running on top of the Whippet API. This API can be implemented by a number of concrete garbage collector implementations. The implementation backed by the Boehm collector is fine, as expected. The implementation that uses the bump-pointer-allocation-into-holes strategy is less good. The minor reason is heap sizing heuristics; I still get it wrong about when to grow the heap and when not to do so. But the major reason is that non-moving Immix collectors appear to have pathological fragmentation characteristics.

Fragmentation, for our purposes, is memory under the control of the GC which was free after the previous collection, but which the current cycle failed to use for allocation. I have the feeling that for the non-moving Immix-family collector implementations, fragmentation is much higher than for size-segregated freelist-based mark-sweep collectors. For an allocation of, say, 1024 bytes, the collector might have to scan over many smaller holes until it finds a hole that is big enough. This wastes free memory. Fragmentation memory is not gone—it is still available for allocation!—but it won’t be allocatable until after the current cycle when we visit all holes again. In Immix, fragmentation wastes allocatable memory during a cycle, hastening collection and causing more frequent whole-heap traversals.
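As a toy illustration of that effect (nothing to do with Whippet’s actual data structures), picture an allocator sweeping forward over a list of hole sizes: every hole it skips stays free, but is dead weight until the next collection rebuilds the hole list:

def allocate(holes: list[int], request: int) -> tuple[bool, int]:
    """Sweep forward over holes; return (satisfied?, bytes skipped)."""
    skipped = 0
    while holes:
        if holes[0] >= request:
            holes[0] -= request     # bump-pointer allocate into this hole
            return True, skipped
        skipped += holes.pop(0)     # too small: unusable for the rest of the cycle
    return False, skipped


ok, wasted = allocate([256, 512, 128, 2048], 1024)
print(ok, wasted)  # True 896: 896 free bytes were skipped to serve one request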

The value proposition of Immix is that if there is too much fragmentation, you can just go into evacuating mode, and probably improve things. I still buy it. However I don’t think that non-moving Immix is a winner. I still need to do more science to know for sure. I need to fix Guile to support the stack-conservative, heap-precise version of the Immix-family collector which will allow for evacuation.

So that’s where I’m at: a load of gnarly Guile refactors to allow for precise tracing of the heap. I probably have another couple weeks left until I can run some tests. Fingers crossed; we’ll see!

Alireza Shabani

@Revisto

Why GNOME’s Translation Platform Is Called “Damned Lies”

Damned Lies is the name of GNOME’s web application for managing localization (l10n) across its projects. But why is it named like this?

Damned Lies about GNOME

Screenshot of Gnome Damned Lies from Google search with the title: Damned Lies about GNOME

On the About page of GNOME’s localization site, the only explanation given for the name Damned Lies is a link to a Wikipedia article called “Lies, damned lies, and statistics”.

“Damned Lies” comes from the saying “Lies, damned lies, and statistics”, a 19th-century phrase used to describe the persuasive power of statistics to bolster weak arguments, as described on Wikipedia. One of its earliest known uses appeared in an 1891 letter to the National Observer, which categorised lies into three types:

“Sir, —It has been wittily remarked that there are three kinds of falsehood: the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate of national pensions relies …”

To find out more, I asked in GNOME’s i18n Matrix room, and Alexandre Franke helped a lot. He said:

Stats are indeed lies, in many ways.
Like if GNOME 48 gets 100% translated in your language on Damned Lies, it doesn’t mean the version of GNOME 48 you have installed on your system is 100% translated, because the former is a real time stat for the branch and the latter is a snapshot (tarball) at a specific time.
So 48.1 gets released while the translation is at 99%, and then the translators complete the work, but you won’t get the missing translations until 48.2 gets released.
Works the other way around: the translation is at 100% at the time of the release, but then there’s a freeze exception and the stats go 99% while the released version is at 100%.
Or you are looking at an old version of GNOME for which there won’t be any new release, which wasn’t fully translated by the time of the latest release, but then a translator decided that they wanted to see 100% because the incomplete translation was not looking as nice as they’d like, and you end up with Damned Lies telling you that version of GNOME was fully translated when it never was and never will be.
All that to say that translators need to learn to work smart, at the right time, on the right modules, and not focus on the stats.

So there you have it: Damned Lies is a name that reminds us that numbers and statistics can be misleading, even on GNOME’s l10n web application.

Varun R Mallya

@varunrmallya

The Design of Sysprof-eBPF

Sysprof

This is a tool that is used to profile applications on Linux. It tracks function calls and other events in the system to provide a detailed view of what is happening in the system. It is a powerful tool that can help developers optimize their applications and understand performance issues. Visit Sysprof for more information.

sysprof-ebpf

This is a project I am working on as part of GSoC 2025 mentored by Christian Hergert. The goal is to create a new backend for Sysprof that uses eBPF to collect profiling data. This will mostly serve as groundwork for the coming eBPF capabilities that will be added to Sysprof. This will hopefully also serve as the design documentation for anyone reading the code for Sysprof-eBPF in the future.

Testing

If you want to test out the current state of the code, you can do so by following these steps:

  1. Clone the repo and fetch my branch.
  2. Run the following script in the root of the project:
    #!/bin/bash
    set -euo pipefail

    GREEN="\033[0;32m"
    BLUE="\033[0;34m"
    RESET="\033[0m"

    # Prefix every line of output with a coloured tag so the two
    # processes' logs can be told apart.
    prefix() {
        local tag="$1"
        while IFS= read -r line; do
            printf "%b[%s]%b %s\n" "$BLUE" "$tag" "$RESET" "$line"
        done
    }

    # Clean up both daemons on Ctrl+C.
    trap 'sudo pkill -f sysprofd; sudo pkill -f sysprof; exit 0' SIGINT SIGTERM

    meson setup build --reconfigure || true
    ninja -C build || exit 1
    sudo ninja -C build install || exit 1
    sudo systemctl restart polkit || exit 1

    # Run sysprofd and sysprof as root
    echo -e "${GREEN}Launching sysprofd and sysprof in parallel as root...${RESET}"

    sudo stdbuf -oL ./build/src/sysprofd/sysprofd 2>&1 | prefix "sysprofd" &
    sudo stdbuf -oL sysprof 2>&1 | prefix "sysprof" &

    wait
    

Capabilities of Sysprof-eBPF

sysprof-ebpf will be a subprocess that will be created by sysprofd when the user selects the eBPF backend on the UI. I will be adding an options menu on the UI to choose which tracers to activate after I am done with the initial implementation. You can find my current dirty code here. As of writing this blog, this MR has the following capabilities:

  • A tiny toggle on the UI: Contains a tiny toggle on the UI to turn the activation of the eBPF backend on and off. This is a simple toggle that will start or stop the sysprof-ebpf subprocess.
  • Full eBPF compilation pipeline: This is the core of the sysprof-ebpf project. It compiles eBPF programs from C code to BPF bytecode, loads them into the kernel, and attaches them to the appropriate tracepoints. This is done using the libbpf library, which provides a high-level API for working with eBPF programs. All of this is done at compile time, which means that the user does not need to have a compiler installed to run the eBPF backend. This will soon be made modular, to be able to add more eBPF programs in the future.


  • cpu-stats tracer: Tracks CPU usage of the whole system by reading the exit state of a struct after the kernel function that services /proc/stat requests has executed. I am working on finding methods to make this process deterministic instead of random, by triggering it manually using bpf-timers. In the current state, this just prints the info to the console, but I will soon be adding capabilities to store it directly into the syscap file.
  • sysprofd: My little program can now talk to sysprofd and get the file descriptor to write the data to. I also accept an event-fd in this program that allows the UI to stop this subprocess from running. I currently face a limitation here in that I have no option of choosing which tracers to activate. I am working on getting tracer selection working by adding an options field to SysprofProxiedInstrument.

Follow up stuff

  • Adding a way to write to the syscap file: This will include adding a way to write the data collected by the tracers to the syscap file. I have already figured out how to do it, but it’ll require a bit of refactoring which I will be doing soon.
  • Adding more tracers: I will be adding more tracers to the sysprof-ebpf project. This will include tracers for memory usage, disk usage, and network usage. I will also be adding support for custom eBPF programs that can be written by the user if possible.
  • Adding UI: This will include adding options to choose which tracers to activate, and displaying the data collected by the tracers in a more readable format.

Structure of sysprof-ebpf

I initially planned on making this a single-threaded process, but it dawned on me that not all ring buffers will update at the same time, and this would certainly block I/O during polling, so I figured I’ll just put each tracer in its own DexFuture to do the capture in an async way. This has not been implemented as of writing this blog, though.
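Since libdex is C, here is only a loose asyncio analogy of that per-tracer layout (the tracer names and payload are invented), showing why independent awaits keep one quiet ring buffer from stalling the others:

import asyncio


async def poll_tracer(name: str, events: asyncio.Queue) -> None:
    # Each tracer awaits its own buffer, so one quiet buffer never blocks the rest.
    while (event := await events.get()) is not None:
        print(f"[{name}] {event}")


async def main() -> None:
    queues = {name: asyncio.Queue() for name in ("cpu-stats", "memory")}
    pollers = [asyncio.create_task(poll_tracer(n, q)) for n, q in queues.items()]
    await queues["cpu-stats"].put({"iowait": 0.02})  # pretend a sample arrived
    for queue in queues.values():
        await queue.put(None)                        # tell each poller to stop
    await asyncio.gather(*pollers)


asyncio.run(main())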

[Block diagram: general structure of the eBPF programs]

The eBPF programs will follow this block diagram in general. I haven’t made the config hashmap part of it yet, and I think I’ll only add it if it’s required in the future. None of the currently planned features require this config map, but it will certainly be useful if I ever need to make the program cross-platform or cross-kernel. This will be one of the last things I implement in the project.

Conclusion

I hope to make this a valuable addition to Sysprof. I will be writing more blogs as I make progress on the project. If you have any questions or suggestions, feel free to reach out to me on GitLab or Twitter. Also, I’d absolutely LOVE suggestions on how to improve the design of this project. I am still learning and I am open to any suggestions that can make this project better.

Adrian Vovk

@adrianvovk

Introducing stronger dependencies on systemd

Doesn’t GNOME already depend on systemd?

Kinda… GNOME doesn’t have a formal and well defined policy in place about systemd. The rule of thumb is that GNOME doesn’t strictly depend on systemd for critical desktop functionality, but individual features may break without it.

GNOME does strongly depend on logind, systemd’s session and seat management service. GNOME first introduced support for logind in 2011, then in 2015 ConsoleKit support was removed and logind became a requirement. However, logind can exist in isolation from systemd: the modern elogind service does just that, and even back in 2015 there were alternatives available. Some distributors chose to patch ConsoleKit support back into GNOME. This way, GNOME can run in environments without systemd, including the BSDs.

While GNOME can run with other init systems, most upstream GNOME developers are not testing GNOME in these situations. Our automated testing infrastructure (i.e. GNOME OS) doesn’t test any non-systemd codepaths. And many modules that have non-systemd codepaths do so with the expectation that someone else will maintain them and fix them when they break.

What’s changing?

GNOME is about to gain a few strong dependencies on systemd, and this will make running GNOME harder in environments that don’t have systemd available.

Let’s start with the easier of the changes. GDM is gaining a dependency on systemd’s userdb infrastructure. GNOME and systemd do not support running more than one graphical session under the same user account, but GDM supports multi-seat configurations and Remote Login with RDP. This means that GDM may try to display multiple login screens at once, and thus multiple graphical sessions at once. At the moment, GDM relies on legacy behaviors and straight-up hacks to get this working, but this solution is incompatible with the modern dbus-broker and so we’re looking to clean this up. To that end, GDM now leverages systemd-userdb to dynamically allocate user accounts, and then runs each login screen as a unique user.

In the future, we plan to further depend on userdb by dropping the AccountsService daemon, which was designed to be a stop-gap measure for the lack of a rich user database. 15 years later, this “temporary” solution is still in use. Now that systemd’s userdb enables rich user records, we can start work on replacing AccountsService.

Next, the bigger change. Since GNOME 3.34, gnome-session uses the systemd user instance to start and manage the various GNOME session services. When systemd is unavailable, gnome-session falls back to a builtin service manager. This builtin service manager uses .desktop files to start up the various GNOME session services, and then monitors them for failure. This code was initially implemented for GNOME 2.24, and is starting to show its age. It has received very minimal attention in the 17 years since it was first written. Really, there’s no reason to keep maintaining a bespoke and somewhat primitive service manager when we have systemd at our disposal. The only reason this code hasn’t completely bit rotted is the fact that GDM’s aforementioned hacks break systemd and so we rely on the builtin service manager to launch the login screen.

Well, that has now changed. The hacks in GDM are gone, and the login screen’s session is managed by systemd. This means that the builtin service manager will now be completely unused and untested. Moreover: we’d like to implement a session save/restore feature, but the builtin service manager interferes with that. For this reason, the code is being removed.

So what should distros without systemd do?

First, consider using GNOME with systemd. You’d be running in a configuration supported, endorsed, and understood by upstream. Failing that, though, you’ll need to implement replacements for more systemd components, similarly to what you have done with elogind and eudev.

To help you out, I’ve put a temporary alternate code path into GDM that makes it possible to run GDM without an implementation of userdb. When compiled against elogind, instead of trying to allocate dynamic users GDM will look-up and use the gdm-greeter user for the first login screen it spawns, gdm-greeter-2 for the second, and gdm-greeter-N for the Nth. GDM will have similar behavior with the gnome-initial-setup[-N] users. You can statically allocate as many of these users as necessary, and GDM will work with them for now. It’s quite likely that this will be necessary for GNOME 49.

Next: you’ll need to deal with the removal of gnome-session’s builtin service manager. If you don’t have a service manager running in the user session, you’ll need to get one. Just like system services, GNOME session services now install systemd unit files, and you’ll have to replace these unit files with your own service manager’s definitions. Then you’ll need to replace the “session leader” process: this is the main gnome-session binary that’s launched by GDM to kick off session startup. The upstream session leader just talks to systemd over D-Bus to upload its environment variables and then start a unit, so you’ll need to replace that with something that communicates with your service manager instead. Finally, you’ll probably need to replace “gnome-session-ctl”, which is a tiny helper binary that’s used to coordinate between the session leader, the main D-Bus service, and systemd. It is also quite likely that this will be needed for GNOME 49.

Finally: You should implement the necessary infrastructure for the userdb Varlink API to function. Once AccountsService is dropped and GNOME starts to depend more on userdb, the alternate code path will be removed from GDM. This will happen in some future GNOME release (50 or later). By then, you’ll need at the very least:

  • An implementation of systemd-userdbd’s io.systemd.Multiplexer
  • If you have NSS, a bridge that exposes NSS-defined users through the userdb API (a toy sketch of the record mapping follows this list).
  • A bridge that exposes userdb-defined users through your libc’s native user lookup APIs (such as getpwent).
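As a toy illustration of the NSS-bridging direction (not a drop-in implementation: a real bridge must speak Varlink on the userdb socket and emit full JSON User Records; this only sketches the record mapping using a handful of real field names):

import json
import pwd


def nss_user_to_record(name: str) -> str:
    """Map an NSS user to a heavily simplified JSON user record."""
    entry = pwd.getpwnam(name)
    record = {
        "userName": entry.pw_name,
        "uid": entry.pw_uid,
        "gid": entry.pw_gid,
        "realName": entry.pw_gecos,
        "homeDirectory": entry.pw_dir,
        "shell": entry.pw_shell,
    }
    return json.dumps(record, indent=2)


print(nss_user_to_record("root"))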

Apologies for the short timeline, but this blog post could only be published after I knew how exactly I’m splitting up gnome-session into separate launcher and main D-Bus service processes. Keep in mind that GNOME 48 will continue to receive security and bug fixes until GNOME 50 is released. Thus, if you cannot address these changes in time, you have the option of holding back the GNOME version. If you can’t do that, you might be able to get GNOME 49 running with gnome-session 48, though this is a configuration that won’t be tested or supported upstream so your mileage will vary (much like running GNOME on other init systems). Still, patching that scenario to work may buy you more time to upgrade to gnome-session 49.

And that should be all for now!

GNOME Foundation News

@foundationblog

GNOME Has a New Infrastructure Partner: Welcome AWS!

This post was contributed by Andrea Veri from the GNOME Foundation.

GNOME has historically hosted its infrastructure on premises. That changed with an AWS Open Source Credits program sponsorship which has allowed our team of two SREs to migrate the majority of the workloads to the cloud and turn the existing OpenShift environment into a fully scalable and fault tolerant one thanks to the infrastructure provided by AWS. By moving to the cloud, we have dramatically reduced the maintenance burden, achieved lower latency for our users and contributors and increased security through better access controls.

Our original infrastructure did not account for the exponential growth that GNOME has seen in its contributors and userbase over the past 4-5 years thanks to the introduction of GNOME Circle. GNOME Circle is composed of applications that are not part of core GNOME but are meant to extend the ecosystem without being bound to the stricter core policies and release schedules. Contributions on these projects also make contributors eligible for GNOME Foundation membership and potentially allow them to receive direct commit access to GitLab in case the contributions are consistent over a long period of time in order to gain more trust from the community. GNOME recently migrated to GitLab, away from cgit and Bugzilla.

In this post, we’d like to share some of the improvements we’ve made as a result of our migration to the cloud.

A history of network and storage challenges

In 2020, we documented our main architectural challenges:

  1. Our infrastructure was built on OpenShift in a hyperconverged setup, using OpenShift Data Foundations (ODF), running Ceph and Rook behind the scenes. Our control plane and workloads were also running on top of the same nodes.
  2. Because GNOME historically did not have an L3 network and generally had no plans to upgrade the underlying network equipment and/or invest time in refactoring it, we would have to run our gateway using a plain Linux VM with all the associated consequences.
  3. We also wanted to make use of an external Ceph cluster with slower storage, but this was not supported in ODF and required extra glue to make it work.
  4. No changes were planned on the networking equipment side to make links redundant. That meant a code upgrade on switches would have required full service downtime.
  5. We had to work with Dell support for every broken hardware component, which added further toil.
  6. With the GNOME user and contributor base always increasing, we never really had a good way to scale our compute resources due to budget constraints.

Cloud migration improvements

In 2024, during a hardware refresh cycle, we started evaluating the idea of migrating to the public cloud. We have been participating in the AWS Open Source Credits program for many years and received sponsorship for a set of Amazon Simple Storage Service (S3) buckets that we use widely across GNOME services. Based on our previous experience with the program and the people running it, we decided to request sponsorship from AWS for the entire infrastructure, which was kindly accepted.

I believe it’s crucial to understand how AWS resolved the architectural challenges we had as a small SRE team (just two engineers!). Most importantly, the move dramatically reduced the maintenance toil we had:

  1. Using AWS’s provided software-defined networking services, we no longer have to rely on an external team to apply changes to the underlying networking layout. This also gave us a way to use a redundant gateway and NAT without having to expose worker nodes to the internet.
  2. We now use AWS Elastic Load Balancing (ELB) instances (classic load balancers are the only type supported by OpenShift for now) as a traffic ingress for our OpenShift cluster. This reduces latency, as we now operate within the same VPC instead of relying on an external load balancing provider. It also gives us access to the security group APIs, which we can use to dynamically add IP addresses. This is critical when individuals or organizations abuse specific GNOME services with thousands of queries per minute.
  3. We also use Amazon Elastic Block Store (EBS) and Amazon Elastic File System (EFS) via the OpenShift CSI driver. This allows us to avoid having to manage a Ceph cluster, which is a major win in terms of maintenance and operability.
  4. With AWS Graviton instances, we now have access to ARM64 machines, which we heavily leverage as they’re generally cheaper than their Intel counterparts.
  5. Given how extensively we use Amazon S3 across the infrastructure, we were able to reduce latency and costs due to the use of internal VPC S3 endpoints.
  6. We took advantage of AWS Identity and Access Management (IAM) to provide granular access to AWS services, giving us the possibility to allow individual contributors to manage a limited set of resources without requiring higher privileges.
  7. We now have complete hardware management abstraction, which is vital for a team of only two engineers who are trying to avoid any additional maintenance burden.

Thank you, AWS!

I’d like to thank AWS for their sponsorship and the massive opportunity they are giving to the GNOME Infrastructure to provide resilient, stable and highly available workloads to GNOME’s users and contributors across the globe.

Log Detective: Google Summer of Code 2025

I'm glad to say that I'll be participating in GSoC again as a mentor. This year we will try to improve the RPM packaging workflow using AI, as part of the openSUSE project.

So this summer I'll be mentoring an intern who will research how to integrate Log Detective with openSUSE tooling, improving the packager workflow for maintaining RPM packages.

Log Detective

Log Detective is an initiative created by the Fedora project, with the goal of

"Train an AI model to understand RPM build logs and explain the failure in simple words, with recommendations how to fix it. You won't need to open the logs at all."

As a project promoted by Fedora, it's highly integrated with the build tools around that distribution and RPM packages. But RPM packages are used in a lot of different distributions, so this "expert" LLM will be helpful for everyone doing RPM packaging, and everyone doing RPM packaging should contribute to it.

This is open source, so if we at openSUSE want to have something similar to improve OBS, we don't need to reimplement it; we can collaborate. And that's the idea of this GSoC project.

We want to use Log Detective, but also contribute failures from openSUSE to improve the training and the model. This should benefit openSUSE, but it will also benefit Fedora and all other RPM-based distributions.

The intern

The selected intern is Aazam Thakur. He studies at the University of Mumbai, India. He has experience with SUSE, having worked on RPM packaging for SLES 15.6 during a previous summer mentorship at the OpenMainFrame Project.

I'm sure that he will be able to achieve great things during these three months. The project looks very promising, and it's one of the areas where AI and LLMs will shine: digging into logs is always difficult, and an LLM trained on a lot of data can be really useful for categorizing failures and giving a short description of what's happening.

Tanmay Patil

@txnmxy

Acrostic Generator for GNOME Crossword Editor

The experimental Acrostic Generator has finally landed inside the Crossword editor and is currently tagged as BETA.
I’d classify this as one of the trickiest and most interesting projects I’ve worked on.
Here’s what an acrostic puzzle loaded in the Crossword editor looks like:

In my previous blog post (published about a year ago), I explained one part of the generator. Since then, there have been many improvements.
I won’t go into detail about what an acrostic puzzle is, as I’ve covered that in multiple previous posts already.
If you’re unfamiliar, please check out my earlier post for a brief idea.

Coming to the Acrostic Generator, I’ll begin with an illustration of the input and the corresponding output it generates. After that, I’ll walk through the implementation and the challenges I faced.

Let’s take the quote: “CATS ALWAYS TAKE NAPS” whose author is a “CAT”.

Here’s what the Acrostic Generator essentially does

It generates answers like “CATSPAW”, “ALASKAN” and “TYES” which, as you can probably guess from the color coding, are made up of letters from the original quote.

Core Components

Before explaining how the Acrostic generator works, I want to briefly explain some of the key components involved.
1. Word list
The word list is an important part of Crosswords. It provides APIs to efficiently search for words. Refer to the documentation to understand how it works.
2. IpuzCharset
The performance of the Acrostic Generator heavily depends on IpuzCharset, which is essentially a HashMap that stores characters and their frequencies.
We perform numerous ipuz_charset_add_text and ipuz_charset_remove_text operations on the QUOTE charset. I'd especially like to highlight ipuz_charset_remove_text, which used to be computationally very slow. Last year, charset was rewritten in Rust by Federico. Compared to the earlier C implementation using a GTree, the Rust version turned out to be considerably faster.
Here’s Federico’s blog post on rustifying libipuz’s charset.
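To make the add/remove semantics concrete, here’s a minimal sketch of how the generator uses the charset. The constructor name below is hypothetical; the remove/add calls are the ones that appear in the engine code later in this post.

/* Hypothetical setup: build a charset from the quote's letters. */
IpuzCharsetBuilder *quote = ipuz_charset_builder_new_from_text ("CATSALWAYSTAKENAPS");

/* Succeeds only if every letter of the candidate is still available,
 * and consumes those letters from the charset. */
if (ipuz_charset_builder_remove_text (quote, "CATSPAW"))
  {
    /* ... recurse into the next clue here ... */

    /* Backtrack: put the letters back so other candidates can be tried. */
    ipuz_charset_builder_add_text (quote, "CATSPAW");
  }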

Why is ipuz_charset_remove_text latency so important? Let's consider the following example:

QUOTE: "CARNEGIE VISITED PRINCETON AND TOLD WILSON WHAT HIS YOUNG MEN NEEDED WAS NOT A LAW SCHOOL BUT A LAKE TO ROW ON IN ADDITION TO BEING A SPORT THAT BUILT CHARACTER AND WOULD LET THE UNDERGRADUATES RELAX ROWING WOULD KEEP THEM FROM PLAYING FOOTBALL A ROUGHNECK SPORT CARNEGIE DETESTED"
SOURCE: "DAVID HALBERSTAM THE AMATEURS"

In this case, the maximum number of ipuz_charset_remove_text operations required in the worst case would be:

73205424239083486088110552395002236620343529838736721637033364389888000000

…which is a lot.

Terminology

I’d also like you to take note of a few things.
1. Answers and Clues refer to the same thing: they are the solutions generated by the Acrostic Generator. I’ll be using the terms interchangeably throughout.
2. We’ve set two constants in the engine: MIN_WORD_SIZE = 3 and MAX_WORD_SIZE = 20. These make sure the answers are not too short or too long and help stop the engine from running indefinitely.
3. Leading characters here are the characters of the source. Each one is the first letter of the corresponding answer.

Setting up things

Before running the engine, we need to set up some data structures to store the results.

typedef struct {
  /* Representing an answer */
  gunichar leading_char;
  const gchar *letters;
  guint word_length;

  /* Searching the answer */
  gchar *filter;
  WordList *word_list;
  GArray *rand_offset;
} ClueEntry;

We use a ClueEntry structure to store the answer for each clue. It holds the leading character (from the source), the letters of the answer, the word length, and some additional word list information.
Oh wait, why do we need the word length if we are already storing the letters of the answer?
Let’s backtrack. Initially, I wrote the following brute-force recursive algorithm:

void
acrostic_generator_helper (AcrosticGenerator *self,
                           gchar              nth_source_char)
{
  // Iterate from min_word_size to max_word_size for every answer
  for (word_length = min_word_size; word_length <= max_word_size; word_length++)
    {
      // Get the list of words starting with `nth_source_char`
      // and with length equal to word_length
      word_list = get_word_list (starting_letter = nth_source_char, word_length);

      // Iterate through the word list
      for (guint i = 0; i < word_list_get_n_items (word_list); i++)
        {
          word = word_list[i];

          // Check if the word's letters are present in the quote charset
          if (ipuz_charset_remove_text (quote_charset, word))
            {
              // If present, move on to the next source char
              acrostic_generator_helper (self, nth_source_char + 1);
            }
        }
    }
}

The problem with this approach is that it is too slow. We were iterating from MIN_WORD_SIZE to MAX_WORD_SIZE and trying to find a solution for every possible size. Yes, this would work and we would eventually find a solution, but it would take a lot of time. Also, many of the answers for the initial source characters would end up having a length equal to MIN_WORD_SIZE.
To quantify this: compared to the latest approach (which I’ll discuss shortly), we would be performing roughly 20 times as many ipuz_charset_remove_text operations as the current worst-case number (7.3 × 10⁷³).

To fix this, we added randomness by calculating and assigning random lengths to clue answers before running the engine.
To generate these random lengths, we break a number equal to the length of the quote string into n parts (where n is the number of source characters), each part having a random value.

static gboolean
generate_random_lengths (GArray *clues,
                         guint   number,
                         guint   min_word_size,
                         guint   max_word_size)
{
  if ((clues->len * max_word_size) < number)
    return FALSE;

  guint sum = 0;

  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;
      guint len;
      guint max_len = MAX (min_word_size,
                           MIN (max_word_size, number - sum));

      len = rand () % (max_len - min_word_size + 1) + min_word_size;
      sum += len;

      clue_entry = &(g_array_index (clues, ClueEntry, i));
      clue_entry->word_length = len;
    }

  return sum == number;
}

I have been continuously researching ways to generate random lengths that help the generator find answers as quickly as possible.
What I concluded is that the Acrostic Generator performs best when the word lengths follow a right-skewed distribution.

static void
fill_clue_entries (GArray           *clues,
                   ClueScore        *candidates,
                   WordListResource *resource)
{
  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;

      clue_entry = &(g_array_index (clues, ClueEntry, i));

      // Generate a filter to get words starting with the nth char of the source string
      // For eg. char = D, answer_len = 5
      // filter = "D????"
      clue_entry->filter = generate_individual_filter (clue_entry->leading_char,
                                                       clue_entry->word_length);

      // Load all words whose starting letter equals the nth char of the source string
      clue_entry->word_list = word_list_new ();
      word_list_set_resource (clue_entry->word_list, resource);
      word_list_set_filter (clue_entry->word_list, clue_entry->filter, WORD_LIST_MATCH);

      candidates[i].index = i;
      candidates[i].score = clue_entry->word_length;

      // Randomise the word list, which is sorted by default
      clue_entry->rand_offset = generate_random_lookup (word_list_get_n_items (clue_entry->word_list));
    }
}

Now that we have random lengths, we fill up the ClueEntry data structure.
Here, we generate individual filters for each clue, which are used to set the filter on each word list. For instance, the filters for the illustration above are C??????, A??????, and T???.
We also maintain a separate word list for each clue entry. Note that we do not store the huge word list individually for every clue. Instead, each word list object refers to the same memory-mapped word list resource.
Additionally, each clue entry contains a random offsets array, which stores a randomized order of indices. We use this to traverse the filtered word list in a random order. This randomness helps fix the problem where many answers for the initial source characters would otherwise end up with length equal to MIN_WORD_SIZE.
The advantage of pre-calculating all of this before running the engine is that the main engine loop only performs the heavy operations: ipuz_charset_remove_text and ipuz_charset_add_text.
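As a rough illustration of that filter step, here’s a sketch of what generate_individual_filter could look like; this is written from the description above, not necessarily the exact upstream code.

static gchar *
generate_individual_filter (gunichar leading_char,
                            guint    word_length)
{
  GString *filter = g_string_new (NULL);

  /* The first slot is fixed to the leading character from the source string. */
  g_string_append_unichar (filter, leading_char);

  /* The remaining slots are wildcards, e.g. "C??????" for a 7-letter answer. */
  for (guint i = 1; i < word_length; i++)
    g_string_append_c (filter, '?');

  return g_string_free (filter, FALSE);
}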

static gboolean
acrostic_generator_helper (AcrosticGenerator  *self,
                           GArray             *clues,
                           guint               index,
                           IpuzCharsetBuilder *remaining_letters,
                           ClueScore          *candidates)
{
  ClueEntry *clue_entry;

  if (index == clues->len)
    return TRUE;

  clue_entry = &(g_array_index (clues, ClueEntry, candidates[index].index));

  for (guint i = 0; i < word_list_get_n_items (clue_entry->word_list); i++)
    {
      const gchar *word;

      g_atomic_int_inc (self->count);

      // traverse based on random indices
      word = word_list_get_word (clue_entry->word_list,
                                 g_array_index (clue_entry->rand_offset, gushort, i));

      clue_entry->letters = word;

      if (ipuz_charset_builder_remove_text (remaining_letters, word + 1))
        {
          if (!add_or_skip_word (self, word) &&
              acrostic_generator_helper (self, clues, index + 1, remaining_letters, candidates))
            return TRUE;

          clean_up_word (self, word);
          ipuz_charset_builder_add_text (remaining_letters, word + 1);
          clue_entry->letters = NULL;
        }
    }

  clue_entry->letters = NULL;

  return FALSE;
}

The approach is quite simple. As you can see in the code above, we perform ipuz_charset_remove_text many times, so it was crucial to make the ipuz_charset_remove_text operation efficient.
When all the characters in the charset have been used up and the index becomes equal to the number of clues, it means we have found a solution. At this point, we return, store the answers in an array, and continue our search for new answers until we receive a stop signal.
We also maintain a skip list that is updated whenever we find a clue answer and is cleaned up during backtracking. This makes sure there are no duplicate answers in the answers list.
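The skip list itself can be as simple as a hash set keyed by the word. Here’s a minimal sketch of the idea, written from the description above (the skip_list field and these helpers are illustrative, so the real implementation may differ):

/* skip_list would be created once, e.g. with
 * g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL). */

static gboolean
add_or_skip_word (AcrosticGenerator *self,
                  const gchar       *word)
{
  /* Returns TRUE (skip) if the word is already used in the current solution. */
  if (g_hash_table_contains (self->skip_list, word))
    return TRUE;

  g_hash_table_add (self->skip_list, g_strdup (word));
  return FALSE;
}

static void
clean_up_word (AcrosticGenerator *self,
               const gchar       *word)
{
  /* Called while backtracking so the word can be reused in another branch. */
  g_hash_table_remove (self->skip_list, word);
}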

Performance Improvements

I compared the performance of the acrostic generator using the current Rust charset implementation against the previous C GTree implementation. I have used the following quote and source strings with the same RNG seed for both implementations:

QUOTE: "To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment."
SOURCE: "TBYIWTCTMYSEGA"
Results:
+-----------------+--------------------+
| Implementation | Time taken(secs) |
+-----------------+--------------------+
| C GTree | 74.39 |
| Rust HashMap | 17.85 |
+-----------------+--------------------+

The Rust HashMap implementation is nearly 4 times faster than the original C GTree version for the same random seed and traversal order.

I have also been testing the generator to find small performance improvements. Here are some of them:

  1. When searching for answers, looking for answers for clues with longer word lengths first helps find solutions faster.
  2. We switched to using nohash_hasher for the hashmap because we are essentially storing {char: frequency} pairs. Trace reports showed that significant time and resources were spent computing hashes using Rust’s default SipHash implementation, which was unnecessary. MR
  3. Inside ipuz_charset_remove_text, instead of cloning the original data, we use a rollback mechanism that tracks all modifications and rolls back in case of failure. MR

I also remember running the generator on some quote and source input back in the early days. It ran continuously for four hours and still couldn’t find a single solution. We even overflowed the gint counter that tracks the number of words tried. Now, the same generator can return 10 solutions in under 10 seconds. We’ve come a long way! 😀

Crossword Editor

Now that I’ve covered the engine, I’ll talk about the UI part.
We started off by sketching potential designs on paper. @jrb came up with a good design and we decided to move forward with it, making a few tweaks to it.

First, we needed to display a list of the generated answers.

For this, I implemented my own list model where each item stores a string for the answer and a boolean indicating whether the user wants to apply that answer.
To allow the user to run and stop the generator and then apply answers, we reused the compact version of the original autofill component used in normal crosswords. The answer list gets updated whenever the slider is moved.

We have tried to reuse as much code as possible for acrostics, keeping most of the code common between acrostics and normal crosswords.
Here’s a quick demo of the acrostic editor in action:

We also maintain a cute little histogram on the right side of the bottom panel to summarize clue lengths.

You can also try out the Acrostic Generator using our CLI app, which I originally wrote to quickly test the engine. To use the binary, you’ll need to build Crosswords Editor locally. Example usage:

$ ./_build/src/acrostic-generator -q "For most of history, Anonymous was a woman. I would venture to guess that Anon, who wrote so many poems without signing them, was often a woman. And it is for this reason that I would implore women to write all the more" -s "Virginia wolf"
Starting acrostic generator. Press Ctrl+C to cancel.
[ VASOTOMY ] [ IMFROMMISSOURI ] [ ROMANIANMONETARYUNIT ] [ GREATFEATSOFSTRENGTH ] [ ITHOUGHTWEHADADEAL ] [ NEWSSHOW ] [ INSTITUTION ] [ AWAYWITHWORDS ] [ WOOLSORTERSPNEUMONIA ] [ ONEWOMANSHOWS ] [ LOWMANONTHETOTEMPOLE ] [ FLOWOUT ]
[ VALOROUSNESS ] [ IMMUNOSUPPRESSOR ] [ RIGHTEOUSINDIGNATION ] [ GATEWAYTOTHEWEST ] [ IWANTYOUTOWANTME ] [ NEWTONSLAWOFMOTION ] [ IMTOOOLDFORTHISSHIT ] [ ANYONEWHOHADAHEART ] [ WOWMOMENT ] [ OMERS ] [ LAWUNTOHIMSELF ] [ FORMATWAR ]

Plans for the future

To begin with, we’d really like to improve the overall design of the Acrostic Editor and make it more user friendly. Let us know if you have any design ideas, we’d love to hear your suggestions!
I’ve also been thinking about different algorithms for generating answers in the Acrostic Generator. One idea is to use a divide-and-conquer approach, where we recursively split the quote until we find a set of sub-quotes that satisfy all constraints of answers.

To conclude, here’s an acrostic for you all to solve, created using the Acrostic Editor! You can load the file in Crosswords and start playing.

Thanks for reading!

Luis Villa

@luis

book reports, mid-2025

Some brief notes on books, at the start of a summer that hopefully will allow for more reading.

Monk and Robot (Becky Chambers); Mossa and Pleiti (Malka Older)

Summer reading rec, and ask for more recs: “cozy sci-fi” is now a thing and I love it. Characters going through life, drinking hot beverages, trying to be comfortable despite (waves hands) everything. Mostly coincidentally, doing all those things in post-dystopian far-away planets (one fictional, one Jupiter).

Novellas, perfect for summer reads. Find a sunny nook (or better yet, a rainy summer day nook) and enjoy. (New Mossa and Pleiti comes out Tuesday, yay!)

Buzz Aldrin, in the Apollo 11 capsule, with a bright window visible and many dials and switches behind him. He is wearing white clothing with NASA patches, but not a full space suit, and is focused on whatever is in front of him, out of frame.
A complex socio-technical system, bounding boldly, perhaps foolishly, into the future. (Original via NASA)

Underground Empire (Henry Farrell and Abraham Newman)

This book is about things I know a fair bit about, like international trade sanctions, money transfers, and technology (particularly the intersection of spying and data pipes). So in some sense I learned very little.

But the book efficiently crystallizes all that knowledge into a very dense, smart, important observation: that some aspects of American so-called “soft” (i.e., non-military) power are increasingly very “hard”. To paraphrase, the book’s core claim is that the US has, since 2001, amassed what amounts to several fragmentary “Departments of Economic War”. These mechanisms use control over financial and IP transfers to allow whoever is in power in DC to fight whoever it wants. This is primarily China, Russia, and Iran, but also to some extent entities as big as the EU and as small as individual cargo ship captains.

The results are many. Among other things, the authors conclude that because this change is not widely-noticed, it is undertheorized, and so many of the players lack the intellectual toolkit to reason about it. Relatedly, they argue that the entire international system is currently more fragile and unstable than it has been in a long time exactly because of this dynamic: the US’s long-standing military power is now matched by globe-spanning economic control that previous US governments have mostly lacked, which in turn is causing the EU and China to try to build their own countervailing mechanisms. But everyone involved is feeling their way through it—which can easily lead to spirals. (Threaded throughout the book, but only rarely explicitly discussed, is the role of democracy in all of this—suffice to say that as told here, it is rarely a constraining factor.)

Tech as we normally think of it is not a big player here, but it nevertheless plays several illustrative parts. Microsoft’s historical turn from government fighter to Ukraine supporter, Meta’s failed cryptocurrency, and various wiretapping episodes come up for discussion—but mostly in contexts that are very reactive to, or provocative irritants to, the 800lb gorillas of IRL governments.

Unusually for my past book reports on governance and power, where I’ve been known to stretch almost anything into an allegory for open, I’m not sure that this has many parallels. Rather, the relevance to open is that these are a series of fights that open may increasingly be drawn into—and/or destabilize. Ultimately, one way of thinking about this modern form of power dynamics is that it is a governmental search for “chokepoints” that can be used to force others to bend the knee, and a corresponding distaste for sources of independent power that have no obvious chokepoints. That’s a legitimately complicated problem—the authors have some interesting discussion with Vitalik Buterin about it—and open, like everyone else, is going to have to adapt.

Dying Every Day: Seneca at the Court of Nero (James Romm)

Good news: this book documents that being a thoughtful person, seeking good in the world, in the time of a mad king, is not a new problem.

Bad news: this book mostly documents that the ancients didn’t have better answers to this problem than we moderns do.

The Challenger Launch Decision (Diane Vaughan)

The research and history in this book are amazing, but the terminology does not quite capture what it is trying to share out as learnings. (It’s also very dry.)

The key takeaway: good people, doing hard work, in systems that slowly learn to handle variation, can be completely unprepared for—and incapable of handling—things outside the scope of that variation.

It’s definitely the best book about the political analysis of the New York Times in the age of the modern GOP. Also probably good for a lot of technical organizations handling the radical-but-seemingly-small changes detailed in Underground Empire.

Spacesuit: Fashioning Apollo (Nicholas De Monchaux)

A book about how interfaces between humans and technology are hard. (I mean clothes, but also everything else.) Delightful and wide-ranging; you maybe won’t really learn any deep lessons here, but it’d be a great way to force undergrads to grapple with Hard Human Problems That Engineers Thought Would Be Simple.

Crosswords 0.3.15: Planet Crosswords

It’s summer, which means it’s time for GSoC/Outreachy. This is the third year the Crosswords team is participating, and it has been fantastic. We had a noticeably large number of really strong candidates who showed up and wrote high-quality submissions — significantly more than in previous years. There were more candidates than we could handle, and it was a shame to have to turn some down.

In the end, Tanmay, Federico, and I got together and decided to stretch ourselves and accept three interns for the summer: Nancy, Toluwaleke, and Victor. They will be working on word lists, printing, and overlays respectively, and I’m so thrilled to have them helping out.

A result of this is that there will be a larger number of Crossword posts on planet.gnome.org this summer. I hope everyone is okay with that and encourages the interns so they stay involved with GNOME and Free Software.

Release

This last release was mostly a bugfix release. The intern candidates outdid themselves this year by fixing a large number of bugs — so many that I’m releasing this to get them to users. Some highlights:

  • Mahmoud added an open dialog to the game and got auto-download of puzzles working. He also created an Arabic .ipuz file to test with, which revealed quite a few rendering bugs.
Arabic Crossword
  • Toluwaleke refined the selection code. This was accidentally marked as a newcomer issue, and was absolutely not supposed to be. Nevertheless, he nailed it and has left selection in a much healthier state.
    • [ It’s worth highlighting that the initial MR for this issue is a masterclass in contributions, and one of the best MRs I’ve ever received. If you’re a potential GSoC intern, you could learn a lot from reading it. ]
  • Victor fixed divided cells and a number of small behavior bugs. He also did methodical research into other crossword editors.
Divided Cells
  • Patel and Soham contributed visual improvements for barred and acrostic puzzles.

In addition, GSoC-alum Tanmay has kept plugging on his Acrostic editor. It’s gotten a lot more sophisticated, and for the first time we’re including it in the stable build (albeit as a Beta). This version can be used to create a simple acrostic puzzle. I’ll let Tanmay post about it in the coming days. 

Coordinates

Specs are hard, especially for file formats. We made an unfortunate discovery about the ipuz spec this cycle. The spec uses a coordinate system to refer to cells in a puzzle — but does not define what the coordinate system means. It provides an example with the upper left corner being (0,0) and that’s intuitively a normal addressing system. However, they refer to (ROW1, COL1) in the spec, and there are a few examples in the spec that start the upper left at (1, 1).

When we ran across this issue while writing libipuz, we tried a few puzzles in Puzzazz (the original implementation) to confirm that (0,0) was the intended origin coordinate. However, we have run across some implementations and puzzles in the wild starting at (1,1). This is going to be pretty painful to untangle, as the two interpretations are largely incompatible. We have a plan to detect the coordinate system being used, but it’ll be a rough heuristic at best until the spec gets clarified and revamped.

By the Numbers

With this release, I took a step back and took stock of my little project. The recent releases have seemed pretty substantial, and it’s worth doing a little introspection. As of this release, we’ve reached:

  • 85KLOC total. 60KLOC in the app and 25KLOC in the library
  • 27K words of design docs (development guide)
  • 126 distinct test cases
  • 33 different contributors. I’m now at 82% of the commits and dropping
  • 6 translations (and hopefully many more some day)
  • Over 100 unencumbered puzzles in the base puzzle sets. This number needs to grow.

All in all, not too shabby, and not so little anymore.

A Final Request

Crosswords has an official flatpak, an unofficial snap, and Fedora and Arch packages. People have built it on Macs, and there’s even an APK that exists. However, there’s still no Debian package. That distro is not my world: I’m hoping someone out there will be inspired to package this project for us.

Transparency report for May 2025

Transparency report for July 2024 to May 2025 – GNOME Code of Conduct Committee

GNOME’s Code of Conduct is our community’s shared standard of behavior for participants in GNOME. This is the Code of Conduct Committee’s periodic summary report of its activities from July 2024 to May 2025.

The current members of the CoC Committee are:

  • Anisa Kuci
  • Carlos Garnacho
  • Christopher Davis
  • Federico Mena Quintero
  • Michael Downey
  • Rosanna Yuen

All the members of the CoC Committee have completed Code of Conduct Incident Response training provided by Otter Tech, and are professionally trained to handle incident reports in GNOME community events.

The committee has an email address that can be used to send reports, conduct@gnome.org, as well as a website for report submission: https://conduct.gnome.org/

Reports

Since July 2024, the committee has received reports on a total of 19 possible incidents. Of these, 9 incidents were determined to be actionable by the committee, and were further resolved during the reporting period.

  • Report about an individual in a GNOME Matrix room acting rudely toward others. A Committee representative discussed the issue with the reported individual and adjusted room permissions.
  • Report about an individual acting in a hostile manner toward a new GNOME contributor in a community channel. A Committee representative contacted the reported person to provide a warning and to suggest methods of friendlier engagement.
  • Report about a discussion on a community channel that had turned heated. After going through the referenced conversation, the Committee noted that all participants were using non-friendly language and that the turning point in the conversation was a disagreement over a moderator’s action. The committee contacted the moderator and reminded them to use kinder words in the future.
  • Report related to technical topics out of the scope of the CoC committee. The issue was forwarded to the Board of Directors.
  • Report about members’ replies in community channels; after reviewing the conversation the CoC committee decided that it was not actionable. The conversation did not violate the Code of Conduct.
  • Report about inappropriate and insulting comments made by a member in social moments during an offline event. The CoC Committee sent a warning to the reported person.
  • Report against two members making comments the reporter considered disrespectful in a community channel. After reading through the conversation, the Committee did not see any violations to the CoC. No actions were taken.
  • Report on someone using abrasive and aggressive language in a community channel. After reading the conversation, the Committee agrees with this assessment. As this person had previously been found to have violated the CoC, the Committee has banned the person from the channel for one month.
  • Report about ableist language in a GitLab merge request. The reported person was given a warning not to use such language.
  • Report against GNOME in general without any specifics. A request for more information was sent, and after no reply for a number of months, the issue was closed with no action.
  • Report against the moderating team’s efforts to keep discussions within the Code of Conduct. No action was taken.
  • Report about a contributor being aggressive toward the reporter, who works with them, on multiple occasions. The CoC committee talked to both the reporter and the reported person, and also to other people working with them, in order to resolve the disagreements. It emerged that the reporter also had some patterns in their behavior that made it difficult to collaborate with them. The conclusion was that all parties acknowledged their wrong behaviors and will work on improving them and being more collaborative.
  • Report about a disagreement with a maintainer’s decision. The report was non-actionable.
  • Report about a contributor who set up harassment campaigns against Foundation and non-Foundation members. This person has been suspended indefinitely from participation in GNOME.
  • Report about a moderator being hostile in community channels; this was not the first report we received about this member, so they were banned from the channel.
  • Report about a blog syndicated in planet.gnome.org. The committee evaluated the blog in question and found it not to contravene the CoC, so it took no action afterwards.
  • Five reports, unrelated to each other, with technical support requests. These were marked as not actionable.
  • Report with a general comment about GNOME, marked as not actionable.
  • A question about where to report security issues; informed the reporter about security@gnome.org.

Changes to the CoC Committee procedures

The Foundation’s Executive Director commissioned an external review of the CoC Committee’s procedures in October of 2024. After discussion with the Foundation Board of Directors, we have made the following changes to the committee procedures:

  • Establish a “chain of command” for requesting tasks to be performed by sysadmins after an incident report.
  • Clarify the procedures for notifying affected people and teams or committees after a report.
  • Clarify the way notifications are made about a report’s consequences, and update the Committee’s communications infrastructure in general.
  • Specify how to handle reports related to Foundation staff or contractors.

The history of changes can be seen in this merge request to the repository for the Code of Conduct.

CoC Committee blog

We have a new blog at https://conduct.gnome.org/blog/, where you can read this transparency report. In the future, we hope to post materials about dealing with interpersonal conflict, non-violent communication, and other ideas to help the GNOME community.

Meetings of the CoC committee

The CoC committee has two meetings each month for general updates, and weekly ad-hoc meetings when they receive reports. There are also in-person meetings during GNOME events.

Ways to contact the CoC committee

  • https://conduct.gnome.org – contains the GNOME Code of Conduct and a reporting form.
  • conduct@gnome.org – incident reports, questions, etc.

Alley Chaggar

@AlleyChaggar

Compiler Knowledge

Intro

I apologize that I’m a little late updating my blog, but over the past two weeks, I’ve been diving into Vala’s compiler and exploring how JSON (de)serialization could be integrated. My mentor, Lorenz, and I agreed that focusing on JSON is a good beginning.

Understanding the Vala Compiler

Learning the steps it takes to go from Vala code to C code is absolutely fascinating.

Vala’s Compiler 101

  • The first step in the compiler is lexical analysis. This is handled by valascanner.vala, where your Vala code gets tokenized: it is broken up into chunks called tokens that are easier for the compiler to understand.
switch (begin[0]) {
		case 'f':
			if (matches (begin, "for")) return TokenType.FOR;
			break;
		case 'g':
			if (matches (begin, "get")) return TokenType.GET;
			break;

The code above is a snippet of Vala’s scanner; it’s responsible for recognizing specific keywords like ‘for’ and ‘get’ and returning the appropriate token type.

  • Next is syntax analysis and the creation of the abstract syntax tree (AST). In Vala, it’s managed by valaparser.vala, which checks whether your code structure is correct: for example, whether that pesky ‘}’ is missing.

    inline bool expect (TokenType type) throws ParseError {
    	if (accept (type)) {
    		return true;
    	}
      
    	switch (type) {
    	case TokenType.CLOSE_BRACE:
    		safe_prev ();
    		report_parse_error (new ParseError.SYNTAX ("following block delimiter %s missing", type.to_string ()));
    		return true;
    	case TokenType.CLOSE_BRACKET:
    	case TokenType.CLOSE_PARENS:
    	case TokenType.SEMICOLON:
    		safe_prev ();
    		report_parse_error (new ParseError.SYNTAX ("following expression/statement delimiter %s missing", type.to_string ()));
    		return true;
    	default:
    		throw new ParseError.SYNTAX ("expected %s", type.to_string ());
    	}
    }
    

    This is a snippet of Vala’s parser; it tries to accept a specific token type, like that ‘}’ again. If the ‘}’ is there, parsing continues; if not, it reports a syntax error.

  • Then comes semantic analysis, the “meat and logic,” as I like to call it. This happens in valasemanticanalyzer.vala, where the compiler checks if things make sense. Do the types match? Are you using the correct number of parameters?

    public bool is_in_constructor () {
          unowned Symbol? sym = current_symbol;
          while (sym != null) {
              if (sym is Constructor) {
                  return true;
              }
              sym = sym.parent_symbol;
          }
          return false;
      }
    

    This code is a snippet of Vala’s semantic analyzer, which helps the compiler determine whether the current code is inside a constructor. Starting from the current symbol, which represents where the compiler is in the code, it moves up through the parent symbols. If it finds a parent symbol that is a constructor, it returns true; if it reaches a null parent symbol, it returns false.

  • After that, the flow analysis phase, located in valaflowanalyzer.vala, analyzes the execution order of the code. It figures out how control flows through the program, which is useful for things like variable initialization and unreachable code.

    public override void visit_lambda_expression (LambdaExpression le) {
    	var old_current_block = current_block;
    	var old_unreachable_reported = unreachable_reported;
    	var old_jump_stack = jump_stack;
    	mark_unreachable ();
    	jump_stack = new ArrayList<JumpTarget> ();
      
    	le.accept_children (this);
      
    	current_block = old_current_block;
    	unreachable_reported = old_unreachable_reported;
    	jump_stack = old_jump_stack;
    	}
    

    This snippet of Vala’s flow analyzer ensures that control flow, such as unreachable code or jump statements, is properly analyzed within lambda expressions.

  • After all that, we now want to convert the Vala code into C code using a variety of Vala files in the directories ccode and codegen.

All of these classes inherit from valacodevisitor.vala, which is basically the mother class: it provides the visit_* methods that allow each phase in the compiler to walk the source code tree.

I know this brief overview isn’t all there is to understand about the compiler, but it’s a start. Also, let’s take a moment to appreciate everyone who has contributed to Vala’s compiler design, it’s truly an art 🎨

The Coding Period Begins!!!

Now that GSoC’s official coding period is here, I’m continuing my research on how to implement JSON support.

Right now, I’m still learning the codegen phase, AKA the phase that converts Vala into C. I’m exploring json-glib and starting to work on a valajsonmodule.vala in the codegen.

Another thing I want to work on is the Vala docs. The docs aren’t bad, but I’ve realized the information is pretty limited the deeper you get into the compiler.

I’m excited that this is starting to slowly make sense, little by little.

Using Portals with unsandboxed apps

Nowadays XDG Desktop Portal plays an important part in the interaction between apps and the system, providing much needed security and unifying the experience, regardless of the desktop environment or toolkit you're using. While one could say it was created for sandboxed Flatpak apps, portals could bring major advantages to unsandboxed (host) apps as well:

- Writing universal code: you don't need to care about writing desktop-specific code, as different desktops and toolkits will provide their own implementations

- Respecting the privacy of the user: portals use a permission system, where permissions can be granted, revoked and controlled by the user. While host apps could bypass them, the user can still be presented with dialogs asking for permission to perform certain actions or obtain information.

Okay, so they seem like a good idea after all. Now, how do we use them?

More often than not, you don't actually have to manually call the D-Bus API - for many of the portals, toolkits and desktops will interact with them on your behalf, exposing easy-to-use high-level APIs. For example, if you're developing an app using GTK4 on GNOME and want to inhibit suspend or logout, you would call gtk_application_inhibit, which will actually prefer using the Inhibit portal over directly talking to gnome-session-manager. There are also convenience libraries to help you, available for different programming languages.
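As a small example of that high-level API, here's roughly what the inhibit call looks like in a GTK4 app (app and window are assumed to exist in the surrounding code):

/* Inhibit logout and suspend while a long-running operation is in progress.
 * Behind the scenes, GTK prefers the Inhibit portal when it's available. */
guint cookie = gtk_application_inhibit (app,
                                        GTK_WINDOW (window),
                                        GTK_APPLICATION_INHIBIT_LOGOUT |
                                        GTK_APPLICATION_INHIBIT_SUSPEND,
                                        "Copying files");

/* ... do the work ... */

gtk_application_uninhibit (app, cookie);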

That sounds easy; is that all? Unfortunately, there are some caveats.

The fact that we can safely say that Flatpaks are first-class citizens when interacting with portals, compared to host apps, is a good thing - they offer many benefits, and we should embrace them. However, in the real world there are many instances of apps installed without a sandbox, and the transition will take time, so in the meantime we need to make sure they play correctly with portals as well.

One such instance is getting information about the app - in Flatpak land, it's obtained from a special .flatpak-info file located in the sandbox. For host apps, though, xdg-desktop-portal tries to parse the app id from the systemd unit name, only accepting the "app-"-prefixed format specified in the XDG standardization for applications. This works for some applications, but unfortunately not all, at least at this time. One such example is D-Bus activated apps, which are started with a "dbus-"-prefixed systemd unit name, or apps started from the terminal with different prefixes altogether. In all those cases, the app id exposed to the portal is empty.

One major problem when xdg-desktop-portal doesn't have access to the app id is the failure to inhibit logout/suspend when using the Inhibit portal. Applications on GNOME using GTK4 will call gtk_application_inhibit, which in turn calls the xdg-desktop-portal-gtk inhibit portal implementation, which finally talks to the gnome-session-manager D-Bus API. However, that API requires an app id to function correctly, and will not inhibit the session without it. The situation should get better in the next release of gnome-session, but it could still cause problems for the user, who won't know the name of the application that is preventing logout/suspend.

Moreover, while not as critical, other portals also rely on that information in some way. The Account portal, used for obtaining information about the user, will mention the app's display name when asking for confirmation; without it, it will refer to the "requesting app", which the user may not recognize and is more likely to cancel. The Location portal will do the same, and the Background portal won't allow autostart if it's requested.

GNOME Shell logout dialog when Nautilus is copying files, inhibiting indirectly via portal 


How can we make sure our host apps play well with portals?

Fortunately, there are many ways to make sure your host app interacts correctly with portals. First and foremost, you should always try to follow the XDG cgroup pathname standardization for applications. Most desktop environments already follow the standard, and if they don't, you should definitely report it as a bug. There are some exceptions, however - D-Bus activated apps are started by the D-Bus message bus implementations on behalf of desktops, and currently they don't put the app in the correct systemd unit. There is an effort to fix that on the dbus-broker side, but these things take time, and there is also the case of apps started from the terminal, which have different unit names altogether.

When for some reason your app was launched in a way that doesn't follow the standard, you can use the special interface for registering with XDG Desktop Portal, the host app Registry, which overrides the automatic detection. It should be considered a temporary solution, as it is expected to be eventually deprecated (with the details of the replacement specified in the documentation); nevertheless, it lets us fix the problem at present. Some toolkits, like GTK, will register the application for you during the GtkApplication startup call.

There is one caveat, though: it needs to be the first call to the portal, otherwise it will not override the automatic detection. This means that when relying on GTK to handle the registration, you need to make sure you don't interact with the portal before the GtkApplication startup chain-up call. So no more gtk_init in main.c (which on Wayland uses the Settings portal to open the display); all such code needs to be moved to just after the application startup chain-up. If for some reason you really cannot do that, you'll have to call the D-Bus method yourself, before any portal interaction is made.
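Here's a minimal sketch of that ordering with GTK4 in C. MyApp and the application id are illustrative; the point is simply that nothing touches a portal before the startup chain-up.

#include <gtk/gtk.h>

G_DECLARE_FINAL_TYPE (MyApp, my_app, MY, APP, GtkApplication)
struct _MyApp { GtkApplication parent_instance; };
G_DEFINE_TYPE (MyApp, my_app, GTK_TYPE_APPLICATION)

static void
my_app_startup (GApplication *app)
{
  /* Chain up first: GTK registers the host app with xdg-desktop-portal here. */
  G_APPLICATION_CLASS (my_app_parent_class)->startup (app);

  /* Any portal-dependent setup (settings, inhibitors, ...) goes after this point. */
}

static void
my_app_class_init (MyAppClass *klass)
{
  G_APPLICATION_CLASS (klass)->startup = my_app_startup;
}

static void
my_app_init (MyApp *self)
{
}

int
main (int argc, char **argv)
{
  /* Note: no gtk_init () here, for the reasons explained above. */
  MyApp *app = g_object_new (my_app_get_type (),
                             "application-id", "org.example.MyApp",
                             NULL);
  int status = g_application_run (G_APPLICATION (app), argc, argv);
  g_object_unref (app);
  return status;
}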

The end is never the end...

If you made it this far, congratulations, and thanks for going down this rabbit hole with me. If it's still not enough, you can check out the ticket I reported and worked on in nautilus, which gives even more context on how we ended up here. Hope you learned something that will make your app better :)

Victor Ma

@victorma

Coding begins!

Today marks the end of the community bonding period, and the start of the coding period, of GSoC.

In the last two weeks, I’ve been looking into other crossword editors that are on the market, in order to see what features they have that we should implement. I compiled everything I saw into a findings document.

Once that was done, I went through the document and distilled it down into a final list. I also added other feature ideas that I already had in mind.

Eventually, through a discussion with my mentor, we decided that I should start by tackling a bug that I found. This will help me get more familiar with the fill algorithm code, and it will inform my decisions going forward, in terms of what features I should work on.

Tobias Bernard

@tbernard

Summer of GNOME OS

So far, GNOME OS has mostly been used for testing in virtual machines, but what if you could just use it as your primary OS on real hardware?

Turns out you can!

While it’s still early days and it’s not recommended for non-technical audiences, GNOME OS is now ready for developers and early adopters who know how to deal with occasional bugs (and importantly, file those bugs when they occur).

The Challenge

To get GNOME OS to the next stage we need a lot more hardware testing. This is why this summer (June, July, and August) we’re launching a GNOME OS daily-driving challenge. This is how it works:

  • 10 points for daily driving GNOME OS on your primary computer for at least 4 weeks
  • 1 point for every (valid, non-duplicate) issue created
  • 3 points for every (merged) merge request
  • 5 points for fixing an open issue

You can sign up for the challenge and claim points by adding yourself to the list of participants on the Hedgedoc. As the challenge progresses, add any issues and MRs you opened to the list.

The person with the most points on September 1 will receive a OnePlus 6 (running postmarketOS, unless someone gets GNOME OS to work on it by then). The three people with the most points on September 1 (noon UTC) will receive a limited-edition shirt (stay tuned for designs!).

Important links:

FAQ

Why GNOME OS?

Using GNOME OS Nightly means you’re running the latest main for all of our projects. This means you get all the dope new features as they land, months before they hit Fedora Rawhide et al.

For GNOME contributors that’s especially valuable because it allows for easy testing of things that are annoying/impossible to try in a VM or nested session (e.g. notifications or touch input). For feature branches there’s also the possibility to install a sysext of a development branch for system components, making it easy to try things out before they’ve even landed.

More people daily driving Nightly has huge benefits for the ecosystem, because it allows for catching issues early in the cycle, while they’re still easy to fix.

Is my device supported?

Most laptops from the past 5 years are probably fine, especially Thinkpads. The most important requirement is UEFI, and if you want to test the TPM security features you also need a semi-recent TPM (any Windows 11 laptop should have one). If you’re not sure, ask in the GNOME OS channel.

Does $APP work on GNOME OS?

Anything available as a Flatpak works fine. For other things, you’ll have to build a sysext.

Generally we’re interested in collecting use cases that Flatpak doesn’t cover currently. One of the goals for this initiative is finding both short-term workarounds and long-term solutions for those cases.

Please add such use cases to the relevant section in the Hedgedoc.

Any other known limitations?

GNOME OS uses systemd-sysupdate for updating the system, which doesn’t yet support delta updates. This means you have to download a new 2GB image from scratch for every update, which might be an issue if you don’t have regular access to a fast internet connection.

The current installer is temporary, so it’s missing many features we’ll have in the real installer, and the UI isn’t very polished.

Anything else I should know before trying to install GNOME OS?

Update the device’s firmware, including the TPM’s firmware, before nuking the Windows install the computer came with (I’m speaking from experience)!

I tried it, but I’m having problems :(

Ask in the GNOME OS Matrix channel!

Michael Hill

@mdhill

Publishing a book from the GNOME desktop

My first two books were written online using Pressbooks in a browser. A change in the company’s pricing model prompted me to migrate another edition of the second book to LaTeX. Many enjoyable hours were spent searching online for how to implement everything from the basics to special effects. After a year and a half a nearly finished book suddenly congealed.

Here’s what I’m using: Fedora’s TeX Live stack, Emacs (with AUCTeX and the memoir class), Evince, and the Citations flatpak, all on a GNOME desktop. The cover of the first book was done professionally by a friend. For the second book (first and second editions) I’ve used the GNU Image Manipulation Program.

For print on demand, Lulu.com. The company was founded by Bob Young, who (among other achievements) rejuvenated a local football team, coincidentally my dad’s (for nearly 80 years and counting). Lulu was one of the options recommended by Adam Hyde at the end of the Mallard book sprint hosted by Google. Our book didn’t get printed in time to take home, so I uploaded it to Lulu and ordered a few copies with great results. My second book is also on Amazon’s KDP under another ISBN; I’m debating whether to do that again.

Does this all need to be done from GNOME? For me, yes. The short answer came from Richard Schwarting on the occasion of our Boston Summit road trip: “GNOME makes me happy.”

The long answer…
In my career working as a CAD designer in engineering, I’ve used various products by Autodesk (among others). I lived through the AutoCAD-MicroStation war of the 1990s on the side of MicroStation (using AutoCAD when necessary). MicroStation brought elegance to the battle, basing their PC and UNIX ports on their revolutionary new Mac interface. They produced a student version for Linux. After Windows 95 the war was over and mediocrity won.

Our first home computer was an SGI Indy, purchased right in the middle of that CAD war. Having experienced MicroStation on IRIX I can say it’s like running GNOME on a PC: elegant if not exquisite compared to the alternative.

For ten years I was the IT guy at a small engineering company. While carrying out my insidious plan of installing Linux servers and routers, I was able to indulge certain pastimes, building and testing XEmacs (formerly Lucid Emacs) and fledgling GNOME on Debian unstable/experimental. Through the SGI Linux effort I got to meet online acquaintances from Sweden, Mexico, and Germany in person at Ottawa Linux Symposium and Debconf.

At the peak of my IT endeavours, I was reading email in Evolution from OpenXchange Server on SuSE Enterprise Server while serving a Windows workstation network with Samba. When we were acquired by a much larger company, my Linux servers met with expedient demise as we were absorbed into their global Windows Server network. The IT department was regionalized and I was promoted back into the engineering side of things. It was after that I encountered the docs team.

These days I’m compelled to keep Windows in a Box on my GNOME desktop in order to run Autodesk software. It’s not unusual for me to grind my teeth while I’m working. A month ago a surprise hiatus in my day job was announced, giving me time to enjoy GNOME, finish the book, and write a blog post.

So yes, it has to be GNOME.

In 2004 I used LaTeX in XEmacs to write a magazine article that was ultimately published in the UK. This week, for old times’ sake, I installed XEmacs (no longer packaged for Fedora) on my desktop. This requires an EPEL 8 package on CentOS 9 in Boxes. It can be seen in the screenshot. The syntax highlighting is real but LaTeX-mode isn’t quite operational yet.

Nancy Nyambura

@nwnyambura

Outreachy Internship: My First Two Weeks with GNOME


Diving into Word Scoring for Crosswords

In my first two weeks as an Outreachy intern with GNOME, I’ve been getting familiar with the project I’ll be contributing to and settling into a rhythm with my mentor, Jonathan Blandford. We’ve agreed to meet every Monday to review the past week and plan goals for the next — something I’ve already found incredibly grounding and helpful.

What I’m Working On: The Word Score Project

My project revolves around improving how GNOME’s crossword tools (like GNOME Crosswords) assess and rank words. This is part of a larger effort to support puzzle constructors by helping them pick better words for their grids — ones that are fun, fresh, and fair.

But what makes a “good” crossword word?

This is what the Word Score project aims to answer. It proposes a scoring system that assigns numerical values to words based on multiple measurable traits, such as:

  • Lexical interest (e.g. does it contain unusual bigrams/trigrams like “KN” or “OXC”?),
  • Frequency in natural language (based on datasets like Google Ngrams),
  • Familiarity to solvers (which may differ from frequency),
  • Definition count (some words like SET or RUN are goldmines for cryptic clues),
  • Sentiment and appropriateness (nobody wants a vulgar word in a breakfast puzzle).

The goal is to build a system that supports both the autofill functionality and the word list interface in GNOME Crosswords, giving human setters better tools while respecting editorial judgment. In other words, this project isn’t about replacing setters — it’s about enhancing their toolkit.

You can read more about the project’s goals and philosophy in our draft document: Thoughts on Scoring Words (final link coming soon).

Week 1: Building and Breaking Puzzles

During my first week, I spent time getting familiar with the project environment and experimenting with crossword puzzle generation. I created test puzzles to better understand how word placement, scoring, and validation work under the hood.

This hands-on experimentation helped me form a clearer mental model of how GNOME Crosswords structures and fills puzzles — and why scoring matters. The way words interact in a grid can make some fills elegant and others feel forced or unplayable.

Week 2: Wrestling with libipuz and Introspection

In the second week, my focus shifted to working on libipuz, a C library that parses and exports puzzles using the IPUZ format. But getting libipuz working with GNOME’s introspection system proved more challenging than expected.

Initially, I tried to use it inside the crosswords container, but it wasn’t cooperating. After some digging (and rebuilding), we decided to create a separate container specifically for libipuz to enable introspection and allow scripts written in languages like Python and JavaScript to interact with it.

This also gave me a deeper understanding of how GNOME handles language bindings via GObject Introspection — something I hadn’t worked with before, but I’m quickly getting the hang of.
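
For readers who haven’t met it before, the mechanism looks like this from the scripting side: once a library ships introspection data (a typelib), PyGObject can call its C API directly, with no hand-written bindings. Here is a tiny illustration using GLib; libipuz gets loaded the same way once its typelib is available in the container (I’m not showing its namespace name and version here, since those depend on how the library is built).

import gi
gi.require_version("GLib", "2.0")
from gi.repository import GLib

# GLib.markup_escape_text is the C function g_markup_escape_text(),
# made callable from Python purely through its introspection data.
print(GLib.markup_escape_text("clues & answers", -1))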

Bonus: Scrabble-Inspired Scoring Script

As a side exploration, I also wrote a quick Python script that calculates Scrabble-style scores for words. While Scrabble scoring isn’t the same as what we want in crosswords (it values rare letters like Z and Q), it gave me a fun way to experiment with scoring mechanics and visualize how simple rules change the ranking of word lists. This mini-project helped me warm up to the idea of building more complex scoring systems later on.
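
The script boils down to a letter-value lookup and a sum. A condensed sketch of the same idea (standard English Scrabble letter values and a toy word list, not the exact script) looks roughly like this:

SCRABBLE_POINTS = {
    **dict.fromkeys("AEILNORSTU", 1),
    **dict.fromkeys("DG", 2),
    **dict.fromkeys("BCMP", 3),
    **dict.fromkeys("FHVWY", 4),
    "K": 5,
    **dict.fromkeys("JX", 8),
    **dict.fromkeys("QZ", 10),
}

def scrabble_score(word):
    """Sum the letter values; non-letters score zero."""
    return sum(SCRABBLE_POINTS.get(ch, 0) for ch in word.upper())

words = ["JAZZ", "QUIZ", "SET", "RUN", "OXCART"]
for word in sorted(words, key=scrabble_score, reverse=True):
    print(f"{word:8} {scrabble_score(word)}")

As expected, Q- and Z-heavy words float straight to the top, which is roughly the opposite of what a crossword setter usually wants, and part of why the Word Score project needs richer traits than letter rarity.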


What’s Next?

In the coming weeks, I’ll continue refining the scoring dimensions, writing more scripts to calculate traits (especially frequency and lexical interest), and exploring how this scoring system can be surfaced in GNOME Crosswords. I’m excited to see how this evolves — and even more excited to share updates as I go.

Thanks for reading!


Ahmed Fatthi

@ausername1040

GSoC 2025: First Two Weeks Progress Report

The first two weeks of my Google Summer of Code (GSoC) journey with GNOME Papers have been both exciting and productive. I had the opportunity to meet my mentors, discuss the project goals, and dive into my first major task: improving the way document mutex locks are handled in the codebase.


🤝 Mentor Meeting & Planning

We kicked off with a meeting to get to know each other and to discuss the open Merge Request 499. The MR focuses on moving document mutex locks from the libview/shell layer down to the individual backends (DjVu, PDF, TIFF, Comics). We also outlined the remaining work and clarified how to approach it for the best results.
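
To sketch the shape of that change in Python-flavoured pseudocode (with hypothetical names, not the actual Papers C code): rather than the view/shell layer wrapping every backend call in the shared document mutex, each backend takes the lock only around the calls into its own non-thread-safe rendering library.

import threading

_doc_mutex = threading.Lock()

# Before: callers in the view/shell layer hold the lock around every backend call.
def render_from_shell(backend, page):
    with _doc_mutex:
        return backend.render(page)

# After: the backend guards only the calls into its non-thread-safe library,
# so callers no longer need to know the lock exists.
class PdfBackend:
    def render(self, page):
        with _doc_mutex:
            return self._call_rendering_library(page)

    def _call_rendering_library(self, page):
        return f"rendered page {page}"  # placeholder for the real library call

print(PdfBackend().render(1))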

Alireza Shabani

@Revisto

We Started a Podcast for This Week in GNOME (in Farsi)

Hi, we’ve started a new project: a Farsi-language podcast version of This Week in GNOME.

Each week, we read and summarise the latest TWIG post in Farsi, covering updates from GNOME Core, GNOME Circle apps, and other community-related news. Our goal is to help Persian-speaking users and contributors stay connected with the GNOME ecosystem.

The podcast is hosted by me (Revisto), along with Mirsobhan and Hadi. We release one short episode per week.

Since I also make music, I created a short theme for the podcast to give it more identity and consistency across episodes. It’s simple, but it adds a nice touch of production value that we hope makes the podcast feel more polished.

We’re also keeping a GitHub repository to which I’m uploading each episode’s script (in Farsi) in Markdown, along with the audio files. The logo and banner assets have been uploaded in SVG as well, for transparency.

Partial screenshot of the 201st TWIG podcast script in Obsidian, written in Farsi in Markdown.

You can listen to the podcast on:

Let us know what you think, and feel free to share it with Farsi-speaking friends or communities interested in GNOME.

Ahmed Fatthi

@ausername1040

About This Blog & My GSoC Journey

Learn more about this blog, my GSoC 2025 project with GNOME, and my background in open source development.

Christian Hergert

@hergertme

Sysprof in your Mesa

Thanks to the work of Christian Gmeiner, support for annotating time regions using Sysprof marks has landed in Mesa.

That means you’ll be able to open captures with Sysprof and see that data alongside other useful information, including callgraphs and flamegraphs.

I do think there is a lot more we can do around better visualizations in Sysprof. If that is something you’re interested in working on, please stop by #gnome-hackers on Libera.chat or drop me an email and I can find things for you to work on.

See the merge request here.

Hans de Goede

@hansdg

IPU6 cameras with ov02c10 / ov02e10 now supported in Fedora

I'm happy to share that three major IPU6-camera-related kernel changes from linux-next have been backported to Fedora and have been available for about a week now in the Fedora kernel-6.14.6-300.fc42 (or later) package:

  1. Support for the OV02C10 camera sensor. This should, for example, enable the camera to work out of the box on all Dell XPS 9x40 models.
  2. Support for the OV02E10 camera sensor. This should, for example, enable the camera to work out of the box on Dell Precision 5690 laptops. When combined with item 3 below and the USBIO drivers from rpmfusion, this should also enable the camera on other laptop models such as the Dell Latitude 7450.
  3. Support for the special handshake GPIO used to turn on the sensor and allow i2c access to the sensor on various new laptop models using the Lattice MIPI aggregator FPGA / USBIO chip.

If you want to give this a test using the libcamera-softwareISP FOSS stack, run the following commands:

sudo rm -f /etc/modprobe.d/ipu6-driver-select.conf
sudo dnf update 'kernel*'
sudo dnf install libcamera-qcam
reboot
qcam

Note that washed-out colors and/or the image being a bit over- or under-exposed is expected behavior at the moment; the software ISP needs more work to improve the image quality. If your camera still does not work after these changes and you've not already filed a bug for this camera, please file a bug following these instructions.

See my previous blogpost on how to also test Intel's proprietary stack from rpmfusion, if you have that installed.


Status update, 22/05/2025

Hello. It is May, my favourite month. I’m in Manchester, mainly as I’m moving projects at work, and it’s useful to do that face-to-face.

For the last 2 and a half years, my job has mostly involved a huge, old application inside a big company, which I can’t tell you anything about. I learned a lot about how to tackle really, really big software problems where nobody can tell you how the system works and nobody can clearly describe the problem they want you to solve. It was the first time in a long time that I worked on production infrastructure, in the sense that we could have caused major outages if we rolled out bad changes. Our team didn’t cause any major outages in all that time. I will take that as a sign of success. (There’s still plenty of legacy application left to decommission, but it’s no longer my problem.)

A green tiled outside wall with graffiti

During that project I tried to make time to work on end-to-end testing of GNOME using openQA as well… with some success, in the sense that GNOME OS still has working openQA tests, but I didn’t do very well at making improvements, and I still don’t know if or when I’ll ever have time to look further at end-to-end testing for graphical desktops. We did at least run a great Outreachy internship, with Tanju and Dorothy adding quite a few new tests.

Several distros test GNOME downstream, but we still don’t have much of a story for how they could collaborate upstream. We do still have the monthly Linux QA call, so we have a space to coordinate work in that area… but we need people who can do the work.

My job now, for the moment, involves a Linux-based operating system that is intended to be used in safety-critical contexts. I know a bit about operating systems and not much about functional safety. I have seen enough to know there is nothing magic about a “safety certificate” — it represents some thinking about risks and how to detect and mitigate them. I know Codethink is doing some original thinking in this area. It’s interesting to join in and learn about what we’ve done so far and where it’s all going.

Giving credit to people

The new GNOME website, which I really like, describes the project as “An independent computing platform for everyone”.

There is something political about that statement: it’s implying that we should work towards equal access to computer technology. Something which is not currently very equal. Writing software isn’t going to solve that on its own, but it feels like a necessary part of the puzzle.

If I was writing a more literal tagline for the GNOME project, I might write: “A largely anarchic group maintaining complex software used by millions of people, often for little or no money.” I suppose that describes many open source projects.

Something that always bugs me is how a lot of this work is invisible. That’s a problem everywhere: from big companies and governments, down to families and local community groups, there’s usually somebody who does more work than they get credit for.

But we can work to give credit where credit is due. And recently several people have done that!

Outgoing ED Richard Littauer in “So Long and Thanks For All the Fish” shouted out a load of people who work hard in the GNOME Foundation to make stuff work.

Then incoming GNOME ED, Steven Deobald wrote a very detailed “2025-05-09 Foundation Report” (well done for using the correct date format, as well), giving you some idea about how much time it takes to onboard a new director, and how many people are involved.

And then Georges wrote about some people working hard on accessibility in “In celebration of accessibility”.

Giving credit is important and helpful. In fact, that’s just given me an idea, but explaining that will have to wait til next month.

canal in manchester