Recently, Node.js started to support building with clang-cl on Windows. I had the chance to try it out this week, and while it still needs some fixups in my case, it’s mostly working very well now. Here are some notes about it.
Hey all, quick post today to mention that I added tracing support to the Whippet GC library. If the support library for LTTng is available when Whippet is compiled, Whippet embedders can visualize the GC process. Like this!
Click above for a full-scale screenshot of the Perfetto trace explorer processing the nboyer microbenchmark with the parallel copying collector on a 2.5x heap. Of course no image will have all the information; the nice thing about trace visualizers like this is that you can zoom in to sub-microsecond spans to see exactly what is happening, have nice mouseovers and clicky-clickies. Fun times!
Adding tracepoints to a library is not too hard in the end. You need to pull in the lttng-ust library, which has a pkg-config file. You need to declare your tracepoints in one of your header files. Then you have a minimal C file that includes the header, to generate the code needed to emit tracepoints.
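For illustration, here is roughly what that looks like, with a made-up provider and event name; this uses the classic lttng-ust macro names (recent lttng-ust releases also provide LTTNG_UST_-prefixed equivalents), and is a sketch rather than Whippet’s actual tracepoint definitions.

/* gc-tracepoints.h -- hypothetical provider and event names. */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER whippet

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "gc-tracepoints.h"

#if !defined(GC_TRACEPOINTS_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define GC_TRACEPOINTS_H

#include <stdint.h>
#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(whippet, minor_gc_begin,
                 TP_ARGS(uint64_t, heap_size),
                 TP_FIELDS(ctf_integer(uint64_t, heap_size, heap_size)))

#endif /* GC_TRACEPOINTS_H */

#include <lttng/tracepoint-event.h>

The minimal C file is then just a couple of defines plus the include:

/* gc-tracepoints.c -- expands the header into the actual probe code. */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "gc-tracepoints.h"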
Annoyingly, this header file you write needs to be in one of the -I directories; it can’t be just in the source directory, because lttng includes it seven times (!!) using computed includes (!!!) and because the LTTng file header that does all the computed including isn’t in your directory, GCC won’t find it. It’s pretty ugly. Ugliest part, I would say. But, grit your teeth, because it’s worth it.
Finally you pepper your source with tracepoints, which probably you wrap in some macro so that you don’t have to require LTTng, and so you can switch to other tracepoint libraries, and so on.
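For example, something along these lines (an illustrative macro, not Whippet’s actual one) keeps call sites tidy and compiles to nothing when LTTng support is not built in:

/* Hypothetical wrapper macro around the lttng-ust tracepoint() call. */
#ifdef GC_TRACEPOINT_LTTNG
#include "gc-tracepoints.h"
#define GC_TRACEPOINT(name, ...) tracepoint(whippet, name, ##__VA_ARGS__)
#else
#define GC_TRACEPOINT(name, ...) do {} while (0)
#endif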
I wrote up a little guide for Whippet users about how to actually get traces. It’s not as easy as perf record, which I think is an error. Another ugly point. Buck up, though, you are so close to graphs!
By which I mean, so close to having to write a Python script to make graphs! Because LTTng writes its logs in so-called Common Trace Format, which as you might guess is not very common. I have a colleague who swears by it, that for him it is the lowest-overhead system, and indeed in my case it has no measurable overhead when trace data is not being collected, but his group uses custom scripts to convert the CTF data that he collects to... GTKWave (?!?!?!!).
In my case I wanted to use Perfetto’s UI, so I found a script to convert from CTF to the JSON-based tracing format that Chrome profiling used to use. But, it uses an old version of Babeltrace that wasn’t available on my system, so I had to write a new script (!!?!?!?!!), probably the most Python I have written in the last 20 years.
Yes. God I love blinkenlights. As long as it’s low-maintenance going forward, I am satisfied with the tradeoffs. Even the fact that I had to write a script to process the logs isn’t so bad, because it let me get nice nested events, which most stock tracing tools don’t allow you to do.
I fixed a small performance bug because of it – a worker thread was spinning waiting for a pool to terminate instead of helping out. A win, and one that never would have shown up on a sampling profiler either. I suspect that as I add more tracepoints, more bugs will be found and fixed.
I think the only thing that would be better is if tracepoints were a part of Linux system ABIs – that there would be header files to emit tracepoint metadata in all binaries, that you wouldn’t have to link to any library, and the actual tracing tools would be intermediated by that ABI in such a way that you wouldn’t depend on those tools at build-time or distribution-time. But until then, I will take what I can get. Happy tracing!
Update on what happened in WebKit in the week from February 3 to February 11.
Fixed an assertion crash in the remote Web Inspector when its resources contain a UTF-8 “non-breaking space” character.
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
Media playback now supports choosing the output audio device on a per-element basis, using the setSinkId() API. This also added the support needed for enumerating audio outputs, which Web applications need in order to obtain the identifiers of the available devices. Typical usage includes allowing the user to choose the audio output used in WebRTC-based conference calls.
For now, the feature flags ExposeSpeakers, ExposeSpeakersWithoutMicrophone, and PerElementSpeakerSelection need to be enabled for testing.
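As a rough sketch of typical usage from a page (plain Web API calls, nothing WebKit-specific; the selector and device choice are just examples):

// Enumerate audio outputs (device IDs generally require a prior permission
// grant, e.g. via getUserMedia), then route a media element's audio to the
// chosen output device. Run inside an async function or a module.
const devices = await navigator.mediaDevices.enumerateDevices();
const outputs = devices.filter((d) => d.kind === "audiooutput");
const video = document.querySelector("video");
await video.setSinkId(outputs[0].deviceId);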
Set the proper playbin flags which are needed to properly use OpenMAX on the Raspberry Pi.
Landed a change that adds a visualization for damage rectangles, controlled by the WEBKIT_SHOW_DAMAGE environment variable. This highlights areas damaged during rendering of every frame—as long as damage propagation is enabled.
Stable releases of WebKitGTK 2.46.6 and WPE WebKit 2.46.6 are now available. These come along with the first security advisory of the year (WSA-2025-0001: GTK, WPE): they contain mainly security fixes, and everybody is advised to update.
The unstable release train continues as well, with WebKitGTK 2.47.4 and WPE WebKit 2.47.4 available for testing. These are snapshots of the current development status, and while expected to work there may be rough edges—if you encounter any issue, reports at the WebKit Bugzilla are always welcome.
The recently released libwpe 1.16.1 accidentally introduced an ABI break, which has been corrected in libwpe 1.16.2. There are no other changes, and the latter should be preferred.
That’s all for this week!
Hey all, the video of my FOSDEM talk on Whippet is up:
Slides here, if that’s your thing.
I ended the talk with some puzzling results around generational collection, which prompted yesterday’s post. I don’t have a firm answer yet. Or rather, perhaps for the splay benchmark, it is to be expected that a generational GC is not great; but there are other benchmarks that also show suboptimal throughput in generational configurations. Surely it is some tuning issue; I’ll be looking into it.
Happy hacking!
This blog post is to announce that Igalia has received a grant from the NLnet Foundation to work on solving cross-root ARIA issues in Shadow DOM. My colleague Alice Boxhall, who has been working on sorting out this issue for several years, is doing the work related to this grant, with support from other Igalians.
Shadow DOM has some issues that prevent it from being used in some situations if you want to have an accessible application. This has been identified by the Web Components Working Group as one of the top priority issues that need to be sorted out.
Briefly speaking, there are mainly two different problems when you want to reference elements in ARIA attributes across shadow root boundaries.
The first issue is that you cannot reference things outside the Shadow DOM. Imagine you have a custom element (#customButton) which contains a native button in its Shadow DOM, and you want to associate the internal button with a label (#label) which is outside, in the light DOM.
<label id="label">Label</label>
<custom-button id="customButton">
<template shadowrootmode="open">
<div>foo</div>
<button aria-labelledby="label">Button</button>
<div>bar</div>
</template>
</custom-button>
And the second problem is that you cannot reference things inside a Shadow DOM from the outside. Imagine the opposite situation, where you have a custom label (#customLabel) with a native label in its Shadow DOM that you want to reference from a button (#button) in the light DOM.
<custom-label id="customLabel">
<template shadowrootmode="open">
<div>foo</div>
<label>Label</label>
<div>bar</div>
</template>
</custom-label>
<button id="button" aria-labelledby="customLabel">Button</button>
This is a huge issue for web components: if they want to provide an accessible experience to users, they cannot use Shadow DOM, even though they would like to benefit from its encapsulation properties. For that reason, many web component libraries don’t use Shadow DOM yet and have to rely on workarounds or custom polyfills.
If you want to know more about the problem, Alice goes deep into the topic in her blog post How Shadow DOM and accessibility are in conflict.
The Accessibility Object Model (AOM) effort was started several years ago with a wider scope: it aimed to solve many different things, including the problems described in this blog post. At that time Alice was at Google and Alex Surkov at Mozilla; both were part of this effort. Coincidentally, they are now at Igalia, where together with Joanmarie Diggs and Valerie Young they form a dream team of accessibility experts in our company.
Even though the full problem hasn’t been sorted out yet, there has been some progress with the Element Reflection feature, which allows ARIA relationship attributes to be reflected as element references. With this, users can specify them without the need to assign globally unique ID attributes to each element. This feature has been implemented in Chromium and WebKit by Igalia. So instead of doing something like:
<button id="button" aria-describedby="description">Button</button>
<div id="description">Description of the button.</div>
You could specify it like:
<button id="button">Button</button>
<div id="description">Description of the button.</div>
<script>
button.ariaDescribedByElements = [description];
</script>
Coming back to Shadow DOM, this feature also enables authors to specify ARIA relationships pointing to things outside the Shadow DOM (the first kind of problem described in the previous section); however, it doesn’t allow referencing elements inside another Shadow DOM (the second problem). Anyway, let’s see an example of how this will solve the first issue:
<label id="label">Label</label>
<custom-button id="customButton"></custom-button>
<script>
const shadowRoot = customButton.attachShadow({mode: "open"});
const foo = document.createElement("div");
foo.textContent = "foo";
shadowRoot.appendChild(foo);
const button = document.createElement("button");
button.textContent = "Button";
/* Here is where we reference the outer label from the button inside the Shadow DOM. */
button.ariaLabelledByElements = [label];
shadowRoot.appendChild(button);
const bar = document.createElement("div");
bar.textContent = "bar";
shadowRoot.appendChild(bar);
</script>
Apart from Element Reflection, which only solves part of the issues, there have been other ideas about how to solve these problems: initially the Cross-root ARIA Delegation proposal by Leo Balter at Salesforce, then a different one called Cross-root ARIA Reflection by Westbrook Johnson at Adobe, and finally the Reference Target for Cross-root ARIA proposal by Ben Howell at Microsoft.
Again if you want to learn more about the different nuances of the previous proposals you can revisit Alice’s blog post.
At this point the most promising proposal is the Reference Target one. This proposal allows web authors to use Shadow DOM without breaking the accessibility of their web applications. The proposal is still in flux and it’s currently being prototyped in Chromium and WebKit. Anyway, as an example, this is the kind of API shape that would solve the second problem described in the initial section, where we reference a label (#actualLabel) inside the Shadow DOM from a button (#button) in the light DOM.
<custom-label id="customLabel">
<template shadowrootmode="open"
shadowrootreferencetarget="actualLabel">
<div>foo</div>
<label id="actualLabel">Label</label>
<div>bar</div>
</template>
</custom-label>
<button id="button" aria-labelledby="customLabel">Button</button>
As part of this grant we’ll work on the whole process of getting the Reference Target proposal ready to be shipped in the web rendering engines. Some of the tasks that will be done during this project include work on different fronts:
We’re really grateful that NLnet has trusted us with this project, and we really hope this will allow us to fix an outstanding accessibility issue in the web platform that has been around for far too long already. At the same time, it’s a bit sad that the European Union, through the NGI funds, is the one sponsoring this project, when it will have a very important impact for several big fish that are part of the Web Components WG.
If you want to follow the evolution of this project, I’d suggest following Alice’s blog, where she’ll be updating us on the progress of the different tasks.
Usually in this space I like to share interesting things that I find out; you might call it a research-epistle-publish loop. Today, though, I come not with answers, but with questions, or rather one question, but with fractal surface area: what is the value proposition of generational garbage collection?
The conventional wisdom is encapsulated in a 2004 Blackburn, Cheng, and McKinley paper, “Myths and Realities: The Performance Impact of Garbage Collection”, which compares whole-heap mark-sweep and copying collectors to their generational counterparts, using the Jikes RVM as a test harness. (It also examines a generational reference-counting collector, which is an interesting predecessor to the 2022 LXR work by Zhao, Blackburn, and McKinley.)
The paper finds that generational collectors spend less time than their whole-heap counterparts for a given task. This is mainly due to less time spent collecting, because generational collectors avoid tracing/copying work for older objects that mostly stay in the same place in the live object graph.
The paper also notes an improvement for mutator time under generational GC, but only for the generational mark-sweep collector, which it attributes to the locality and allocation speed benefit of bump-pointer allocation in the nursery. However for copying collectors, generational GC tends to slow down the mutator, probably because of the write barrier, but in the end lower collector times still led to lower total times.
So, I expected generational collectors to always exhibit lower wall-clock times than whole-heap collectors.
In Whippet, I have a garbage collector with an abstract API that specializes at compile-time to the mutator’s object and root-set representation and to the collector’s allocator, write barrier, and other interfaces. I embed it in Whiffle, a simple Scheme-to-C compiler that can run some small multi-threaded benchmarks, for example the classic Gabriel benchmarks. We can then test those benchmarks against different collectors, mutator (thread) counts, and heap sizes. I expect that the generational parallel copying collector takes less time than the whole-heap parallel copying collector.
So, I ran some benchmarks. Take the splay-tree benchmark, derived from Octane’s splay.js. I have a port to Scheme, and the results are... not good!
In this graph the “pcc” series is the whole-heap copying collector, and “generational-pcc” is the generational counterpart, with a nursery sized such that after each collection, its size is 2 MB times the number of active mutator threads in the last cycle. So, for this test with eight threads, on my 8-core Ryzen 7 7840U laptop, the nursery is 16 MB including the copy reserve, which happens to be the same size as the L3 on this CPU. New objects are kept in the nursery one cycle before being promoted to the old generation.
There are also results for “mmc” and “generational-mmc” collectors, which use an Immix-derived algorithm that allows for bump-pointer allocation but which doesn’t require a copy reserve. There, the generational collectors use a sticky mark-bit algorithm, which has very different performance characteristics as promotion is in-place, and the nursery is as large as the available heap size.
The salient point is that at all heap sizes, and for these two very different configurations (mmc and pcc), generational collection takes more time than whole-heap collection. It’s not just the splay benchmark either; I see the same thing for the very different nboyer benchmark. What is the deal?
I am honestly quite perplexed by this state of affairs. I wish I had a narrative to tie this together, but in lieu of that, voici some propositions and observations.
Sometimes people say that the reason generational collection is good is because you get bump-pointer allocation, which has better locality and allocation speed. This is misattribution: it’s bump-pointer allocators that have these benefits. You can have them in whole-heap copying collectors, or you can have them in whole-heap mark-compact or immix collectors that bump-pointer allocate into the holes. Or, true, you can have them in generational collectors with a copying nursery but a freelist-based mark-sweep allocator. But you can also have generational collectors without bump-pointer allocation, as with free-list sticky-mark-bit collectors. To simplify this panorama to “generational collectors have good allocators” is incorrect.
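For reference, here is a minimal sketch of a bump-pointer allocation fast path (illustrative names, not Whippet’s API); the point is that it is just an increment and a bounds check, regardless of which collector owns the space:

#include <stddef.h>
#include <stdint.h>

struct bump_space { uintptr_t hp; uintptr_t limit; };

static inline void *bump_alloc(struct bump_space *space, size_t bytes) {
  size_t aligned = (bytes + 7) & ~(size_t)7;      /* 8-byte alignment */
  if (space->limit - space->hp < aligned)
    return NULL;                /* slow path: grab a new block or collect */
  void *obj = (void *)space->hp;
  space->hp += aligned;
  return obj;
}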
It’s true, generational GC does lower median pause times:
But because a major collection is usually slightly more work under generational GC than in a whole-heap system (e.g. due to the need to reset remembered sets), the maximum pauses are just as big, and even a little bigger:
I am not even sure that it is meaningful to compare median pause times between generational and non-generational collectors, given that the former perform possibly orders of magnitude more collections than the latter.
Doing fewer whole-heap traces is good, though, and in the ideal case, the less frequent major traces under generational collectors allow time for concurrent tracing, which is the true mitigation for long pause times.
Could it be that the test harness I am using is in some way unrepresentative? I don’t have more than one test harness for Whippet yet. I will start work on a second Whippet embedder within the next few weeks, so perhaps we will have an answer there. Still, there is ample time spent in GC pauses in these benchmarks, so surely as a GC workload Whiffle has some utility.
One reason that Whiffle might be unrepresentative is that it is an ahead-of-time compiler, whereas nursery addresses are assigned at run-time. Whippet exposes the necessary information to allow a just-in-time compiler to specialize write barriers, for example the inline check that the field being mutated is not in the nursery, and an AOT compiler can’t encode this as an immediate. But it seems a small detail.
Also, Whiffle doesn’t do much compiler-side work to elide write barriers. Could the cost of write barriers be over-represented in Whiffle, relative to a production language run-time?
Relatedly, Whiffle is just a baseline compiler. It does some partial evaluation but no CFG-level optimization, no contification, no nice closure conversion, no specialization, and so on: is it not representative because it is not an optimizing compiler?
How big should the nursery be? I have no idea.
As a thought experiment, consider the case of a 1 kilobyte nursery. It is probably too small to allow the time for objects to die young, so the survival rate at each minor collection would be high. Above a certain survival rate, generational GC is probably a lose, because your program violates the weak generational hypothesis: it introduces a needless copy for all survivors, and a synchronization for each minor GC.
On the other hand, a 1 GB nursery is probably not great either. It is plenty large enough to allow objects to die young, but the number of survivor objects in a space that large is such that pause times would not be very low, which is one of the things you would like in generational GC. Also, you lose out on locality: a significant fraction of the objects you traverse are probably out of cache and might even incur TLB misses.
So there is probably a happy medium somewhere. My instinct is that for a copying nursery, you want to make it about as big as L3 cache, which on my 8-core laptop is 16 megabytes. Systems are different sizes though; in Whippet my current heuristic is to reserve 2 MB of nursery per core that was active in the previous cycle, so if only 4 threads are allocating, you would have a 8 MB nursery. Is this good? I don’t know.
I don’t have a very large set of benchmarks that run on Whiffle, and they might not be representative. I mean, they are microbenchmarks.
One question I had was about heap sizes. If a benchmark’s maximum heap size fits in L3, which is the case for some of them, then probably generational GC is a wash, because whole-heap collection stays in cache. When I am looking at benchmarks that evaluate generational GC, I make sure to choose those that exceed L3 size by a good factor, for example the 8-mutator splay benchmark in which minimum heap size peaks at 300 MB, or the 8-mutator nboyer-5 which peaks at 1.6 GB.
But then, should nursery size scale with total heap size? I don’t know!
Incidentally, the way that I scale these benchmarks to multiple mutators is a bit odd: they are serial benchmarks, and I just run some number of threads at a time, and scale the heap size accordingly, assuming that the minimum size when there are 4 threads is four times the minimum size when there is just one thread. However, multithreaded programs are unreliable, in the sense that there is no heap size under which they fail and above which they succeed; I quote:
"Consider 10 threads each of which has a local object graph that is usually 10 MB but briefly 100MB when calculating: usually when GC happens, total live object size is 10×10MB=100MB, but sometimes as much as 1 GB; there is a minimum heap size for which the program sometimes works, but also a minimum heap size at which it always works."
A generational collector partitions objects into old and new sets, and a minor collection starts by visiting all old-to-new edges, called the “remembered set”. As the program runs, mutations to old objects might introduce new old-to-new edges. To maintain the remembered set in a generational collector, the mutator invokes write barriers: little bits of code that run when you mutate a field in an object. This is overhead relative to non-generational configurations, where the mutator doesn’t have to invoke collector code when it sets fields.
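To make the overhead concrete, here is a sketch of a field-logging write barrier’s fast path; all helper names are hypothetical, and this is not Whippet’s actual interface:

#include <stdbool.h>

struct gc_object;
/* Hypothetical collector-provided helpers. */
extern bool object_is_old(struct gc_object *obj);
extern bool field_is_logged(void **field);
extern void remember_field(void **field);

/* Store new_value into *field; if this creates an old-to-new edge we have
   not seen before, log the field into the remembered set. */
static inline void write_field(struct gc_object *obj, void **field,
                               void *new_value) {
  if (object_is_old(obj) && !field_is_logged(field))
    remember_field(field);
  *field = new_value;
}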
So, could it be that Whippet’s write barriers or remembered set are somehow so inefficient that my tests are unrepresentative of the state of the art?
I used to use card-marking barriers, but I started to suspect they cause too much overhead during minor GC and introduced too much cache contention. I switched to precise field-logging barriers some months back for Whippet’s Immix-derived space, and we use the same kind of barrier in the generational copying (pcc) collector. I think this is state of the art. I need to see if I can find a configuration that allows me to measure the overhead of these barriers, independently of other components of a generational collector.
A few months ago, my only generational collector used the sticky mark-bit algorithm, which is an unconventional configuration: its nursery is not contiguous, non-moving, and can be as large as the heap. This is part of the reason that I implemented generational support for the parallel copying collector, to have a different and more conventional collector to compare against. But generational collection loses on some of these benchmarks in both places!
On one benchmark which repeatedly constructs some trees and then verifies them, I was seeing terrible results for generational GC, which I realized were because of cooperative safepoints: generational GC collects more often, so it requires that all threads reach safepoints more often, and the non-allocating verification phase wasn’t emitting any safepoints. I had to change the compiler to emit safepoints at regular intervals (in my case, on function entry), and it sped up the generational collector by a significant amount.
This is one instance of a general observation, which is that any work that doesn’t depend on survivor size in a GC pause is more expensive with a generational collector, which runs more collections. Synchronization can be a cost. I had one bug in which tracing ephemerons did work proportional to the size of the whole heap, instead of the nursery; I had to specifically add generational support for the way Whippet deals with ephemerons during a collection to reduce this cost.
Looking deeper at the data, I have partial answers for the splay benchmark, and they are annoying :)
Splay doesn’t actually allocate all that much garbage. At a 2.5x heap, the stock parallel MMC collector (in-place, sticky mark bit) collects... one time. That’s all. Same for the generational MMC collector, because the first collection is always major. So at 2.5x we would expect the generational collector to be slightly slower. The benchmark is simply not very good – or perhaps the most generous interpretation is that it represents tasks that allocate 40 MB or so of long-lived data and not much garbage on top.
Also at 2.5x heap, the whole-heap copying collector runs 9 times, and the generational copying collector does 293 minor collections and... 9 major collections. We are not reducing the number of major GCs. It means either the nursery is too small, so objects aren’t dying young when they could, or the benchmark itself doesn’t conform to the weak generational hypothesis.
At a 1.5x heap, the copying collector doesn’t have enough space to run. For MMC, the non-generational variant collects 7 times, and generational MMC times out. Timing out indicates a bug, I think. Annoying!
I tend to think that if I get results and there were fewer than, like, 5 major collections for a whole-heap collector, that indicates that the benchmark is probably inapplicable at that heap size, and I should somehow surface these anomalies in my analysis scripts.
Doing a similar exercise for nboyer at 2.5x heap with 8 threads (4GB for 1.6GB live data), I see that pcc did 20 major collections, whereas generational pcc lowered that to 8 major collections and 3471 minor collections. Could it be that there are still too many fixed costs associated with synchronizing for global stop-the-world minor collections? I am going to have to add some fine-grained tracing to find out.
I just don’t know! I want to believe that generational collection was an out-and-out win, but I haven’t yet been able to prove it is true.
I do have some homework to do. I need to find a way to test the overhead of my write barrier – probably using the MMC collector and making it only do major collections. I need to fix generational-mmc for splay and a 1.5x heap. And I need to do some fine-grained performance analysis for minor collections in large heaps.
Enough for today. Feedback / reactions very welcome. Thanks for reading and happy hacking!
Embracing an IndieWeb thing...
Over the years, I've been there as things have come and gone on the interwebs. I had accounts on AIM, MySpace, Blogger, FaceBook, WordPress, Google Plus, Twitter, Instagram, and plenty more. On each one, I wind up with a profile where I want to link people to other places I am online - and those places don't always make it easy to do that. So, something short and memorable that you could even type if you had to is ideal - like a handle: @briankardell or @bkardell or, in the end: bkardell.com is pretty great.
Back in 2016, some folks felt the same frustration and instead created this Linktree thing. But... It's just like, a freemium thing that gives you the most basic single page website ever? Yes, and today they have 50 million users and are valued at $1.3 billion - that's a value of $26/user. I mean, I found that amazing.
But, here's the thing: After a while I realized that there's something to it. Not the business model, specifically, but the link in bio idea. The link in bio is generally not a great website on its own, but it's a page where people can pretty quickly navigate to the thing they're looking for without other noise or fluff. Which is something that a home page often isn't. So, I've learned to really appreciate them, actually.
Back in about 2020, some IndieWeb folks began thinking about, criticizing and brainstorming around it too. They began writing about Link in Bio, and why it might be useful to have your own version. There are several examples of people doing something similar.
Recently a bunch of things smashed together in my mind. First, on ShopTalk, I heard Chris and Dave talking about "slash pages" (pages right off the root domain with well known names). Second, I've been working on social media plans - getting away from some platforms and moving to others and thinking about these problems. An IndieWeb-style `/links` page that adds rel="me" links is a really nice, simple way to achieve a whole lot of things if we'd all adopt it - not the least of which is that it's an evergreen link, almost as simple as a handle, to where people can find you should you choose to leave...
So, starting with me, and Igalia, you can find our links at bkardell.com/links and igalia.com/links respectively.
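If you want to do the same, the markup really is minimal; a sketch (the URLs are placeholders for your own profiles):

<!-- /links: list your profiles, each with rel="me" -->
<ul>
  <li><a rel="me" href="https://example.com">My site</a></li>
  <li><a rel="me" href="https://mastodon.example/@me">Mastodon</a></li>
  <li><a rel="me" href="https://github.com/example">GitHub</a></li>
</ul>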
Introduction
If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.
As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update.
However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc).
/etc contains important data that is specific to that particular machine, like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.
SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded [1].
However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.

There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:
https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273
As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!
[1] Not completely discarded: the previous contents of /etc are saved under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup.
Maintaining a downstream of Chromium is hard, because of the speed at which upstream moves and how hard it is to keep our downstream up to date.
A critical aspect is how big the thing we build on top of Chromium is: in other words, the size of our downstream. In this blog post I will review how to measure it, and the impact it has on the costs of maintaining a downstream.
Last year, I started a series of blog posts about the challenges, the organization and the implementation details of maintaining a project that is a downstream of Chromium. This is the third blog post in the series.
The previous posts were:
But, first… What do I mean by the size of the downstream? I am interested in a definition that can be used as a metric, something we can measure and track. A number that allows us to know whether the downstream is increasing or decreasing, and to measure whether a change has an impact on it.
The rough idea is: the bigger the downstream is, the more complex it is to maintain it. I will provide a few metrics that can be used for this purpose.
The most obvious metric is the delta, the difference between upstream Chromium and the downstream. For this, and assuming the downstream uses Git, the definition I use is essentially the result of this command:
git diff --shortstat BASELINE..DOWNSTREAM
BASELINE is a commit reference that represents the pure upstream repository state our downstream is based on (our baseline). DOWNSTREAM is the commit reference we want to compare the baseline to.
As a recommendation, it is useful to maintain tags or branches in our downstream repository that represent strictly the baseline. This way we can use diff tools to represent our delta more easily.
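For instance, something along these lines (the tag name, version and commit hash are just hypothetical examples):

# Tag the upstream commit our downstream is based on, then diff against it.
git tag baseline/132.0.6834.83 1a2b3c4d5e6f
git diff --shortstat baseline/132.0.6834.83..origin/main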
This command is going to return 3 values: the number of files changed, the number of lines added, and the number of lines removed.
We will be mostly interested in tracking the number of lines added and removed.
This definition is interesting as it gives an idea of the amount of lines of code that we need to maintain. It may not reflect the full amount, as some files are maintained outside of the Chromium repository; aggregating the deltas of other repositories changed or added to the build could be useful.
One interesting thing with this approach is that we can also measure the delta of specific paths in the repository. For example, if we want to measure the delta of the content/ path, it is just as easy as doing:
git diff --shortstat BASELINE..DOWNSTREAM content/
The regular delta definition we considered has a problem. All the line changes have the same weight. But, when we update our baseline, a big part of the complexity comes from the conflicts found when rebasing or merging.
So, I am introducing a new definition. Modifying delta: the changes between the baseline and the downstream that affect upstream lines. In this case, we ignore completely any file added only by the downstream, as that is not going to create conflicts.
In Git, we can use filters for that purpose:
git diff --diff-filter=CDMR --shortstat BASELINE..DOWNSTREAM
This will only account for these changes:

M: changes affecting existing files.
R: files that were renamed.
C: files that were copied.
D: files that were deleted.

So, these numbers are going to more accurately represent which parts of our delta can conflict with the changes coming from upstream when we rebase or merge.
Tracking the modifying delta, and reorganizing the project to reduce it, is usually a good strategy for reducing maintenance costs.
An issue we have with the Git diff stats is that they represent modified lines as a block of lines removed and another of lines added.
Fortunately, we can use another tool. Diffstat will do a best-effort attempt to identify which lines are actually modified. It can be easily installed in your distribution of choice (e.g. the diffstat package in Debian/Ubuntu/Red Hat).

This behavior is enabled with the parameter -m:
git diff ...parameters... | diffstat -m
This is the kind of output that is generated. On top of the typical + and - we see the ! for the lines that have been detected to be modified.
$ git show | diffstat -m
paint/timing/container_timing.cc | 5 ++++!
paint/timing/container_timing.h | 1 +
timing/performance_container_timing.cc | 20 ++++++++++++++++++!!
timing/performance_container_timing.h | 5 +++++
timing/performance_container_timing.idl | 1 +
timing/window_performance.cc | 4 ++!!
timing/window_performance.h | 1 +
7 files changed, 32 insertions(+), 5 modifications(!)
Coloring is also available, with the parameter -C.

Using diffstat gives a more accurate insight into both the total delta and the modifying delta.
Now that we have the tools to provide numbers, we can track them over time to know whether our downstream is growing or shrinking.
That can also be used for measuring the impact of different strategies or changes on the downstream maintenance complexity.
But deltas are not the only tool to measure the complexity, especially regarding the effort of maintaining a downstream.
I can enumerate just a few ideas that provide insight into different problems:
Let’s focus now on other factors, not always measurable easily, when we maintain a downstream project.
The complexity of a downstream, especially the one measured by regular delta, is impacted heavily by what is built on top of Chromium.
A full web browser is usually bigger, because it includes the required user experience, and many components that make up what we nowadays consider a browser: history, bookmarks, user profiles, secrets management…
An application runtime for hybrid applications may just have minimal wrappers for integrating a web view, but then maybe a complex set of components for easing the integration with a native toolkit or a specific programming language.
How much do you build on top of Chromium?
For maintenance complexity, the set of boundaries and dependencies is as important as what we build on top:
These questions are especially relevant, as Chromium does not really provide any guarantee about the stability, or even the availability, of existing components.
That said, different layers provided by Chromium change less often than others. Some examples:
Embedding the content layer (//content), for which //content/shell and //chrome are the in-tree examples.
Reusing some of the //components that may be useful for different downstreams.
Taking the full browser, //chrome, and modifying it for the specific downstream user experience. This means a higher modifying delta. And, as the upstream Chrome browser UI also often changes heavily, the frequency of conflicts increases too.

In this post I reviewed different ways to measure the downstream size, and how what we build impacts the complexity of maintenance.
Understanding and tracking our downstream allows us to implement strategies to keep things under control. It also allows us to better understand the cost of a specific feature or an implementation approach.
In the next post in this series, I will write about how the upstream Chromium community helps the downstreams.
Update on what happened in WebKit in the week from January 27 to February 3.
The documentation now has a section on how to use the Web Inspector remotely. This makes information on this topic easier to find, as it was previously scattered around a few different locations.
Jamie continues her Coding Experience work around bringing WebExtensions to the WebKitGTK port. A good part of this involves porting functionality from Objective-C, which only the Apple WebKit ports would use, into C++ code that all ports may use. The latest in this saga was WebExtensionStorageSQLiteStore.
The experimental support for Invoker Commands has been updated to match latest spec changes.
WPE and WebKitGTK now have support for the Cookie Store API.
Implemented experimental support for the CloseWatcher API.
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
The GStreamer WebRTC backend can now recycle inactive senders and support for inactive receivers was also improved. With these changes, support for screen sharing over WebRTC is now more reliable.
On the playback front, a bug in the silent video automatic pause optimization was fixed, the root cause of certain VP9 videos sometimes appearing as empty was found to be in GStreamer, and there is ongoing effort to solve racy crashes when flushing MSE streams.
Support for WebDriver BiDi has been enabled in WebKitGTK as an experimental feature.
That’s all for this week!
Nushell is a new shell (get it?) in development since 2019. Where other shells like bash and zsh treat all data as raw text, nu instead provides a type system for all data flowing through its pipelines, with many commands inspired by functional languages to manipulate that data. The examples on their homepage and in the README.md demonstrate this well, and I recommend taking a quick look if you’re not familiar with the language.
I have been getting familiar with Nu for a few months now, and found it a lot more approachable and user-friendly than traditional shells, and particularly helpful for exploring logs.
I won’t go over all the commands I use in detail, so if anything is ever unclear, have a look at the Command Reference.
The most relevant categories for our use case are probably Strings and Filters.
From inside nushell, you can also use help some_cmd or some_cmd --help, or help commands for a full table of commands that can be manipulated and searched like any other table in nu. And for debugging a pipeline, describe is a very useful command that describes the type of its input.
First of all, we need some custom commands to parse the raw logs into a nu table. Luckily, nushell provides a parse command for exactly this use case, and we can define this regex to use with it:
let gst_regex = ([
'(?<time>[0-9.:]+) +'
'(?<pid>\w+) +'
'(?<thread>\w+) +'
'(?<level>\w+) +'
'(?<category>[\w-]+) +'
'(?<file>[\w.-]+)?:'
'(?<line>\w+):'
'(?<function>[\w()~-]+)?:'
'(?<object><[^>]*>)? +'
'(?<msg>.*)$'
] | str join)
(I use a simple pipeline here to split the string over multiple lines for better readability; it just concatenates the list elements.)
Let’s run a simple pipeline to get some logs to play around with:
GST_DEBUG=*:DEBUG GST_DEBUG_FILE=sample.log gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink
For parsing the file, we need to be careful to remove any ANSI escapes, and split the input into lines. On top of that, we will also store the result in a variable for ease of use:
let gst_log = open sample.log | ansi strip | lines | parse --regex $gst_regex
You can also define a custom command for this, which would look something like:
def "from gst logs" []: string -> table {
$in | ansi strip | lines | parse --regex ([
'(?<time>[0-9.:]+) +'
'(?<pid>\w+) +'
'(?<thread>\w+) +'
'(?<level>\w+) +'
'(?<category>[\w-]+) +'
'(?<file>[\w.-]+)?:'
'(?<line>\w+):'
'(?<function>[\w()~-]+)?:'
'(?<object><[^>]*>)? +'
'(?<msg>.*)$'
] | str join)
}
Define it directly on the command line, or place it in your configuration files. Either way, use the command like this:
let gst_log = open sample.log | from gst logs
If you take a look at a few lines of the table, it should look something like this:
$gst_log | skip 10 | take 10
╭────┬────────────────────┬───────┬────────────┬────────┬─────────────────────┬────────────────┬───────┬──────────────────────────────┬─────────────┬───────────────────────────────────────────────╮
│ # │ time │ pid │ thread │ level │ category │ file │ line │ function │ object │ msg │
├────┼────────────────────┼───────┼────────────┼────────┼─────────────────────┼────────────────┼───────┼──────────────────────────────┼─────────────┼───────────────────────────────────────────────┤
│ 0 │ 0:00:00.003607288 │ 5161 │ 0x1ceba80 │ DEBUG │ GST_ELEMENT_PADS │ gstelement.c │ 315 │ gst_element_base_class_init │ │ type GstBin : factory (nil) │
│ 1 │ 0:00:00.003927025 │ 5161 │ 0x1ceba80 │ INFO │ GST_INIT │ gstcontext.c │ 86 │ _priv_gst_context_initialize │ │ init contexts │
│ 2 │ 0:00:00.004117399 │ 5161 │ 0x1ceba80 │ INFO │ GST_PLUGIN_LOADING │ gstplugin.c │ 328 │ _priv_gst_plugin_initialize │ │ registering 0 static plugins │
│ 3 │ 0:00:00.004164980 │ 5161 │ 0x1ceba80 │ DEBUG │ GST_REGISTRY │ gstregistry.c │ 592 │ gst_registry_add_feature │ <registry0> │ adding feature 0x1d08c70 (bin) │
│ 4 │ 0:00:00.004176720 │ 5161 │ 0x1ceba80 │ DEBUG │ GST_REFCOUNTING │ gstobject.c │ 710 │ gst_object_set_parent │ <bin> │ set parent (ref and sink) │
│ 5 │ 0:00:00.004197201 │ 5161 │ 0x1ceba80 │ DEBUG │ GST_ELEMENT_PADS │ gstelement.c │ 315 │ gst_element_base_class_init │ │ type GstPipeline : factory 0x1d09310 │
│ 6 │ 0:00:00.004243022 │ 5161 │ 0x1ceba80 │ DEBUG │ GST_REGISTRY │ gstregistry.c │ 592 │ gst_registry_add_feature │ <registry0> │ adding feature 0x1d09310 (pipeline) │
│ 7 │ 0:00:00.004254252 │ 5161 │ 0x1ceba80 │ DEBUG │ GST_REFCOUNTING │ gstobject.c │ 710 │ gst_object_set_parent │ <pipeline> │ set parent (ref and sink) │
│ 8 │ 0:00:00.004265272 │ 5161 │ 0x1ceba80 │ INFO │ GST_PLUGIN_LOADING │ gstplugin.c │ 236 │ gst_plugin_register_static │ │ registered static plugin "staticelements" │
│ 9 │ 0:00:00.004276813 │ 5161 │ 0x1ceba80 │ DEBUG │ GST_REGISTRY │ gstregistry.c │ 476 │ gst_registry_add_plugin │ <registry0> │ adding plugin 0x1d084d0 for filename "(NULL)" │
╰────┴────────────────────┴───────┴────────────┴────────┴─────────────────────┴────────────────┴───────┴──────────────────────────────┴─────────────┴───────────────────────────────────────────────╯
skip and take do exactly what it says on the tin - removing the first N rows, and showing only the first N rows, respectively. I use them here to keep the examples short.

To ignore columns, use reject:
$gst_log | skip 10 | take 5 | reject time pid thread
╭───┬───────┬────────────────────┬───────────────┬──────┬──────────────────────────────┬─────────────┬────────────────────────────────╮
│ # │ level │ category │ file │ line │ function │ object │ msg │
├───┼───────┼────────────────────┼───────────────┼──────┼──────────────────────────────┼─────────────┼────────────────────────────────┤
│ 0 │ DEBUG │ GST_ELEMENT_PADS │ gstelement.c │ 315 │ gst_element_base_class_init │ │ type GstBin : factory (nil) │
│ 1 │ INFO │ GST_INIT │ gstcontext.c │ 86 │ _priv_gst_context_initialize │ │ init contexts │
│ 2 │ INFO │ GST_PLUGIN_LOADING │ gstplugin.c │ 328 │ _priv_gst_plugin_initialize │ │ registering 0 static plugins │
│ 3 │ DEBUG │ GST_REGISTRY │ gstregistry.c │ 592 │ gst_registry_add_feature │ <registry0> │ adding feature 0x1d08c70 (bin) │
│ 4 │ DEBUG │ GST_REFCOUNTING │ gstobject.c │ 710 │ gst_object_set_parent │ <bin> │ set parent (ref and sink) │
╰───┴───────┴────────────────────┴───────────────┴──────┴──────────────────────────────┴─────────────┴────────────────────────────────╯
Or its counterpart, select, which is also useful for reordering columns:
$gst_log | skip 10 | take 5 | select msg category level
╭───┬────────────────────────────────┬────────────────────┬───────╮
│ # │ msg │ category │ level │
├───┼────────────────────────────────┼────────────────────┼───────┤
│ 0 │ type GstBin : factory (nil) │ GST_ELEMENT_PADS │ DEBUG │
│ 1 │ init contexts │ GST_INIT │ INFO │
│ 2 │ registering 0 static plugins │ GST_PLUGIN_LOADING │ INFO │
│ 3 │ adding feature 0x1d08c70 (bin) │ GST_REGISTRY │ DEBUG │
│ 4 │ set parent (ref and sink) │ GST_REFCOUNTING │ DEBUG │
╰───┴────────────────────────────────┴────────────────────┴───────╯
Meanwhile, get returns a single column as a list, which can for example be used with uniq to get a list of all objects in the log:
$gst_log | get object | uniq | take 5
╭───┬──────────────╮
│ 0 │ │
│ 1 │ <registry0> │
│ 2 │ <bin> │
│ 3 │ <pipeline> │
│ 4 │ <capsfilter> │
╰───┴──────────────╯
Filtering rows by different criteria works really well with where.
$gst_log | where thread in ['0x7f467c000b90' '0x232fefa0'] and category == GST_STATES | take 5
╭────┬────────────────────┬───────┬─────────────────┬────────┬─────────────┬──────────┬──────┬───────────────────────┬──────────────────┬───────────────────────────────────────────────────────────╮
│ # │ time │ pid │ thread │ level │ category │ file │ line │ function │ object │ msg │
├────┼────────────────────┼───────┼─────────────────┼────────┼─────────────┼──────────┼──────┼───────────────────────┼──────────────────┼───────────────────────────────────────────────────────────┤
│ 0 │ 0:00:01.318390245 │ 5158 │ 0x7f467c000b90 │ DEBUG │ GST_STATES │ gstbin.c │ 1957 │ bin_element_is_sink │ <autovideosink0> │ child autovideosink0-actual-sink-xvimage is sink │
│ 1 │ 0:00:01.318523898 │ 5158 │ 0x7f467c000b90 │ DEBUG │ GST_STATES │ gstbin.c │ 1957 │ bin_element_is_sink │ <pipeline0> │ child autovideosink0 is sink │
│ 2 │ 0:00:01.318558109 │ 5158 │ 0x7f467c000b90 │ DEBUG │ GST_STATES │ gstbin.c │ 1957 │ bin_element_is_sink │ <pipeline0> │ child videoconvert0 is not sink │
│ 3 │ 0:00:01.318569169 │ 5158 │ 0x7f467c000b90 │ DEBUG │ GST_STATES │ gstbin.c │ 1957 │ bin_element_is_sink │ <pipeline0> │ child videotestsrc0 is not sink │
│ 4 │ 0:00:01.338298058 │ 5158 │ 0x7f467c000b90 │ INFO │ GST_STATES │ gstbin.c │ 3408 │ bin_handle_async_done │ <autovideosink0> │ committing state from READY to PAUSED, old pending PAUSED │
╰────┴────────────────────┴───────┴─────────────────┴────────┴─────────────┴──────────┴──────┴───────────────────────┴──────────────────┴───────────────────────────────────────────────────────────╯
It provides special shorthands called row conditions - have a look at the reference for more examples.
Of course, get and where can also be combined:
$gst_log | get category | uniq | where $it starts-with GST | take 5
╭───┬────────────────────╮
│ 0 │ GST_REGISTRY │
│ 1 │ GST_INIT │
│ 2 │ GST_MEMORY │
│ 3 │ GST_ELEMENT_PADS │
│ 4 │ GST_PLUGIN_LOADING │
╰───┴────────────────────╯
And if you need to merge multiple logs, I recommend using sort-by time. This could look like:
let gst_log = (open sample.log) + (open other.log) | from gst logs | sort-by time
While there are many other useful commands, there is one more command I find incredibly useful: explore. It is essentially the nushell equivalent to less, and while it is still quite rough around the edges, I’ve been using it all the time, mostly for its interactive REPL.
First, just pipe the parsed log into explore:
$gst_log | explore
Now, using the :try command opens its REPL. Enter any pipeline at the top, and you will be able to explore its output below:
Switch between the command line and the pager using Tab, and while focused on the pager, search forwards or backwards using / and ?, or enter :help for explanations.
Also have a look at the documentation on explore in the Nushell Book.
Update on what happened in WebKit in the week from January 20 to January 27.
GLib 2.70 will be required starting with the upcoming 2.48 stable releases. This made it possible to remove some code that is no longer needed.
Fixed unlimited memory consumption when playing regular video with the Web Inspector in use.
Speed up reading of large messages sent by the web inspector.
Implemented support for dialog.requestClose().
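A quick sketch of how it differs from close(): requestClose() first fires a cancelable cancel event, so the page gets a chance to keep the dialog open (the unsaved-changes check below is hypothetical):

const dialog = document.querySelector("dialog");
dialog.addEventListener("cancel", (event) => {
  if (hasUnsavedChanges()) event.preventDefault(); // keep the dialog open
});
dialog.requestClose(); // unlike dialog.close(), this can be vetoed above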
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
Fixed the assertion error "pipeline and player states are not synchronized" related to muted video playback in the presence of scrolling. Work is ongoing regarding other bugs reproduced with the same video, some of them related to scrolling and some likely independent.
Fixed lost initial audio samples played using WebAudio on 32-bit Raspberry Pi devices, by preventing the OpenMAX subsystem from entering standby mode.
Landed a change that fixes damage propagation of 3D-transformed layers.
Fixed a regression visiting any web page making use of accelerated ImageBuffers (e.g. canvas) when CPU rendering is used. We were unconditionally creating OpenGL fences, even in CPU rendering mode, and trying to wait for completion in a worker thread that had no OpenGL context (due to CPU rendering). This is an illegal operation in EGL and fired an assertion, crashing the WebProcess.
Despite the work on the WPE Platform API, we continue to maintain the “classic” stack based on libwpe. Thus, we have released libwpe 1.16.1 with the small—but important—addition of support for representing analog button inputs for devices capable of reporting varying amounts of pressure.
That’s all for this week!
Recently, I fixed a macOS-specific startup performance regression in Node.js after an extensive investigation. Along the way, I learned a lot about tools for macOS and Node.
Update on what happened in WebKit in the week from January 13 to January 20.
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
The JavaScriptCore GLib API has gained support for creating Promise objects. This allows integrating asynchronous functionality more ergonomically when interfacing between native code and JavaScript.
Elements with outlines inside scrolling containers now render their outlines properly.
Landed a change that adds multiple fixes to the damage propagation functionality in scenarios such as:
Layers with custom transforms.
Pages with custom viewport scale.
Dynamic layer size changes.
Scrollbar layers.
Landed a change that improves damage propagation in terms of animations handling.
Landed a change that prevents any kind of damage propagation when the feature is disabled at runtime using its corresponding flag. Before that, even though the functionality was runtime-disabled, some memory was still being used and some unneeded calculations were still being done.
New, modern platform API that supersedes usage of libwpe and WPE backends.
Drag gesture threshold, and key repeat delay/interval are now handled through the WPESettings API instead of using hardcoded values. While defaults typically work well, being able to tweak them for certain setups without rebuilding WPE is a welcome addition.
Sylvia has also improved the WPE Platform DRM/KMS backend to pick the default output device scaling factor using WPESettings.
That’s all for this week!
Just as 2025 is starting, we got a new Linux release in mid January, tagged as 6.13. In the spirit of holidays, Linus Torvalds even announced during 6.13-rc6 that he would be building and raffling a guitar pedal for a random kernel developer!
As usual, this release comes with a pack of exciting news done by the kernel community:
This release has two important improvements for task scheduling: lazy preemption and proxy execution. The goal with lazy preemption is to find a better balance between throughput and response time. A secondary goal is being able to make it the preferred non-realtime scheduling policy for most cases. Tasks that really need a reschedule in a hurry will use the older TIF_NEED_RESCHED flag. Preliminary work for proxy execution was merged, which will let us avoid priority-inversion scenarios when using real-time tasks with deadline scheduling, for use cases such as Android.
New important Rust abstractions arrived, such as VFS data structures and interfaces, and also abstractions for misc devices.
Lightweight guard pages: guard pages are used to raise a fatal signal when accessed. This feature had the drawback of having a heavy performance impact, but in this new release the MADV_GUARD_INSTALL flag was added for the madvise() syscall, offering a lightweight way to guard pages (see the sketch after this list).
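A minimal sketch of how the new flag is used (assuming kernel and libc headers recent enough to define MADV_GUARD_INSTALL):

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *region = mmap(NULL, 16 * page, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED)
        return 1;
    /* Turn the first page into a guard page: touching it now raises a fatal
       signal, without the memory overhead of a separate PROT_NONE mapping. */
    if (madvise(region, page, MADV_GUARD_INSTALL) != 0)
        perror("madvise");
    return 0;
}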
To know more about the community improvements, check out the summary made by Kernel Newbies.
Now let’s highlight the contributions made by Igalians for this release.
Case sensitivity has been a traditional difference between Linux distros and MS Windows, with the most popular filesystems being on opposite sides: while ext4 is case-sensitive, NTFS is case-insensitive. This difference proved to be challenging when Windows apps, mainly games, started to be a common use case for Linux distros (thanks to Wine!). For instance, games running through Steam’s Proton would expect the paths assets/player.png and assets/PLAYER.PNG to point to the same file, but this is not the case in ext4. To avoid doing workarounds in userspace, ext4 has supported casefolding since Linux 5.2.
Now, tmpfs joins the group of filesystems with case-insensitive support. This is particularly useful for running games inside containers, like the combination of Wine + Flatpak. In such scenarios, the container shares a subset of the host filesystem with the application, mounting it using tmpfs. To keep the mounted filesystem consistent with the expectations about the host filesystem, if the host filesystem is case-insensitive we can make the container filesystem case-insensitive too. You can read more about the use case in the patchset cover letter.
While the container frameworks implement proper support for this feature, you can already play with it and try it yourself:
$ mount -t tmpfs -o casefold fs_name /mytmpfs
$ cd /mytmpfs # case-sensitive by default, we still need to enable it
$ mkdir a
$ touch a; touch A
$ ls
A a
$ mkdir B; cd b
cd: The directory 'b' does not exist
$ # now let's create a case-insensitive dir
$ mkdir case_dir
$ chattr +F case_dir
$ cd case_dir
$ touch a; touch A
$ ls
a
$ mkdir B; cd b
$ pwd
/home/user/mytmpfs/case_dir/B
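For the curious, this is roughly what chattr +F does under the hood: it sets the FS_CASEFOLD_FL inode flag via the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls. The sketch below is a hedged illustration, assuming headers that define those constants; the flag can only be set on an empty directory of a filesystem mounted with casefold support.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <empty-directory>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY | O_DIRECTORY);
    if (fd < 0) { perror("open"); return 1; }

    int attr = 0;
    if (ioctl(fd, FS_IOC_GETFLAGS, &attr) < 0) { perror("FS_IOC_GETFLAGS"); return 1; }

    attr |= FS_CASEFOLD_FL;               /* the same flag chattr +F sets */
    if (ioctl(fd, FS_IOC_SETFLAGS, &attr) < 0) { perror("FS_IOC_SETFLAGS"); return 1; }

    close(fd);
    return 0;
}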
As part of Igalia’s effort for enhancing the graphics stack for Raspberry Pi, the V3D DRM driver now has support for Super Pages, improving performance and making memory usage more efficient for Raspberry Pi 4 and 5. Using Linux 6.13, the driver will enable the MMU to allocate not only the default 4KB pages, but also 64KB “Big Pages” and 1MB “Super Pages”.
To measure the difference that Super Pages made to performance, a series of benchmarks were used, and the highlights are:
v3dv-rpi5-vk-full:arm64
You can read a detailed post about this, with all benchmark results, in Maíra’s blog post, including a super cool PlayStation 2 emulation showcase!
transparent_hugepage_shmem= command-line parameter
Igalia contributed new kernel command-line parameters to improve the configuration of multi-size Transparent Huge Pages (mTHP) for shmem. These parameters, transparent_hugepage_shmem= and thp_shmem=, enable more flexible and fine-grained control over the allocation of huge pages when using shmem.
The transparent_hugepage_shmem= parameter allows users to set a global default huge page allocation policy for the internal shmem mount. This is particularly valuable for DRM GPU drivers. Just like CPU architectures, GPUs can also take advantage of huge pages, but this is possible only if DRM GEM objects are backed by huge pages.
Since GEM uses shmem to allocate anonymous pageable memory, having control over the default huge page allocation policy allows for the exploration of huge pages use on GPUs that rely on GEM objects backed by shmem.
In addition, the thp_shmem= parameter provides fine-grained control over the default huge page allocation policy for specific huge page sizes.
By configuring page sizes and policies of huge-page allocations for the internal shmem mount, these changes complement the V3D Super Pages feature, as we can now tailor the size of the huge pages to the needs of our GPUs.
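As a rough, hedged illustration (the concrete values below are assumptions; the kernel’s admin-guide documents the exact grammar), a boot command line could set a global policy for the internal shmem mount and override it for a specific huge page size along these lines:

transparent_hugepage_shmem=within_size thp_shmem=64K:always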
As usual in Linux releases, this one collects a list of improvements made by our team in DRM and AMDGPU driver from the last cycle.
Cosmic (the desktop environment behind Pop! OS) users discovered some bugs in the AMD display driver regarding the handling of overlay planes. These issues were pre-existing and came to light with the introduction of cursor overlay mode. They were causing page faults and divide errors. We debugged the issue together with reporters and proposed a set of solutions that were ultimately accepted by AMD developers in time for this release.
In addition, we worked with AMD developers to migrate the driver-specific handling of EDID data to the DRM common code, using drm_edid opaque objects to avoid handling raw EDID data. The first phase was incorporated and allowed the inclusion of new functionality to get EDID from ACPI. However, some dependencies between the AMD driver’s Linux-dependent and OS-agnostic components were left to be resolved in future iterations. This means that the next steps will focus on removing the legacy way of handling this data.
Also in the AMD driver, we fixed an out-of-bounds memory write, fixed a warning on a boot regression, and exposed special GPU memory pools via the fdinfo common DRM framework.
In the DRM scheduler code, we added some missing locking, removed a couple of re-lock cycles for slightly reduced command submission overheads and clarified the internal documentation.
In the common dma-fence code, we fixed one memory leak on the failure path and one significant runtime memory leak caused by incorrect merging of fences. The latter was found by the community and was manifesting itself as a system out of memory condition after a few hours of gameplay.
sched_ext landed in kernel 6.12 to enable the efficient development of BPF-based custom schedulers. During the 6.13 development cycle, the sched_ext community has made efforts to harden the code to make it more reliable, and to clean up the BPF APIs and documentation for clarity.
Igalia has contributed to hardening the sched_ext core code. We fixed the incorrect use of the scheduler run queue lock, especially during initialization and finalization of the BPF scheduler. Also, we fixed missing RCU lock protection when the sched_ext core selects a CPU for a task. Without these fixes, the sched_ext core could, in the worst case, crash or raise a kernel oops message.
syzkaller, a kernel fuzzer, has been an important instrument to find kernel bugs. With the help of KASAN, a memory error detector, and syzbot, numerous such bugs have been reported and fixed.
Igalians have contributed such fixes across many subsystems (media, networking, etc.), helping reduce the number of open bugs.
vc4_perfmon_find()
get_order_from_str() to internal.h
scx_bpf_dispatch[_vtime]() to scx_bpf_dsq_insert[_vtime]()
scx_bpf_consume() to scx_bpf_dsq_move_to_local()
scx_bpf_dispatch[_vtime]_from_dsq*() to scx_bpf_dsq_move[_vtime]*()
2024 was another busy year for Igalia CSR. In the past 12 months, Igalia has been continuing the traditional effort on the Non-Governmental Organizations (NGOs), Reforestation, and Social Investment projects. We added a new NGO to the list and started a couple of new Social Investment projects. The CSR commission has also been looking at creating guidance on how to create and organize a cooperative based on our experience and exploring new communication channels. And we are excited about our first CSR podcast!
In July 2024 Igalia published the first CSR podcast, thanks to Paulo Matos, Eric Meyer, and Brian Kardell!
The podcast discusses Igalia’s flat structure and why we believe that CSR is interesting for Igalia. It also covers Igalia’s approach and perspective on our social responsibilities, the projects we have, Igalia’s approach and conscience, the impact of CSR, and our vision for the future.
If interested, check out Igalia Chats: Social Responsibility At Igalia.
Since 2007 Igalia has been donating 0.7% of our income annually to a list of NGOs proposed by the Igalians. Working with these partners, Igalia continued the effort in a wide range of areas including development aid and humanitarian action, health, functional disabilities, ecology and animal welfare, transparency, and information, etc.
These organizations reported regularly to the commission on finance, progress, and outcomes of the dedicated projects. Most projects have been progressing nicely and steadily in 2024. Here we’d like to talk about a couple of new NGO projects we recently added.
The Degen Foundation is a small private foundation, based in A Coruña that has been working for more than ten years on neurodegenerative diseases. The Foundation was born as Foundation “Curemos el Parkinson” in 2015 when its founder and president, Alberto Amil, was diagnosed with a particularly severe and complex version of Parkinson’s Disease.
Igalia started its collaboration with the Degen Foundation in 2023, mainly engaged in the development of the first phase of the Degen Community platform, a virtual meeting and emotional support point for patients. Studies consistently show that emotional support is as crucial as clinical support for neurodegenerative disease patients. The Degen Community platform aims to provide emotional support via a pack of tools/apps. The platform also will act as an information portal to publish relevant and up-to-date information for patients and carers. The platform has been under design and volunteers have been sourced to collaborate on content etc. The organization plans to launch the platform in 2025.
In 2024, we introduced a new NGO, Heyva Sor a Kurdistanê, to Igalia’s NGO list. Heyva Sor a Kurdistanê is a humanitarian aid organization established to assist people in the harsh conditions of the ongoing war in Kurdistan. The organization conducts relief efforts on fundamental needs such as food, health, shelter, and education. They have been providing continuous assistance and promoting solidarity, sacrifice, and mutual support in society since 1993. The organization has become a beacon of hope for the population in Kurdistan.
Storm DANA, which hit the Valencian territory in October 2024, has had a particular impact on Horta Sud, a region that has been devastated by the catastrophe.
The CSR Commission responded quickly to this emergency incident. After collecting the votes from Igalians, the commission decided to allocate the remaining undistributed NGO donation budget to aid Horta Sud in rebuilding their community. The first donation was made via Fundació Horta Sud and the second contribution via Cerai. Both Fundació Horta Sud and Cerai are local organizations working in the affected area and they were proposed by our colleague Jordi Mallach. We also bought a nice drawing by Mariscal, a well-known Valencian artist.
This year we started two new social investments: Extension of the Yoff Library project and Biomans Project. Meanwhile, after preparation was completed in 2023, UNICEF’s Casitas Infantiles project started on time.
– Casitas Infantiles (Children’s Small Houses in Cuba)
In Cuba, state educational centers only care for around 19% of children between 1 and 6 years old. Casitas Infantiles was proposed by UNICEF to Igalia to help provide children with “Children’s Small Houses”, a concept of using adapted premises in workplaces, companies, and cooperatives as shelters for children’s education. This solution has been applied over the years in several provinces; it has proved to work well and has been in high demand recently. After collecting feedback and thoughts from Igalians, the CSR commission reached the decision to support this for a period of 24 months, targeting setting up 28 small houses to accommodate 947 children.
The project started in March 2024. We received reports in June and December detailing the first 16 small houses selected, resource acquisition and distribution, and training activities carried out for 186 educational agents and 856 parents or childminders to raise awareness of positive methods of education and parenting. Workshops and training were also carried out to raise awareness of the opening and continuity of children’s houses in key sectors.
– Extension of the Yoff Library Project
This is an extension of our Library in Yoff project.
This project progressed as planned. The construction work (Phase 5) was completed. An on-site visit in June carried out the training action (Phase 6) and the furniture and bibliography sourcing operations (Phase 7). A follow-up on-site visit in November brought back some lovely videos showing how the library looks and works today, and the positive feedback from the locals.
The extension project was to support completing the library with a few final bits, including kitchen extension, school furniture renovation, and computer and network equipment. It’s great to see the impact the library has on the local community.
– Biomans Project
Biomans is a circular economy project that focuses its activity on the sustainable use of residual wood for its conversion into wood biomass for heating. The goal of the project is to promote green and inclusive employment in rural Galicia for people at risk of social exclusion, mainly those with intellectual disabilities.
The AMICOS Association initiated the project and has acquired a plot of land as the premises for a factory and training unit to develop the activity. Igalia’s donation would be used for the construction of the factory.
Igalia started the Reforestation project in 2019. Partnering with Galnus, the Reforestation project focuses on conserving and expanding native, old-growth forests to capture and store carbon emissions over the long term.
Check on our blog, Igalia on Reforestation, for the projects carried out in the past few years.
In 2024, Galnus proposed ROIS III to Igalia. ROIS III is an extension of the project we are running at the Rois community land. The additional area to work in this project is around 1 hectare, adjacent to the 4 hectares we have already been working on. This would mean that we are building a new native forest of over 5 hectares. Funding for this extension work was in place in November and we shall hear more about this in 2025.
The other proposal from Galnus in 2024 was A Coruña Urban Forest project.
The concept of the urban forest project is to create an urban forest in the surroundings of “Parque de Bens”. This project would become a model of public-private collaboration, encouraging the participation of other companies and public institutions in the development of environmental and social projects. It also incorporates a new model of green infrastructure, different from the usual parks and green areas, which require high maintenance and have low natural interest.
This is an exciting proposal. It’s different from our past and existing reforestation projects. After some discussions and feasibility studies, the commission decided to take a step forward and this proposal has now moved to the agreement handling stage.
With some exciting project proposals received from the Igalians for 2025, we are looking forward to another good year!
Update on what happened in WebKit in the week from December 31, 2024 to January 13, 2025.
Landed a fix to the experimental Trusted Types implementation for certain event handler content attributes not being protected even though they are sinks.
Landed a fix to the experimental Trusted Types implementation where the SVGScriptElement.className property was being protected even though it's not a sink.
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
Support for the H.264 “constrained-high” and “high” profiles was improved in the GStreamer WebRTC backend.
The GStreamer WebRTC backend now has basic support for network conditions simulation, that will be useful to improve error recovery and packet loss coping mechanisms.
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
JSC got a fix for a tricky garbage-collection issue.
Landed a change that enables testing the "damage propagation" functionality. This is a first step in a series of fixes and improvements that should stabilize that feature.
Damage propagation passes extra information that describes the viewport areas that have visually changed since the last frame across different graphics subsystems. This allows the WebKit compositor and the system compositor to reduce the amount of painting being done thus reducing usage of resources (CPU, GPU, and memory bus). This is especially helpful on constrained, embedded platforms.
A patch landed to add metadata (title and creation/modification date) to PDF documents generated for printing.
The “suspended” toplevel state is now handled in GTK port to pause rendering when web views are fully obscured.
Jamie Murphy is doing a Coding Experience focused on adding support for WebExtensions. After porting a number of Objective-C classes to C++, to allow using them in all WebKit ports, she has recently made the code build on Linux, and started adding new public API to expose the functionality to GTK applications that embed web views. There is still plenty of work to do, but this is great progress nevertheless.
Sylvia Li, who is also doing a Coding Experience, has updated WPEView so it will pick its default configuration values using the recently added WPESettings API.
That’s all for this week!
I just found a funny failure mode in the Whippet garbage collector and thought readers might be amused.
Say you have a semi-space nursery and a semi-space old generation. Both are block-structured. You are allocating live data, say, a long linked list. Allocation fills the nursery, which triggers a minor GC, which decides to keep everything in the nursery another round, because that’s policy: Whippet gives new objects another cycle in which to potentially become unreachable.
This causes a funny situation!
Consider that the first minor GC doesn’t actually free anything. But, like, nothing: it’s impossible to allocate anything in the nursery after collection, so you run another minor GC, which promotes everything, and you’re back to the initial situation, wash rinse repeat. Copying generational GC is strictly a pessimization in this case, with the additional insult that it doesn’t preserve object allocation order.
Consider also that because copying collectors with block-structured heaps are unreliable, any one of your minor GCs might require more blocks after GC than before. Unlike in the case of a major GC in which this essentially indicates out-of-memory, either because of a mutator bug or because the user didn’t give the program enough heap, for minor GC this is just what we expect when allocating a long linked list.
Therefore we either need to allow a minor GC to allocate fresh blocks – very annoying, and we have to give them back at some point to prevent the nursery from growing over time – or we need to maintain some kind of margin, corresponding to the maximum amount of fragmentation. Or, or, we allow evacuation to fail in a minor GC, in which case we fall back to promotion.
Anyway, I am annoyed and amused and I thought others might share in one or the other of these feelings. Good day and happy hacking!
2024 marked another year of exciting developments and accomplishments for Igalia's Compilers team packed with milestones, breakthroughs, and a fair share of long debugging sessions. From advancing JavaScript standards, improving LLVM RISC-V performance, to diving deep into Vulkan and FEX emulation, we did it all.
From shipping require(esm) in Node.js to porting LLVM’s libc to RISC-V, and enabling WebAssembly’s highest optimization tier in JavaScriptCore, last year was nothing short of transformative. So, grab a coffee (or your preferred debugging beverage), and let’s take a look back at the milestones, challenges, and just plain cool stuff we've been up to last year.
We secured a few significant wins last year when it comes to JavaScript standards. First up, we got Import attributes (alongside JSON modules) to Stage 4. Import attributes allow customizing how modules are imported. For example, in all JavaScript environments you'll be able to natively import JSON files using
import myData from "./data" with { type: "json" };
Not far behind, the Intl.DurationFormat proposal also reached Stage 4. Intl.DurationFormat provides a built-in way to format durations (e.g., days, hours, minutes) in a locale-sensitive manner, enhancing internationalization support.
We also advanced ShadowRealm, the JavaScript API that allows you to execute code in a fresh and isolated environment, to Stage 2.7, making significant progress in resolving the questions about which web APIs should be included. We addressed open issues related to HTML integration and ensured comprehensive WPT coverage.
We didn't stop there though. We implemented MessageFormat 2.0 in ICU4C; you can read more about it in this blog post.
We also continued working on AsyncContext, an API that would let you persist state across awaits and other ways of running code asynchronously. The main blocker for Stage 2.7 is figuring out how it should interact with web APIs, and events in particular, and we have made a lot of progress in that area.
Meanwhile, the source map specification got a major update, with the publication of ECMA-426. This revamped spec, developed alongside Bloomberg, brings much-needed precision and new features like ignoreList, all aimed at improving interoperability.
We also spent time finishing Temporal, the modern date and time API for JavaScript—responding to feedback, refining the API, and reducing binary size. After clearing those hurdles, we moved forward with Test262 coverage and WebKit implementation.
Speaking of Test262, our team continued our co-stewardship of this project that ensures compatibility between JavaScript implementations across browsers and runtimes, thanks to support from the Sovereign Tech Fund. We worked on tests for everything from resizable ArrayBuffers to deferred imports, keeping JavaScript tests both thorough and up to date. To boost Test262 coverage, we successfully ported the first batch of SpiderMonkey's non-262 test suite to Test262. This initiative resulted in the addition of approximately 1,600 new tests, helping to expand and strengthen the testing framework. We would like to thank Bloomberg for supporting this work.
The decimal proposal started the year in Stage 1 and remains so, but it has gone through a number of iterative refinements after being presented at the TC39 plenary.
It was a productive year, and we’re excited to keep pushing these and more proposals forward.
In 2024, we introduced several key enhancements in Node.js.
We kicked things off by adding initial support for CPPGC-based wrapper management, which helps make the C++/JS cross-heap references visible to the garbage collector, reduces the risk of memory leaks and use-after-frees, and improves garbage collection performance.
Node.js contains a significant amount of JavaScript internals, which are precompiled and preloaded into a custom V8 startup snapshot for faster startup. However, embedding these snapshots and code caches introduced reproducibility issues in Node.js executables. In 2024, we made the built-in snapshot and code cache reproducible, which is a major milestone in making the Node.js executables reproducible.
To help user applications start up faster, we also shipped support for on-disk compilation cache for user modules. Using this feature, TypeScript made their CLI start up ~2.5x faster, for example.
One of the most impactful pieces of work we did in 2024 was implementing and shipping require(esm), which is set to accelerate ECMAScript Modules (ESM) adoption in the Node.js ecosystem: package maintainers can now ship ESM directly without having to choose between setting up dual shipping or losing reach, and many frameworks/tools can load user code in ESM directly instead of doing hacky ESM -> CJS conversions, which tend to be bug-prone, or outright rejecting ESM code. Additionally, we landed module.registerHooks() to help the ecosystem migrate away from depending on CJS loader internals and to improve the state of ESM customization.
We also shipped a bunch of other smaller semver-minor features throughout 2024, such as support for embedded assets in single executable applications, crypto.hash() for more efficient one-off hashing, and v8.queryObjects() for memory leak investigation, to name a few.
Apart from project work, we also co-organized the Node.js collaboration summit in Bloomberg's London office, and worked on Node.js's Bluesky content automation for a more transparent and collaborative social media presence of the project.
You can learn more about the new module loading features from our talk at ViteConf Remote, and about require(esm) from our NodeConf EU talk.
In JavaScriptCore, we've ported BBQJIT, the first WebAssembly optimizing tier, to 32 bits. It should be a solid improvement over the previous fast-and-reasonably-performant tier (BBQ) for most workloads. The previous incarnation of this tier generated the low-level Air IR; BBQJIT generates machine code more directly, which means JSC can tier up to it faster.
We're also very close to enabling (likely this month) the highest optimizing tier (called "OMG") for WebAssembly on 32-bits. OMG generates code in the B3 IR, for which JSC implements many more optimizations. B3 then gets lowered to Air and finally to machine code. OMG can increase peak performance for many workloads, at the cost of more time spent on compilation. This has been a year-long effort by multiple people.
In V8, we introduced a new entity called Isolate Groups to break the 4 GB limit of pointer compression usage. It should help V8 embedders like Node, Deno, and others to allocate more isolates per process. We also added support for multi-cage mode for the newly added sandbox feature of V8. You can read more about this in the blog post.
In LLVM's RISC-V backend, we added full scalable vectorization support for the BF16 vector extensions zvfbfmin and zvfbfwma. This means that code like the following C snippet:
void f(float * restrict dst, __bf16 * restrict a, __bf16 * restrict b, int n) {
    for (int i = 0; i < n; i++)
        dst[i] += ((float)a[i] * (float)b[i]);
}
Now gets efficiently vectorized into assembly like this:
vsetvli t4, zero, e16, m1, ta, ma
.LBB0_4:
vl1re16.v v8, (t3)
vl1re16.v v9, (t2)
vl2re32.v v10, (t1)
vfwmaccbf16.vv v10, v8, v9
vs2r.v v10, (t1)
add t3, t3, a4
add t2, t2, a4
sub t0, t0, a6
add t1, t1, a7
bnez t0, .LBB0_4
On top of that, we’ve made significant strides in overall performance last year. Here's a bar plot showing the improvements in performance from LLVM 17 last November to now.
Note: This accomplishment is the result of the combined efforts of many developers, including those at Igalia!
We also ported most of LLVM's libc to rv32 and rv64 in September (~91% of functions enabled). We presented the results at the 2024 LLVM Developers' Meeting; you can watch the video of the talk to learn more about this.
Shader compilation: we've been busy improving the ir3 compiler backend for the freedreno/turnip drivers for Adreno GPUs in Mesa, with a number of highlights along the way.
Dynamic Binary Translation
In 2024, Igalia had the exciting opportunity to contribute to FEX (https://fex-emu.com/), marking our first year working on the project. Last year, our primary focus was improving the x87 FPU emulation. While we worked on several pull requests with targeted optimizations, we also took on a few larger tasks that made a significant impact:
Introducing a new x87 stack optimization pass was one of our major contributions. You can dive deeper into the details of it in the blog post and explore the work itself in the pull request.
Another key feature we added was explicit mode switching between MMX and x87 modes, details can be found in the pull request.
We also focused on SVE optimization for x87 load/store operations. The details of this work can be found in the pull request here.
As we look ahead, we are excited to continue driving the evolution of these technologies while collaborating with our amazing partners and communities.
Happy new year, hackfolk! Today, a note about ephemerons. I thought I was done with them, but it seems they are not done with me. The question at hand is, how do we efficiently and correctly implement ephemerons in a generational collector? Whippet‘s answer turns out to be simple but subtle.
The deal is, I want to be able to evaluate different collector constructions and configurations, and for that I need a performance oracle: a known point in performance space-time against which to compare the unknowns. For example, I want to know how a sticky mark-bit approach to generational collection does relative to the conventional state of the art. To do that, I need to build a conventional system to compare against! If I manage to do a good job building the conventional evacuating nursery, it will have similar performance characteristics as other nurseries in other unlike systems, and thus I can use it as a point of comparison, even to systems I haven’t personally run myself.
So I am adapting the parallel copying collector I described last July to have generational support: a copying (evacuating) young space and a copying old space. Ideally then I’ll be able to build a collector with a copying young space (nursery) but a mostly-marking nofl old space.
A copying nursery has different operational characteristics than a sticky-mark-bit nursery, in a few ways. One is that a sticky mark-bit nursery will promote all survivors at each minor collection, leaving the nursery empty when mutators restart. This has the pathology that objects allocated just before a minor GC aren’t given a chance to “die young”: a sticky-mark-bit GC over-promotes.
Contrast that to a copying nursery, which can decide to promote a survivor or leave it in the young generation. In Whippet the current strategy for the parallel-copying nursery I am working on is to keep freshly allocated objects around for another collection, and only promote them if they are live at the next collection. We can do this with a cheap per-block flag, set if the block has any survivors, which is the case if it was allocated into as part of evacuation during minor GC. This gives objects enough time to die young while not imposing much cost in the way of recording per-object ages.
Recall that during a GC, all inbound edges from outside the graph being traced must be part of the root set. For a minor collection where we just trace the nursery, that root set must include all old-to-new edges, which are maintained in a data structure called the remembered set. Whereas for a sticky-mark-bit collector the remembered set will be empty after each minor GC, for a copying collector this may not be the case. An existing old-to-new remembered edge may be unnecessary, because the target object was promoted; we will clear these old-to-old links at some point. (In practice this is done either in bulk during a major GC, or the next time the remembered set is visited during the root-tracing phase of a minor GC.) Or we could have a new-to-new edge which was not in the remembered set before, but now because the source of the edge was promoted, we must adjoin this old-to-new edge to the remembered set.
To preserve the invariant that all edges into the nursery are part of the roots, we have to pay special attention to this latter kind of edge: we could (should?) remove old-to-promoted edges from the remembered set, but we must add promoted-to-survivor edges. The field tracer has to have specific logic that applies to promoted objects during a minor GC to make the necessary remembered set mutations.
In Whippet, “small” objects (less than 8 kilobytes or so) are allocated into block-structured spaces, and large objects have their own space which is managed differently. Notably, large objects are never moved. There is generational support, but it is currently like the sticky-mark-bit approach: any survivor is promoted. Probably we should change this at some point, at least for collectors that don’t eagerly promote all objects during minor collections.
Finalizers keep their target objects alive until the finalizer is run, which effectively makes each finalizer part of the root set. Ideally we would have a separate finalizer table for young and old objects, but currently Whippet just has one table, which we always fully traverse at the end of a collection. This effectively adds the finalizer table to the remembered set. This is too much work—there is no need to visit finalizers for old objects in a minor GC—but it’s not incorrect.
So what about ephemerons? Recall that an ephemeron is an object E×K⇒V in which there is an edge from E to V if and only if both E and K are live. Implementing this conjunction is surprisingly gnarly; you really want to discover live ephemerons while tracing rather than maintaining a global registry as we do with finalizers. Whippet’s algorithm is derived from what SpiderMonkey does, but extended to be parallel.
The question is, how do we implement ephemeron-reachability while also preserving the invariant that all old-to-new edges are part of the remembered set?
For Whippet, the answer turns out to be simple: an ephemeron E is never older than its K or V, by construction, and we never promote E without also promoting (if necessary) K and V. (Ensuring this second property is somewhat delicate.) In this way you never have an old E and a young K or V, so no edge from an ephemeron need ever go into the remembered set. We still need to run the ephemeron tracing algorithm for any ephemerons discovered as part of a minor collection, but we don’t need to fiddle with the remembered set. Phew!
As long as all promoted objects are older than all survivors, and all ephemerons are younger than the objects referred to by their key and value edges, Whippet’s parallel ephemeron tracing algorithm will efficiently and correctly trace ephemeron edges in a generational collector. This applies trivially to a sticky-mark-bit collector, which always promotes and has no survivors, but it also holds for a copying nursery that allows for survivors after a minor GC, as long as all survivors are younger than all promoted objects.
Until next time, happy hacking in 2025!
Back in 2023, I belatedly jumped on the bandwagon of people posting their CSS wish lists for the coming year. This year I’m doing all that again, less belatedly! (I didn’t do it last year because I couldn’t even. Get it?)
I started this post by looking at what I wished for a couple of years ago, and a small handful of my wishes came true:
color-mix()
:has() use
Note that by “came true”, I mean “reached at least Baseline Newly Available”, not “reached Baseline Universal”; that latter status comes over time. And more :has() use isn’t really a feature you can track, but I do see more people sharing cool :has() tricks and techniques these days, so I’ll take that as a positive signal.
A couple more of my 2023 wishes are on the cusp of coming true:
Those are both in the process of rolling out, and look set to reach Baseline Newly Available before the year is done. I hope.
That leaves the other half of the 2023 list, none of which has seen much movement. So those will be the basis of this year’s list, with some new additions.
WebKit has been the sole implementor of this very nice typographic touch for almost a decade now. The lack of any support by Blink and Gecko is now starting to verge on feeling faintly ridiculous.
Trim off the leading block margin on the first child in an element, or the trailing block margin of the last child, so they don’t stick out of the element and mess with margin collapsing. Same thing with block margins on the first and last line boxes in an element. And then, be able to do similar things with the inline margins of elements and line boxes! All these things could be ours.
We can already fake text stroking with text-shadow and paint-order, at least in SVG. I’d love to have a text-stroke property that can be applied to HTML, SVG, and MathML text. And XML text and any text that CSS is able to style. It should be at least as powerful as SVG stroking, if not more so.
attr() support
This has seen some movement specification-wise, but last I checked, no implementation promises or immediate plans. Here’s what I want to be able to do:
td {width: attr(data-size em, 1px);}
<td data-size="5">…</td>
The latest Values and Units module describes this, so fingers crossed it starts to gain some momentum.
Yes, I still want CSS Exclusions, a lot. They would make some layout hacks a lot less hacky, and open the door for really cool new hacks, by letting you just mark an element as creating a flow exclusion for the content of other elements. Position an image across two columns of text and set it to exclude, and the text of those columns will flow around or past it like it was a float. This remains one of the big missing pieces of CSS layout, in my view. Linked flow regions is another.
This one is a bit stalled because the basic approach still hasn’t been decided. Is it part of CSS Grid or its own display type? It’s a tough call. There are persuasive arguments for both. I myself keep flip-flopping on which one I prefer.
Designers want this. Implementors want this. In some ways, that’s what makes it so difficult to pick the final syntax and approach: because everyone wants this, everyone wants to make the exactly perfect right choices for now, for the future, and for ease of teaching new developers. That’s very, very hard.
Yeah, I still want a Grid equivalent of column-rule, except more full-featured and powerful. Ideally this would be combined with a way to select individual grid tracks, something like:
.gallery {display: grid;}
.gallery:col-track(4) {gap-rule: 2px solid red;}
…in order to just put a gap rule on that particular column. I say that would be ideal because then I could push for a way to set the gap value for individual tracks, something like:
.gallery {gap: 1em 2em;}
.gallery:row-track(2) {gap: 2em 0.5em;}
…to change the leading and trailing gaps on just that row.
This was listed as “Media query variables” in 2023. With these, you could define a breakpoint set like so:
@custom-media --sm (inline-size <= 25rem);
@custom-media --md (25rem < inline-size <= 50rem);
@custom-media --lg (50rem < inline-size);
body {margin-inline: auto;}
@media (--sm) {body {inline-size: auto;}}
@media (--md) {body {inline-size: min(90vw, 40em);}}
@media (--lg) {body {inline-size: min(90vw, 55em);}}
In other words, you can use custom media queries as much as you want throughout your CSS, but change their definitions in just one place. It’s CSS variables, but for media queries! Let’s do it.
Since we decided to abandon vendor prefixing in favor of feature flags, I want to see anything that’s still prefixed get unprefixed, in all browsers. Keep the support for the prefixed versions, sure, I don’t care, just let us write the property and value names without the prefixes, please and thank you.
I still would like a way to indicate when a shorthand property is meant for logical rather than physical directions, a way to apply a style sheet to a single element, the ability to add or subtract values from a shorthand without having to rewrite the whole thing, and styles that cross resource boundaries. They’re all in the 2023 post.
Okay, that’s my list. What’s yours?
Have something to say to all that? You can add a comment to the post, or email Eric directly.
I’ve been working on Chromium input method editor integration for Linux Wayland at Igalia over the past several months, and I thought I’d share some insights I’ve gained along the way and some highlights from my work.
This is the first in a series of blog posts about input method editors, or IMEs for short. Here I will try to explain what an IME really is at a high level before diving deeper into some of the technical details of IME support in Linux and Chromium in upcoming posts.
Update on what happened in WebKit in the week from December 23 to December 30.
Published an article on CSS Anchor Positioning. It discusses the current status of the support across browsers, Igalia's contributions to the WebKit's implementation, and the predictions for the future.
That’s all for this week!
CSS Anchor Positioning is a novel CSS specification module that allows positioned elements to size and position themselves relative to one or more anchor elements anywhere on the web page. In simpler terms, it is a new web platform API that simplifies advanced relative-positioning scenarios such as tooltips, menus, popups, etc.
To better understand the true power it brings, let’s consider a non-trivial layout presented in Figure 1:
In the past, creating a context menu with position: fixed and positioned relative to the button required doing positioning-related calculations manually. The more complex the layout, the more complex the situation. For example, if the table in the above example was in a scrollable container, the position of the context menu would have to be updated manually on every scroll event.
With CSS Anchor Positioning the solution to the above problem becomes trivial and requires 2 parts:
The <button> element must be marked as an anchor element by adding anchor-name: --some-name.
The context menu element must be positioned using the anchor() function: left: anchor(--some-name right); top: anchor(--some-name bottom).
The above is enough for the web engine to understand that the context menu element’s left and top must be positioned to the anchor element’s right and bottom.
With that, the web engine can carry out the job under the hood, so the result is as in Figure 2:
As the above demonstrates, even with a few simple API pieces, it’s now possible to address very complex scenarios in a very elegant fashion from the web developer’s perspective. Moreover, CSS Anchor Positioning offers even more than that. There are numerous articles with great examples such as this MDN article, this css-tricks article, or this chrome blog post, but the long story short is that both positioning and sizing elements relative to anchors are now very simple.
The first draft of the specification was published in early 2023, which in the web engines field is not so long ago. Therefore - as one can imagine - not all the major web engines support it yet. The first (and so far the only) web engine to support CSS Anchor Positioning was Chromium (see the introduction blog post) - hence the information on caniuse.com. However, despite the information visible on the WPT results page, the other web engines are currently implementing it (see the meta bug for Gecko and the bug list for WebKit). The lack of progress on the WPT results page is due to the feature not being enabled by default yet in those cases.
From the commits visible publicly, one can deduce that the work on CSS Anchor Positioning in WebKit has been started by Apple early 2024.
The implementation was initiated by adding a core part: support for anchor-name, position-anchor, and anchor(). Those 2 properties and one function are enough to start using the feature in real-world scenarios as well as in more sophisticated WPT tests.
The work on the above had been finished by the end of Q3 2024, and then - in Q4 2024 - the work significantly intensified. Parsing/computing support has been added for numerous properties and functions, and moreover, a lot of new functionality and bug fixes landed afterwards. One could expect some more things to land by the end of the year even if there’s not much time left.
Overall, the implementation is in progress and is far from being done, but can already be tested in many real-world scenarios.
This can be done using custom WebKit builds (across various OSes) or using Safari Technology Preview on Mac. The precondition for testing is, however, that the runtime preference called CSSAnchorPositioning is enabled.
Since CSS Anchor Positioning in WebKit is still work in progress, and since the demand for the set of features this module brings is high, I’ve been privileged to contribute a little to the implementation myself. My work so far has been focused around the parts of the API that allow creating menu-like elements that become visible on demand.
The first challenge with the above was to fix various problems related to toggling visibility status, such as crashes and broken layouts.
The obvious first step towards addressing the above was to isolate elegant scenarios that reproduce those problems. In the process, I’ve created some test cases and added them to WPT. With the tests in place, I’ve imported them into WebKit’s source tree and proceeded with actual bug fixing. The result was the fix for the above crash, and the fix for the layout being broken. With that in place, the visibility of menu-like elements can be changed without any problems now.
The second challenge was about the missing features allowing automatic alignment to the anchor. In a nutshell, to get an alignment like the one in Figure 3, there are 2 possibilities:
The position-area CSS property can be used: position-area: bottom center;
The anchor-center value of justify-self can be used: justify-self: anchor-center;
At first, I wasn’t aware of anchor-center and hence I’ve started initial work towards supporting position-area. Once I became aware, however, I’ve switched my focus to implementing anchor-center and left the above for Apple to continue, so as not to block them.
Until now, both the initial and core parts of the anchor-center implementation have landed. It means the basic support is in place.
Despite the anchor-center layout tests passing, I’ve already discovered some problems, and I anticipate more may appear once the testing intensifies.
To address the above, I’ll be focusing on adding extra WPT coverage along with fixing the problems one by one. The key is to make sure that at the end of the day, all the unexpected problems are covered with WPT test cases. This way, other web engines will also benefit.
With WebKit’s implementation of CSS Anchor Positioning in its current shape, the work can be very much parallel. Assuming that Apple keeps working on it at the same pace as they did for the past few months, I wouldn’t be surprised if CSS Anchor Positioning were pretty much done by the end of 2025. If the implementation in Gecko doesn’t stall, I think one can also expect a lot of activity around testing in the WPT. With that, the quality of implementation across the web engines should improve, and eventually (perhaps in 2026?) CSS Anchor Positioning should reach the state of full interoperability.
Update on what happened in WebKit in the week from December 16 to December 23.
Improved logging in WebDriver service executables, using the same infrastructure as the browser (e.g. journald logging and different levels).
Added support for the first WebDriver-BiDi event in WebKit: monitoring console messages.
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
JavaScriptCore got a fix for a wasm test that was flaky on 32-bits. We also submitted a new PR to fix handling of Air (Air is an intermediate representation) Args with offsets that are not directly addressable in the O0 register allocator.
Since the switch to Skia we have been closely following upstream changes, and making small contributions when needed. After adding support for OpenType-SVG fonts the build with Clang was broken, and needed a fix to allow building Skia in C++23 mode (as we do in WebKit). The Skia update for this week resulted in a fix to avoid SK_NO_SANITIZE("cfi") when using GCC.
Stable releases of WebKitGTK 2.46.5 and WPE WebKit 2.46.5 are now available. While they include some minor fixes, these are focused on patches for security issues, and come accompanied with a new security advisory (WSA-2024-0008: GTK, WPE). As usual, it is recommended to stay up to date, and fresh packages have been already making their way to mainstream Linux distributions.
That’s all for this week!
Excited to get started on my blogging journey!
I’ve been planning to get this going for a while, and finally got around to it during the Christmas break :) .
Here’s a little bit about me…
I started working at Igalia a little over a year ago, after having decided I really wanted to work in open source software; contributing to the Chromium project was a natural choice, having worked on it previously. Igalia as a company doesn’t need any introduction in the open source community, and so I ended up here and have the privilege of working with some amazing people and getting to learn a lot.
2024 has been an exciting year for the Igalia’s Graphics Team. We’ve been making a lot of progress on Turnip, AMD display driver, the Raspberry Pi graphics stack, Vulkan video, and more.
Igalia’s Ricardo Garcia has been working hard on adding support for the new VK_EXT_device_generated_commands extension in the Vulkan Conformance Test Suite. He wrote an excellent blog post on the extension and on his work that you can read here. Ricardo also presented the extension at XDC 2024 in Montréal, which he also blogged about. Take a look and see what generating Vulkan commands directly on the GPU looks like!
Our very own Maíra Canal made a big contribution to improve the graphics performance of Raspberry Pi 4 & 5 devices by introducing support for “Super Pages”. She wrote an excellent and detailed blog post on what Super Pages are, how they improve performance, and comparing performance of different apps and games. You can read all the juicy details here.
She also worked on introducing CPU jobs to the Broadcom GPU kernel driver in Linux. These changes allow user space to implement jobs that get executed on the CPU in sync with the work on the GPU. She wrote a great blog post detailing what CPU jobs allow you to do and how they work that you can read here.
Christian Gmeiner on the Graphics team has also been working on adding Perfetto support to Broadcom GPUs. Perfetto is a performance tracing tool and support for it in Broadcom drivers will allow developers to gain more insight into bottlenecks of their GPU applications. You can check out his changes to add support in the following MRs: MR 31575, MR 32277, MR 31751.
The Raspberry Pi team here at Igalia presented all of their work at XDC 2024 in Montréal. You can see a video below.
A number of Igalians made several contributions to the Linux 6.8 kernel release back in March of this year. Our colleague Maíra wrote a great blog post outlining these contributions that you can read here. To highlight some of these contributions:
Dhruv Mark Collins has been very hard at work to try and bring performance parity between Qualcomm’s proprietary driver and the open source Turnip driver. Two of his big contributions to this were improving the 2D buffer to image copies on A7XX devices, and implementing unidirectional Low Resolution Z (LRZ) on A7XX devices. You can see the MR for these changes here and here.
A new member of the Igalia Graphics Team, Karmjit Mahil, has been working on different parts of the Turnip stack, but one notable improvement he made was to improve fmulz handling for Direct3D 9. You can check out his changes here and read more about them.
Danylo Piliaiev has been hard at work adding support for the latest generation of Adreno GPUs. This included getting support for the A750 working, and then implementing performance improvements to bring it up to parity with other Adreno GPUs in Turnip. Altogether the Turnip team implemented a number of Vulkan extensions and performance improvements.
Igalia hosted the 2024 version of the Display Next Hackfest. This community event is a way to get Linux display developers together to work on improving the Linux display stack. Our Melissa Wen wrote a blog post about the event and what it was like to organize it. You can read all about it here.
Just in-case you thought you couldn’t get enough Linux display stack, Melissa also helped organize a Display/KMS meet-up at XDC 2024. She wrote all about that meet-up and the progress the community made on her blog here.
Melissa Wen has also been hard at work improving AMDGPU’s display driver. She made a number of changes including improving the display debug log to include hardware color capabilities, migrating EDID handling to the EDID common code, and various bug fixes.
Tvrtko Ursulin, a recent addition to our team, has been working on fixing issues in AMDGPU and some of the Linux kernel’s common code. For example, he worked on fixing bugs in the DRM scheduler around missing locks, optimizing the re-lock cycle on the submit path, and cleaned up the code. On AMDGPU he worked on improving memory usage reporting, fixing out of bounds writes, and micro-optimized ring emissions. For DMA fence he simplified fence merging and resolved a potential memory leak. Lastly, on workqueue he fixed false positive sanity check warnings that AMDGPU & DRM scheduler interactions were triggering. You can see the code for some of changes below: - https://lore.kernel.org/amd-gfx/20240906180639.12218-1-tursulin@igalia.com/ - https://lore.kernel.org/amd-gfx/20241008150532.23661-1-tursulin@igalia.com/ - https://lore.kernel.org/amd-gfx/20241227111938.22974-1-tursulin@igalia.com/ - https://lore.kernel.org/amd-gfx/20240813135712.82611-1-tursulin@igalia.com/ - https://lore.kernel.org/amd-gfx/20240712152855.45284-1-tursulin@igalia.com/
GL_EXT_texture_offset_non_const
VK_KHR_video_encode_av1 & VK_KHR_video_decode_av1
Christian Gmeiner, one of the maintainers of the Etnaviv driver for Vivante GPUs, has been hard at work this year to make a number of big improvements to Etnaviv. This includes using hwdb to detect GPU features, which he wrote about here. Another big improvement was migrating Etnaviv to use isaspec for the GPU isa description, allowing an assembler and disassembler to be generated from XML. This also allowed Etnaviv to reuse some common features in Mesa for assemblers/disassemblers and take advantage of the python code generation features others in the community have been working on. He wrote a detailed blog about it, that you can find here. On the same vein of Etnaviv infrastructure improvements, Christian has also been working on a new shader compiler, written in Rust, called “EBC”. Christian presented this new shader compiler at XDC 2024 this year. You can check out his presentation below.
On the side of new features, Christian landed a big one in Mesa 24.03 for Etnaviv: Multiple Render Target (MRT) support! This allows games and applications to render to multiple render targets (think framebuffers) in a single graphics operation. This feature is heavily used by deferred rendering techniques, and is a requirement for later versions of desktop OpenGL and OpenGL ES 3. Keep an eye on Christian’s blog to see any of his future announcements.
I had a busy year working on improving Lavapipe/LLVMpipe platform integration. This started with adding support for DMABUF import/export, so that the display handles from Android Window system could be properly imported and mapped. Next came Android window system integration for DRI software rendering backend in EGL, and lastly but most importantly came updating the documentation in Mesa for building Android support. I wrote all about this effort here.
The latter half of the year had me working on improving Lavapipe’s integration with ChromeOS, and having Lavapipe work as a host Vulkan driver for Venus. You can see some of the changes I made in virglrenderer here and crosvm here. This work is still ongoing.
We’re not planning to stop our 2024 momentum, and we’re hoping for 2025 to be a great year for Igalia and the Linux graphics stack! I’m booked to present about Lavapipe at Vulkanised 2025, where Ricardo will also present about Device-Generated Commands. Maíra & Chema will be presenting together at FOSDEM 2025 about improving performance on Raspberry Pi GPUs, and Melissa will also present about kworkflow there. We’ll also be at XDC 2025, networking and presenting about all the work we are doing on the Linux graphics stack. Thanks for following our work this year, and here’s to making 2025 an even better year for Linux graphics!
As 2024 draws to a close, it’s a perfect time to reflect on the year’s accomplishments of the Multimedia team at Igalia. In our view, there were three major achievements:
WPE and WebKitGTK are WebKit ports maintained by Igalia, the former for embedded devices and the latter for applications with a full-featured Web integration.
WebRTC is a web API that allows real-time communication (RTC) directly between web browsers and applications. Examples of these real-time communications are video conferencing, cloud gaming, live-streaming, etc.
Some WebKit ports support libwebrtc, an open-source library that implements the WebRTC specification, developed and maintained by Google. WPE and WebKitGTK originally supported libwebrtc as well, but we have also started using GstWebRTC, a set of GStreamer plugins and libraries that implement WebRTC and adapt perfectly to the multimedia implementation in both ports, which is also based on GStreamer.
This year the fruits of this work have been unlocked by enabling Amazon Luna gaming:
https://www.youtube.com/watch?v=lyO7Hqj1jMs
And also enabling a CAD modeling, server-side rendered service, known as Zoo:
https://www.youtube.com/watch?v=CiuYjSCDsUM
WebKit made significant improvements in multimedia handling, addressing various issues to enhance stability and playback quality. Key updates include preventing premature play() calls during seeking and fixing memory leaks. The management of track identifiers was also streamlined by transitioning from string-based to integer-based IDs. Additionally, GStreamer-related race conditions were resolved to prevent hangs during playback state transitions. Memory leaks in WebAudio and event listener management were addressed, along with a focus on memory usage optimizations.
The handling of media buffering and seeking was enhanced with buffering hysteresis for smoother playback. Media Source Extensions (MSE) behavior was refined to improve playback accuracy, such as supporting markEndOfStream() before appendBuffer() and simplifying playback checks. Platform-specific issues were also tackled, including AV1 and Opus support for encrypted media and better detection of audio sinks, along with other improvements to multimedia performance and efficiency.
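As an aside, “buffering hysteresis” just means that playback keeps going until the buffered data drops below a low watermark and only resumes once it refills past a higher one, so it does not flap around a single threshold. The sketch below is a generic illustration with made-up values, not WebKit’s actual code:

#include <stdbool.h>

/* Illustrative watermark values only -- not taken from WebKit. */
#define LOW_WATERMARK_SECONDS   2.0
#define HIGH_WATERMARK_SECONDS  5.0

/* Decide whether playback should run, given how many seconds of media
 * are buffered and whether we are currently playing. */
static bool should_play(double buffered_seconds, bool currently_playing)
{
    if (currently_playing)
        return buffered_seconds > LOW_WATERMARK_SECONDS;  /* keep playing until nearly drained */
    return buffered_seconds >= HIGH_WATERMARK_SECONDS;    /* only resume once comfortably refilled */
}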
GStreamer Editing Services (GES) is a set of GStreamer plugins and a library that allow non-linear video editing. For example, GES is what’s behind Pitivi, the open source video editor application.
Last year, GES was deployed in web-based video editors, where the actual video processing is done server-side. These projects contributed, in great deal, to the enhancement and maturation of the library and plugins.
Tella is a browser-based tool for recording your screen and webcam, without any extra software. Once the recording is finished, the user can edit the video in the browser and publish it.
https://www.youtube.com/watch?v=uSWqWHBRDWE
Sequence is a complete, browser-based, video editor with collaborative features. GES is used in the backend to render the editing operations.
https://www.youtube.com/watch?v=bXNdDIiG9lE
Last but not least, this year we continued our work in the Vulkan Video ecosystem, working with the task subgroup (TSG) on enabling H.264/H.265 encoding, and AV1 decoding and encoding.
Earlier this year we delivered a talk at Vulkanised about our work, which spans the Conformance Test Suite (CTS), Mesa, and GStreamer.
https://www.youtube.com/watch?v=z1HcWrmdwzI
As we wrap up 2024, it’s clear that the year has been one of significant progress, driven by innovation and collaboration. Here’s to continuing the momentum and making 2025 even better!
Update on what happened in WebKit in the week from December 9 to December 16.
Shipped the X25519 algorithm of the WebCrypto API for the Mac, GTK+ and WPE ports.
Fixed corner case invoker issue with popover inside invoker, matching the updated spec.
Form controls have long-standing interoperability issues and <textarea> is no exception. This patch fixes space being reserved for scrollbars despite overlay scrollbars being enabled. This brings WebKit in line with Firefox's behaviour.
Implemented improvements to the popover API to allow imperative invoker relationships; this brings the JavaScript APIs in line with the declarative popovertarget attribute.
Implemented CanvasRenderingContext2D letterSpacing/wordSpacing support.
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
Due to ongoing work on improving memory usage in WebRTC use cases, several patches landed in WebKit (1, 2, 3) and GStreamer (4). Another related task is under review in libnice.
Several WebCodecs GStreamer backend fixes landed, mostly related to Opus and LPCM decoding support.
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
JavaScriptCore now has Wasm tail call support on ARMv7.
OpenType color fonts with SVG outlines stopped working with the transition from Cairo to Skia. This was unintentional, and support for this kind of font has been re-enabled for Skia.
Building the OpenType-SVG support required building Skia's SVG module, which uses Expat as its XML parser. Packagers will need to add it as a build dependency, or configure the compilation passing -DUSE_SKIA_OPENTYPE_SVG=OFF, which disables the feature.
That’s all for this week!
Hello, world!
Greetings from the far north of Brazil! My name is Nick Diego Yamane. I’m originally from Maués, a small town that sits in the heart of the Amazon rainforest. Since 2002 I have lived in Manaus, from where I work remotely for Igalia, a Spanish, flat, worker-owned consultancy specialized in core open source technologies. I have been part of its amazing Chromium team since 2018, along with other talented people spread around the world.
by nickdiego@igalia.com (Nick Yamane) at December 10, 2024 11:40 PM
Update on what happened in WebKit in the week from December 3 to December 10.
Improved interoperability somewhat by including font-weight in the CanvasRenderingContext2D.font serialization.
Support for cross-thread transfer of accelerated ImageBitmap objects landed upstream for the GTK and WPE ports. It improves performance of applications that use worker threads and pass accelerated ImageBitmap objects (with ownership) around.
That’s all for this week!
I’m happy to announce that the decimal proposal—a proposed extension of JavaScript to support decimal numbers—is now available as an NPM package called proposal-decimal!
(Actually, it has been available for some time, made available not long after we decided to pursue IEEE 754 Decimal128 as a data model for the decimal proposal rather than some alternatives. The old package was—and still is—available under a different name—decimal128—but I’ll be sunsetting that package in favor of the new one announced here. If you’ve been using decimal128, you can continue to use it, but you’ll probably want to switch to proposal-decimal.)
To use proposal-decimal in your project, install the NPM package. If you’re looking to use this code in Node.js or other JS engines that support ESM, you'll want to import the code like this:
import { Decimal128 } from 'proposal-decimal';
const x = new Decimal128("0.1");
// etc.
For use in a browser, the file dist/Decimal128.mjs contains the Decimal128 class and all its internal dependencies in a single file. Use it like this:
<script type="module">
import { Decimal128 } from 'path/to/Decimal128.mjs';
const x = new Decimal128("0.1");
// keep rocking decimals!
</script>
The intention of this polyfill is to track the spec text for the decimal proposal. I cannot recommend this package for production use just yet, but it is usable and I’d love to hear any experience reports you may have. We’re aiming to be as faithful as possible to the spec, so we don’t aim to be blazingly fast. That said, please do report any wild deviations in performance compared to other decimal libraries for JS as an issue. Any crashes or incorrect results should likewise be reported as an issue.
Enjoy!
Suppose you have some files you want to directly commit to a branch in your current git repository, doing so without perturbing your current branch. Why would you want to do that? My current motivating use case is to commit all my draft muxup.com posts to a separate branch so I can get some tracking and backups without needing to add WIP work to the public repo. But I also use essentially the same approach to make a throw-away commit of the current repo state (including any non-staged or non-committed changes) to be pushed to a remote machine for building.
Our goal is to create a commit, so a sensible starting point is to break down what's involved. Referring to Git documentation we can break down the different object types that we need to put together a commit:
Although it's possible to build a tree object semi-manually using git hash-object to create blobs and git mktree for trees, fortunately this isn't necessary. Using a throwaway git index file allows us to rely on git to create the tree object for us after indicating the files to be included. The basic approach is:
- Set the GIT_INDEX_FILE environment variable to a throwaway/temporary name.
- Add the files to be committed to that index (git update-index is the 'plumbing' way of doing this, but git add can work just fine as well).
- Create a tree object from the index using git write-tree.
- Create a commit referencing that tree (git commit-tree), and use git update-ref to update the branch ref to point to the new commit.
Here is how I implemented this in the site generator I use for muxup.com:
import atexit
import os
import pathlib
import subprocess
from typing import Any

def commit_untracked() -> None:
    def exec(*args: Any, **kwargs: Any) -> tuple[str, int]:
        kwargs.setdefault("encoding", "utf-8")
        kwargs.setdefault("capture_output", True)
        kwargs.setdefault("check", True)
        result = subprocess.run(*args, **kwargs)
        return result.stdout.rstrip("\n"), result.returncode
    result, _ = exec(["git", "status", "-uall", "--porcelain", "-z"])
    untracked_files = []
    entries = result.split("\0")
    for entry in entries:
        if entry.startswith("??"):
            untracked_files.append(entry[3:])
    if len(untracked_files) == 0:
        print("No untracked files to commit.")
        return
    bak_branch = "refs/heads/bak"
    show_ref_result, returncode = exec(
        ["git", "show-ref", "--verify", bak_branch], check=False
    )
    if returncode != 0:
        print(f"Branch {bak_branch} doesn't yet exist - it will be created")
        parent_commit = ""
        parent_commit_tree = None
        commit_message = "Initial commit of untracked files"
        extra_write_tree_args = []
    else:
        parent_commit = show_ref_result.split()[0]
        parent_commit_tree, _ = exec(["git", "rev-parse", f"{parent_commit}^{{tree}}"])
        commit_message = "Update untracked files"
        extra_write_tree_args = ["-p", parent_commit]
    # Use a temporary index in order to create a commit. Add any untracked
    # files to the index, create a tree object based on the index state, and
    # finally create a commit using that tree object.
    temp_index = pathlib.Path(".drafts.gitindex.tmp")
    atexit.register(lambda: temp_index.unlink(missing_ok=True))
    git_env = os.environ.copy()
    git_env["GIT_INDEX_FILE"] = str(temp_index)
    nul_terminated_untracked_files = "\0".join(file for file in untracked_files)
    exec(
        ["git", "update-index", "--add", "-z", "--stdin"],
        input=nul_terminated_untracked_files,
        env=git_env,
    )
    tree_sha, _ = exec(["git", "write-tree"], env=git_env)
    if tree_sha == parent_commit_tree:
        print("Untracked files are unchanged vs last commit - nothing to do.")
        return
    commit_sha, _ = exec(
        ["git", "commit-tree", tree_sha] + extra_write_tree_args,
        input=commit_message,
    )
    exec(["git", "update-ref", bak_branch, commit_sha])
    diff_stat, _ = exec(["git", "show", "--stat", "--format=", commit_sha])
    print(f"Backup branch '{bak_branch}' updated successfully.")
    print(f"Created commit {commit_sha} with the following modifications:")
    print(diff_stat)
For my particular use case, creating a commit containing only the untracked files is what I want. I'm happy to lose the ability to precisely recreate the repository state for the combination of tracked and untracked files in return for avoiding noise in the changes for the bak branch that would otherwise be present from changes to tracked files. Using paths separated by NUL via stdin is overkill here, but as it doesn't increase the complexity of the code much, I've opted for the most universal approach in case I copy the logic to other projects.
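For quick one-off use outside of a site generator, the same flow can also be driven directly from the shell. A minimal sketch, assuming the refs/heads/bak branch already exists and using a placeholder file path:
# Point git at a throwaway index so the real index and current branch are untouched.
export GIT_INDEX_FILE="$(mktemp -u)"
git add drafts/example.md                  # placeholder path for the file(s) to back up
tree=$(git write-tree)                     # tree object built from the throwaway index
commit=$(git commit-tree "$tree" -p refs/heads/bak -m "Update untracked files")
git update-ref refs/heads/bak "$commit"    # move the bak branch to the new commit
rm -f "$GIT_INDEX_FILE"; unset GIT_INDEX_FILE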
This is based on this previous blog post by Alicia, and I recommend taking a look - many things mentioned in it are still useful here.
Reading a symbol’s documentation in a popup
The most straightforward option would have been to just install another instance of my IDE inside the container. However, I use NixOS + Home Manager to manage and configure my packages declaratively, so the Ubuntu-based container environment would be a quite frustrating difference:
Package versions will be lagging behind, and sooner or later I will have to deal with differences in configuration, features, or bugs. For example, at the time of writing, neovim is packaged in Ubuntu 24.10 at version 0.9.5, while nixpkgs ships 0.10.2. (To be fair, Flathub and Snapcraft would be up-to-date as well, but I have my gripes with those too.)
Either way, I now have a new set of configurations to manage and keep in sync with their canonical versions on the host system.
Any other tools I don’t install in the container, I won’t have access to - for example, for running commands from inside my IDE.
Overall, this will waste time and disk space better used for other things. So, after trying out a few different approaches, a clangd wrapper script that bridges the disconnect between my host system and the container was the first satisfying solution I found.
Conveniently, this fits well with my approach of writing wrappers around wkdev scripts to expose as much functionality as possible to my host system, to avoid manually entering the container - in effect abstracting it out of sight.
This is roughly the script I currently use. I personally prefer nushell, but I will go into details below so you can write your own version in whatever language you prefer.
The idea is to start clangd inside the container, and use socat to expose its stdin/out to the IDE over TCP. That is to avoid this podman issue I ran into if I tried using stdin.
#!/usr/bin/env -S nu --stdin
def main [
--name (-n): string = "wkdev-sdk"
--show-config
] {
# picking a random port for the connection avoids colliding with itself in case an earlier instance of this script is still around
let port = random int 2000..5000
let workdir = $"/host(pwd)"
# the container SDK mounts your home directory to `/host/home/...`,
# so as long as the WebKit checkout is somewhere within your $HOME,
# mapping paths is as easy as just prepending `/host`
let mappings_table = ["Source" "WebKitBuild/GTK/Debug" "WebKitBuild/GTK/Release"]
| each {|path| {host: $"($env.WEBKIT_DIR)/($path)" container: $"/host($env.WEBKIT_DIR)/($path)"}}
let mappings = $mappings_table
| each {|it| $"($it.host)=($it.container)" }
| str join ","
let podman_args = [
exec
--detach
--user
1000:100
$name
]
let clangd_args = [
$"--path-mappings=($mappings)"
--header-insertion=never # clangd has the tendency to insert unnecessary includes, so I prefer to just disable the feature.
--limit-results=5000 # The default limit for reference search results is too low for WebKit
--background-index
--enable-config # Enable reading .clangd file
-j 8
]
# Show results of above configuration when called with --show-config, particularly helpful for debugging
if $show_config {
{
port: $port
work_dir: $workdir
mappings: $mappings_table
podman_args: $podman_args
clangd_args: $clangd_args
}
} else {
# ensure that the container is running
podman start $name | ignore
# container side
( podman ...$podman_args /usr/bin/env $'--chdir=($workdir)' socat
$"tcp-l:($port),fork"
$"exec:'clangd ($clangd_args | str join (char space))'"
) | ignore
# host side
nc localhost $port
}
}
IDE setup is largely the same as it would usually be, aside from pointing the clangd path at our wrapper script instead.
I use helix, where I just need to add a .helix/languages.toml to the WebKit checkout directory:
[language-server.clangd]
command = "/path/to/clangd_wrapper"
In VS Code, you need the clangd extension, then you can enter the absolute path to the script under File > Preferences > Settings > Extensions/clangd > Clangd: Path, ideally in the Workspace tab so the setting only applies to WebKit.
Clangd will require two things to be set up at the root of your WebKit checkout:
First, create a compile_commands.json symlink for the build you will use, for example to WebKitBuild/GTK/Debug/compile_commands.json.
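For example, assuming you are using the GTK Debug build mentioned above, the symlink can be created from the root of the checkout like this:
ln -sf WebKitBuild/GTK/Debug/compile_commands.json compile_commands.json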
Secondly, a .clangd file (which is what we needed the --enable-config flag for) at the root of the WebKit checkout:
If:
  PathMatch: "(/host/home/vivienne/dev/work/metro/wk/up)?Source/.*\\.h"
  PathExclude: "(/host/home/vivienne/dev/work/metro/wk/up)?Source/ThirdParty/.*"
CompileFlags:
  Add: [-include, config.h]
I created both files manually, but as of [cmake] Auto-complete via clangd auto-setup, there seem to be new scripts to help with setting up and updating both files. (Thanks Alicia!) I haven’t tried it so far, but I recommend you take a look yourself.
Searching for a field using the symbol picker
Overall, I’m very satisfied with the results; so far everything is working like I expected it to. Finally having a working language server brought me the usual benefits: I mostly got rid of the manual compile-fix cycles that introduced so much friction and waiting time, and trivial mistakes and typos are much less of a headache now. But the biggest improvement, to me, is Goto definition/references and the symbol picker, making it easier to grasp how things interact. Much better than using grep over and over!
As I was fighting clangd/podman, I also came across some other options that I didn’t try, but might be interesting to look at:
VSCode dev containers
Probably the most polished option, though it is exclusive to VSCode - from what I understand, the extension isn’t even available to forks for licensing reasons.
Distant
Its main purpose is to act as a tool for working remotely, but I don’t see why it couldn’t be used with a container. It is still in alpha, and so far only has support in Neovim.
I can’t tell how well it would play with LSP, but it might be worth a shot if you already use Neovim.
As I have spent the last 6 months starting to explore the rather precarious position we've placed the web ecosystem in with regard to how browsers are maintained and funded, I thought I'd dive into another angle: the ways that web platform features get prioritized and built.
I worked on Microsoft Edge, so I have direct experience working on a browser team. My current work is at Igalia, which is an open source consultancy that is hired by companies to work on many things across technology. My team, the web platform team, implements web platform features and APIs, and works on their specifications. Yes. You can pay for browser features to be built and for specifications to be written/updated/continued. We'll talk about both.
Browser vendors are the companies that develop, maintain and distribute a web browser. Some browser vendors are also stewards of a whole engine/browser (Blink, WebKit and Gecko). Google, Apple, Mozilla, Microsoft, Opera, Vivaldi, etc. are all browser vendors. Google, Apple and Mozilla are engine stewards.
There are many teams that make up an entire browser team. A browser is so much more than just the web platform too. There is quite a lot of thought and design that goes into even the smallest user experience updates.
General consumer facing features, which typically have a UI component, tend to often get prioritized over more "hidden" web platform features for developers. The general consumer base is larger than the developer base. The goal is more market share (more people using your browser) which helps bring money to the browser vendor.
The web platform team works on browser features and specifications and making sure the implementation matches the spec, so you don't get different behavior in different browsers. But they're also there to enable what we'll refer to as "first party" needs.
First party refers to groups within the same company as the browser vendor. Microsoft Office/Microsoft 365 is an example of a first party within Microsoft with web platform needs. Subsequently, their needs for the web will get prioritized.
Surface Duo is another example. I spent a lot of time talking about the web platform primitives and design considerations for dual screen devices. Having layout capabilities that adapted to this new form factor was incredibly important so the specification and implementing those features were also prioritized.
In my experience, first party development is typically prioritized above all else as you're enabling/enhancing another product in the company. Especially if those products are money-makers. These are also broader company strategic initiatives and very visible ways to make impact.
Come yearly review time, these things matter for compensation and bonuses. It is all deeply intertwined in company politics. These are things that make the company money and make the business case for having a browser. Bill Gates' The Internet Tidal Wave memo from 1995 even points out how access to the internet through PCs is vital for the business. Enhancing user experience and moving the web forward will be what wins consumers over.
Another scenario is when an external, or third party, company has needs for the web platform. My experience with this while working at a browser vendor company is more limited. Third party can also mean general web developers. It was much harder to get the needs of the general web development community prioritized when first party often takes priority.
I truthfully can't remember whether it was/is possible for third parties to ask Microsoft to work on enabling certain features. I mean, of course you can ask, but I'm not sure how often an agreement would be made. With a relatively small team working on enabling web platform features, this probably wasn't/isn't a common scenario unless there's some big underlying strategic initiative that would benefit the browser and company. This type of contract could entirely have been outside my sphere of work.
The trouble with being a third party is that it's not as easy to align priorities or business cases. In fact, you might even be a competitor. Regardless, since resources are finite, it's likely that it's difficult to convince vendors to pay attention to your specific needs. At the end of the day, practically speaking, funding is required to advance features, fix bugs, etc.
Guess what? Yes! You can hire experts to implement web browser features and/or give you that attention and priority! Do you need a new feature implemented or spec'd? A consultancy can help. Do you need bugs that are affecting your organization fixed? A consultancy can help.
If you have a need for a web platform feature, there are consultancies available for hire to help write and edit the specifications, work with standards groups, write web platform tests and get that feature shipped (or ready to be shipped).
I work for Igalia, and you can hire us for many things across many technologies and areas including web platform development.
In fact, we've been pivotal in moving forward a whole lot of things in the Web Platform, including features like CSS Grid, :has, container queries, MathML, classes in JavaScript, scroll snap, list-style-type: <string>...the list does go on and on. We work on lots of specifications and implementations for the web platform.
Instead of waiting or relying on a browser vendor to implement the features you need, which could potentially be years or even possibly never, you can hire experts like Igalia to do this work.
The most obvious answer is: It works. We've helped a lot of happy customers do amazing things.
Aside from needing features more quickly, hiring a consultancy like Igalia has advantages. We are experts in these processes and the dynamics of working in standards bodies, and our strength comes not only from our technical expertise, but from our ability to navigate between the three main browser vendors with web engines to ensure feature design is agreed upon. This is a lot of work, and oftentimes it can be slow because there are only a handful of people at browser vendor companies who are responsible for reviewing patches, proposed features, design documents, etc.
Let's say you are a customer with a web platform need. You most likely have a backlog of work for your engineering team. There could be a few different scenarios that prevent you from internally prioritizing the web platform need: No one on the engineering team has the technical background for the type of work you need done, someone might have the technical background but not enough time to manage the entire process of spec writing, test writing and implementation, or maybe the team just doesn't have the capacity based on broader company priorities and product roadmaps.
When you hire a consultancy to do this work, then your product engineering team can spend time on the product work and roadmap while we work on the spec, implementation and coordinating among browser vendors. This stuff takes a lot of time, because it's the nature of the work, and it's our area of expertise.
There have also been instances where specific features have been funded by the community or donors, primarily driven by a want for better support and not by a business need, even though there most likely are business needs for such feature improvements out there somewhere.
The MathML work Igalia has been doing is an example of that. Igalia also ran an open prioritization experiment where the community collectively selected and funded a feature.
Sometimes there are really vital features the web needs, but for whatever reason, they're not a priority. With that being said, if anyone's interested in helping to advance and improve SVG, drop Igalia an email. We'd love to work on it.
I have encountered many people since starting at Igalia earlier this year, who didn't know you could hire someone to build a browser feature, or work on a specification, or fix browser bugs. You can even hire us to work on improving a novel web browser engine (say hello to Servo), because you might need a web browser solution that is more lightweight than the major open source options.
Or maybe you need a browser for your Extended Reality/Virtual Reality device. With 50% of Meta Quest users spending time in the browser, it would be a missed opportunity to not offer the same on your device. This is where we come in with Wolvic. It's designed with browsing in XR in mind, and you don't have to build a browser from the ground up.
There are so many benefits to hiring someone to work on The Web in whatever way you may need. It also means the web platform can advance more quickly (in browser timescales anyway) because more people outside of browser vendors are working on things.
And that's good for the overall health of the web ecosystem.
Note: Thank you to my colleague Brian Kardell for reviewing & editing this post, which had been taking up a lot of space in my mind for a long while.
Editing text in HTML canvas has never been easy. It requires identifying which character is under the hit point in order to place a caret, and it requires computing bounds for a range of text that is selected. The existing implementations of Canvas TextMetrics made these things possible, but not without a lot of JavaScript making multiple expensive calls to compute metrics for substrings.
Three new additions to the TextMetrics API are intended to support editing use cases in Canvas text. They are in the standards pipeline, and implemented in Chromium-based browsers behind the ExtendedTextMetrics flag:
- getIndexFromOffset gives the location in a string corresponding to a pixel length along the string. Use it to identify where the caret is in the string, and what the bounds of a selection range are.
- getSelectionRects returns the rectangles that a browser would use to highlight a range of text. Use it to draw the selection highlight.
- getActualBoundingBox returns the bounding box for a sub-range of text within a string. Use it if you need to know whether a point lies within a substring, rather than the entire string.
To enable the flag, use --enable-blink-features=ExtendedTextMetrics when launching Chrome from a script or command line, or enable “Experimental Web Platform features” via chrome://flags/#enable-experimental-web-platform-features.
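For instance, a launch from a shell could look like the following (a sketch; substitute whichever Chrome or Chromium binary you use):
google-chrome --enable-blink-features=ExtendedTextMetrics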
I wrote a basic web app in order to demonstrate the use of these features. It will function in Chrome versions beyond 128.0.6587.0 (Canary at the time of writing) with the above flags set. Some functionality is available in Safari Preview, and it’s growing all the time.
The app allows the editing of a single line of text drawn in an HTML canvas. Here I’ll work through usage of the new features.
In the demo, the first instance of “new Canvas Text Metrics” is considered a link back to this blog page. Canvas Text has no notion of links, and thousands of people have looked at Stack Exchange for a way to insert hyperlinks in canvas text. Part of the problem, assuming you know where the link is in the text, is determining when the link was clicked on. The TextMetrics getActualBoundingBox(start, end) method is intended to simplify the problem by returning the bounding box of a substring of the text, in this case the link.
onStringChanged() {
  text_metrics = context.measureText(string);
  link_start_position = string.indexOf(link_text);
  if (link_start_position != -1) {
    link_end_position = link_start_position + link_text.length;
  }
}
...
linkHit(x, y) {
  let bound_rect = undefined;
  try {
    bound_rect = text_metrics.getActualBoundingBox(link_start_position, link_end_position);
  } catch (error) {
    return false;
  }
  let relative_x = x - string_x;
  let relative_y = y - string_y;
  return relative_x >= bound_rect.left && relative_y >= bound_rect.top
      && relative_x < bound_rect.right && relative_y < bound_rect.bottom;
}
The first function finds the link in the string and stores the start and end string offsets. When a click event happens, the second method is called to determine if the hit point was within the link area. The text metrics object is queried for the bounding box of the link’s substring. Note the call is contained within a try...catch block because an exception will be thrown if the substring is invalid. The event offset is mapped into the coordinate system of the text (in this case by subtracting the text location) and the resulting point is tested against the rectangle.
In more general situations you may need to use a regular expression to find links, and keep track of a more complex transformation chain to convert event locations into the text string’s coordinate system.
A primary concept of any editing application is the caret location because it indicates where typed text will appear, or what will be deleted by backspace, or where an insertion will happen. Mapping a hit point in the canvas into the caret position in the text string is a fundamental editing operation. It is possible to do this with existing methods but it is expensive (you can do a binary search using the width of substrings).
The TextMetrics getIndexFromOffset(offset) method uses existing code in browsers to efficiently map a point to a string position. The underlying functionality is very similar to the document.caretPositionFromPoint(x,y) method, but modified for the canvas situation. The demo code uses it to position the caret and to identify the selection range.
text_offset = event.offsetX - string_x;
caret_position = text_metrics.getIndexFromOffset(text_offset);
The getIndexFromOffset function takes the horizontal offset, in pixels, measured from the origin of the text (based on the textAlign property of the canvas context). The function finds the character boundary closest to the given offset, then returns the character index to the right for left-to-right text, and to the left for right-to-left text. The offset can be negative to allow characters to the left of the origin to be mapped.
In the figure below, the top string has textDirection = "ltr" and textAlign = "center". The origin for measuring offsets is the center of the string. Green shows the offsets given, while blue shows the indexes returned. The bottom string demonstrates textDirection = "rtl" and textAlign = "start".
An offset past the beginning of the text always returns 0, and past the end returns the string length. Note that the offset is always measured left-to-right, even if the text direction is right-to-left. The “beginning” and “end” of the text string do respect the text direction, so for RTL text the beginning is on the right.
The getIndexFromOffset function may produce very counter-intuitive results when the text string has mixed bidi content, such as a latin substring within an arabic string. As the offset moves along the string the positions will not steadily increase, or decrease, but may jump around at the boundaries of a directional run. Full handling of bidi content requires incorporating bidi level information, particularly for selecting text, and is beyond the scope of this article.
Selected text is normally indicated by drawing a highlight over the range, but to produce such an effect in canvas requires estimating the rectangle using existing text metrics, and again making multiple queries to text metrics to obtain the left and right extents. The new TextMetrics getSelectionRects(start, end) function returns a list of browser-defined selection rectangles for the given subrange of the string. There may be multiple rectangles because the browser returns one for each bidi run; you would need to draw them all to highlight the complete range. The demo assumes a single rectangle because it assumes no mixed-direction strings.
selection_rect = text_metrics.getSelectionRects(selection_range[0], selection_range[1])[0];
...
context.fillStyle = 'yellow';
context.fillRect(selection_rect.x + string_x,
selection_rect.y + string_y,
selection_rect.width,
selection_rect.height)
Like all the new methods, the rectangle returned is in the coordinate system of the string, as defined by the transform, textAlign and textBaseline.
The new Canvas Text Metrics described here are in the process of standardization. Follow WHATWG Issue #10677 and add your feedback.
The implementation of Canvas Text Features was aided by Igalia S.L. funded by Bloomberg L.P.
Instead of using Chromium for browsing the Web, let’s explore how to use it for building applications.
Chromium is open-source and its codebase is organized into components which can be used for many different purposes. For example, Chromium is used for building browsers other than Chrome, like Edge, Brave, and Vivaldi, among others. You may also be familiar with V8, the Chromium JavaScript engine that can be used to power server-side scripting, as in Node.js and Deno.
If you've played Monopoly, you'll know about the Bank Error in Your Favor card in the Community Chest. Remember this?
card in the Community Chest. Remember this?
A bank error in your favor? Sweet! But what if the bank makes an error in its favor? Surely that's just as possible, right?
I'm here to tell you that if you're doing everyday financial calculations—nothing fancy, but involving money that you care about—then you might need to know that, if you're using binary floating-point numbers, something might be going wrong. Let's see how binary floating-point numbers might yield bank errors in your favor—or the bank's.
In a wonderful paper on decimal floating-point numbers, Mike Cowlishaw gives an example along these lines: a phone call costs 0.70, to which 5% tax is added; the exact total is 0.735, which should round to 0.74, but binary floating-point arithmetic yields 0.73.
Here's how you can reproduce that in JavaScript:
(1.05 * 0.7).toPrecision(2);
// 0.73
Some programmers might not be aware of this, but many are. By pointing this out I'm not trying to be a smartypants who knows something you don't. For me, this example illustrates just how common this sort of error might be.
For programmers who are aware of the issue, one typical approach to dealing with it is this: never work with sub-units of a currency. (Some currencies don't have this issue. If that's you and your problem domain, you can kick back and be glad that you don't need to engage in the following sorts of headaches.) For instance, when working with US dollars or euros, this approach mandates that one never works with dollars (or euros) and cents, but only with cents. In this setting, dollars exist only as an abstraction on top of cents. As far as possible, calculations never use floats. But if a floating-point number threatens to come up, some form of rounding is used.
Another approach for a programmer is to delegate financial calculations to an external system, such as a relational database, that natively supports proper decimal calculations. One difficulty is that even if one delegates these calculations to an external system, if one lets a floating-point value flow into your program, even a value that can be trusted, it may become tainted just by being imported into a language that doesn't properly support decimals. If, for instance, the result of a calculation done in, say, Postgres, is exactly 0.1, and that flows into your JavaScript program as a number, it's possible that you'll be dealing with a contaminated value. For instance:
(0.1).toPrecision(25)
// 0.1000000000000000055511151
This example, admittedly, requires quite a lot of decimals (19!) before the ugly reality of the situation rears its head. The reality is that 0.1 does not, and cannot, have an exact representation in binary. The earlier example with the cost of a phone call is there to raise your awareness of the possibility that one doesn't need to go 19 decimal places before one starts to see some weirdness showing up.
There are all sorts of examples of this. It's exceedingly rare for a decimal number to have an exact representation in binary. Of the numbers 0.1, 0.2, …, 0.9, only 0.5 can be exactly represented in binary.
Next time you look at a bank statement, or a bill where some tax is calculated, I invite you to ask how that was calculated. Are they using decimals, or floats? Is it correct?
I'm working on the decimal proposal for TC39 to try to work out what it might be like to add proper decimal numbers to JavaScript. There are a few very interesting degrees of freedom in the design space (such as the precise datatype to be used to represent these kinds of numbers), but I'm optimistic that a reasonable path forward exists, that consensus between JS programmers and JS engine implementors can be found, and that it can eventually be implemented. If you're interested in these issues, check out the README in the proposal and get in touch!
I had the pleasure of learning about Lean 4 with David Christiansen and Joachim Breitner at their tutorial at BOBKonf 2024. I’m planning on doing a couple of formalizations with Lean and would love to share what I learn as a total newbie, working on macOS.
I’m on macOS and use Homebrew extensively. My simple go-to approach to finding new software is to do brew search lean. This revealed lean and also surfaced elan. Running brew info lean showed me that that package (at the time I write this) installs Lean 3. But I know, out-of-band, that Lean 4 is what I want to work with. Running brew info elan looked better, but the output reminds me that (1) the information is for the elan-init package, not the elan cask, and (2) elan-init conflicts with both the elan cask and the aforementioned lean. Yikes! This strikes me as a potential problem for the community, because I think Lean 3, though it still works, is presumably not where new Lean development should be taking place. Perhaps the Homebrew formula for Lean should be renamed to lean3, and a new lean4 package should be made available. I’m not sure. The situation seems less than ideal, but in short, I have been successful with the elan-init package.
After installing elan-init, you’ll have the elan tool available in your shell. elan is the tool used for maintaining different versions of Lean, similar to nvm in the Node.js world or pyenv in the Python world.
When I did the Lean 4 tutorial at BOB, I worked entirely within VS Code and created a new standalone package using some in-editor functionality. At the command line, I use lake init to manually create a new Lean package. At first, I made the mistake of running this command, assuming it would create a new directory for me and set up any configuration and boilerplate code there. I was surprised to find, instead, that lake init sets things up in the current directory, in addition to creating a subdirectory and populating it. Using lake --help, I read about the lake new command, which does what I had in mind. So I might suggest using lake new rather than lake init.
What’s in the new directory? Doing tree foobar reveals
foobar
├── Foobar
│ └── Basic.lean
├── Foobar.lean
├── Main.lean
├── lakefile.lean
└── lean-toolchain
Taking a look there, I see four .lean files. Here’s what they contain:
Main.lean
import «Foobar»
def main : IO Unit :=
  IO.println s!"Hello, {hello}!"
Foobar.lean
-- This module serves as the root of the `Foobar` library.
-- Import modules here that should be built as part of the library.
import «Foobar».Basic
Foobar/Basic.lean
def hello := "world"
lakefile.lean
import Lake
open Lake DSL
package «foobar» where
  -- add package configuration options here

lean_lib «Foobar» where
  -- add library configuration options here

@[default_target]
lean_exe «foobar» where
  root := `Main
It looks like there’s a little module structure here, and a reference to the identifier hello, defined in Foobar/Basic.lean and made available via Foobar.lean. I’m not going to touch lakefile.lean for now; as a newbie, it looks scary enough that I think I’ll just stick to things like Basic.lean.
There’s also an automatically created .git there, not shown in the directory output above.
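With that boilerplate in place, building and running the package takes just two commands, executed from inside the foobar directory (lake exe runs the foobar executable target declared in lakefile.lean):
lake build        # compiles the Foobar library and the foobar executable
lake exe foobar   # runs it and prints "Hello, world!"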
Now that you’ve got Lean 4 installed and set up a package, you’re ready to dive into one of the official tutorials. The one I’m working through is David’s Functional Programming in Lean. There are all sorts of additional things to learn, such as all the different lake commands. Enjoy!
As usual, let's start by introducing the problem. Suppose you want to produce either a Debian-derived sysroot for cross-compilation, something you can chroot into, or even a full image you can boot with QEMU or on real hardware. Debootstrap can get you started and has minimal external dependencies. If you wish to avoid using sudo, running debootstrap under fakeroot and fakechroot works if building a rootfs for the same architecture as the current host, but it has problems out of the box for a foreign architecture. These tools are packaged and in the main repositories for at least Debian, Arch, and Fedora, so a solution that works without additional dependencies is advantageous.
I'm presenting my preferred solution / approach in the first subheading and relegating more discussion and background explanation to later on in the article, in order to cater for those who just want something they can try out without wading through lots of text.
Warning: I haven't found fakeroot to be as robust as I would like, even knowing its fundamental limitations with e.g. statically linked binaries. Specifically, a sporadically reproducible case involving installing lots of packages on riscv64 sid resulted in /usr/lib/riscv64-linux-gnu/libcbor.so.0.10.2 being given the directory bit in fakeroot's database (which I haven't yet managed to track down to the point I can file a useful bug report). I'm sharing this post because the approach may still be useful to people, especially if you rely on fakeroot for only the minimum needed to get a bootable image in qemu-system.
Not explored in this article: using newuidmap/newgidmap with appropriate /etc/subuid (see here), though note one-off setup is needed to allow your user to set sufficient UIDs.
Assuming you have debootstrap and fakeroot installed (sudo pacman -S debootstrap fakeroot will suffice on Arch), and to support transparent emulation of binaries for other architectures you also have user-mode QEMU installed and set to execute via binfmt_misc (sudo pacman -S qemu-user-static qemu-user-static-binfmt on Arch), we proceed to:
- Run the first (host-side) stage of debootstrap under fakeroot on the host, saving the state (the uid/gid and permissions set after operations like chown/chmod) to a file, and including fakeroot in the list of packages to install for the target.
- Extract fakeroot's .debs into the directory tree created by debootstrap (as we need to be able to use it as a pre-requisite of initiating the second-stage debootstrap which extracts and installs all the packages).
- Use an unprivileged user namespace to chroot into the debootstrapped sysroot without needing root permissions, then give the illusion of permissions to set arbitrary uid/gid and other permissions on files via fakeroot (loading the environment saved earlier).
Translated into shell commands (and later a script), you can do this by:
SYSROOT_DIR=sysroot-deb-riscv64-sid
TMP_FAKEROOT_ENV=$(mktemp)
fakeroot -s "$TMP_FAKEROOT_ENV" debootstrap \
--variant=minbase \
--include=fakeroot,symlinks \
--arch=riscv64 --foreign \
sid \
"$SYSROOT_DIR"
mv "$TMP_FAKEROOT_ENV" "$SYSROOT_DIR/.fakeroot.env"
fakeroot -i "$SYSROOT_DIR/.fakeroot.env" -s "$SYSROOT_DIR/.fakeroot.env" sh <<EOF
ar p "$SYSROOT_DIR"/var/cache/apt/archives/libfakeroot_*.deb 'data.tar.xz' | tar xv -J -C "$SYSROOT_DIR"
ar p "$SYSROOT_DIR"/var/cache/apt/archives/fakeroot_*.deb 'data.tar.xz' | tar xv -J -C "$SYSROOT_DIR"
ln -s fakeroot-sysv "$SYSROOT_DIR/usr/bin/fakeroot"
EOF
cat <<'EOF' > "$SYSROOT_DIR/_enter"
#!/bin/sh
export PATH=/usr/sbin:$PATH
FAKEROOTDONTTRYCHOWN=1 unshare -fpr --mount-proc -R "$(dirname -- "$0")" \
fakeroot -i .fakeroot.env -s .fakeroot.env "$@"
EOF
chmod +x "$SYSROOT_DIR/_enter"
"$SYSROOT_DIR/_enter" debootstrap/debootstrap --second-stage
You'll note this creates a helper _enter within the root of the rootfs for chrooting into it and executing fakeroot with appropriate arguments.
If you want to use this rootfs as a sysroot for cross-compiling, you'll need to convert any absolute symlinks to relative symlinks so that they resolve properly when being accessed outside of a chroot. We use the symlinks utility installed within the target filesystem for this:
"$SYSROOT_DIR/_enter" symlinks -cr .
I've written a slightly more robust and configurable encapsulation of the above logic in the form of rootless-debootstrap-wrapper, which I would recommend using/adapting in preference to the above. Further code examples in the rest of this post use the rootless-debootstrap-wrapper script for convenience.
Depending on how you look at it, fakeroot is either a horrendous hack or a clever use of LD_PRELOAD developed at a time when there weren't lots of options for syscall interposition. As there's been so much development in that area I'd hope there are other alternatives by now, but I didn't see anything that's quite so easy to use, well tested for this use case, widely packaged, and up to date.
I've avoided using fakechroot both because I couldn't get it to work reliably in the cross-architecture bootstrap scenario, and also because thinking through how it logically should work in that scenario is fairly complex. Given we're able to use user namespaces to chroot, let's save ourselves the hassle and do that. Except there was a slight hiccup in that chown was failing (running under fakeroot) when chrooted in this way. Thankfully the folks in the buildroot project had run into the same issue and their patch alerted me to the undocumented FAKEROOTDONTTRYCHOWN environment variable. As written up in that commit message, the issue is that under a user namespace with limited uid/gid mappings (in my case, just one), chown returns EINVAL, which isn't masked by fakeroot unless this environment variable is set.
There has of course been previous work on rootless debootstrap, notably Johannes Schauer's blog post that takes a slightly different route (by my understanding, including communication between the LD_PRELOADed fakeroot on the target and a faked running on the host). A variant of this approach is used in mmdebstrap from the same author.
- As fakeroot.env is keyed by the inode, you may lose important permissions information if you copy the rootfs. You should instead tar it under fakeroot, and if extracting in an unprivileged environment again, untar it under fakeroot, creating a new fakeroot.env.
- unshare requires that unprivileged user namespace support is enabled. I believe this is the case in all common distributions by now, but please check your distro's guidance if not.
Just to demonstrate how this works, here is how you can debootstrap all architectures supported by Debian + QEMU (except for mips, where I had issues with qemu) and then run a trivial test - compiling and running a hello world:
#!/bin/sh
error() {
  printf "!!!!!!!!!! Error: %s !!!!!!!!!!\n" "$*" >&2
  exit 1
}
# TODO: mips skipped due to QEMU issues.
ARCHES="amd64 arm64 armel armhf i386 ppc64el riscv64 s390x"
mkdir -p "$HOME/debcache"
for arch in $ARCHES; do
  rootless-debootstrap-wrapper \
    --arch=$arch \
    --suite=sid \
    --cache-dir="$HOME/debcache" \
    --target-dir=debootstrap-all-test-$arch \
    --include=build-essential || error "Debootstrap failed for arch $arch"
done
for arch in $ARCHES; do
  rootfs_dir="./debootstrap-all-test-$arch"
  cat <<EOF > "$rootfs_dir/hello.c"
#include <stdio.h>
#include <sys/utsname.h>
int main() {
  struct utsname buffer;
  if (uname(&buffer) != 0) {
    perror("uname");
    return 1;
  }
  printf("Hello from %s\n", buffer.machine);
  return 0;
}
EOF
  ./debootstrap-all-test-$arch/_enter sh -c "gcc hello.c && ./a.out"
done
Executing the above script eventually gives you:
Hello from x86_64
Hello from aarch64
Hello from armv7l
Hello from armv7l
Hello from x86_64
Hello from ppc64le
Hello from riscv64
Hello from s390x
(The repeated "armv7l" is because armel and armhf differ in ABI rather than the architecture as returned by uname).
Here is how to use the tool to build a bootable RISC-V image. First build the rootfs:
TGT=riscv-sid-for-qemu
rootless-debootstrap-wrapper \
--arch=riscv64 \
--suite=sid \
--cache-dir="$HOME/debcache" \
--target-dir=$TGT \
--include=linux-image-riscv64,zstd,default-dbus-system-bus || error "Debootstrap failed"
cat - <<EOF > $TGT/etc/resolv.conf
nameserver 1.1.1.1
EOF
"$TGT/_enter" sh -e <<EOF
ln -s /dev/null /etc/udev/rules.d/80-net-setup-link.rules # disable persistent network names
cat - <<INNER_EOF > /etc/systemd/network/10-eth0.network
[Match]
Name=eth0
[Network]
DHCP=yes
INNER_EOF
systemctl enable systemd-networkd
echo root:root | chpasswd
ln -sf /dev/null /etc/systemd/system/serial-getty@hvc0.service
EOF
Then produce an ext4 partition and extract the kernel and initrd:
fakeroot -i riscv-sid-for-qemu/.fakeroot.env sh <<EOF
ln -L riscv-sid-for-qemu/vmlinuz kernel
ln -L riscv-sid-for-qemu/initrd.img initrd
fallocate -l 30GiB rootfs.img
mkfs.ext4 -d riscv-sid-for-qemu rootfs.img
EOF
And boot it in qemu:
qemu-system-riscv64 \
-machine virt \
-cpu rv64 \
-smp 4 \
-m 8G \
-device virtio-blk-device,drive=hd \
-drive file=rootfs.img,if=none,id=hd,format=raw \
-device virtio-net-device,netdev=net \
-netdev user,id=net,hostfwd=tcp:127.0.0.1:10222-:22 \
-bios /usr/share/qemu/opensbi-riscv64-generic-fw_dynamic.bin \
-kernel kernel \
-initrd initrd \
-object rng-random,filename=/dev/urandom,id=rng \
-device virtio-rng-device,rng=rng \
-nographic \
-append "rw noquiet root=/dev/vda console=ttyS0"
You can then log in with user root and password root. We haven't installed sshd so far, but the above command line sets up forwarding from port 10222 on the local interface to port 22 on the guest in anticipation of that.
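As a hypothetical follow-up, you could install an SSH server inside the booted guest and then connect from the host through that forwarded port (note that Debian's default sshd configuration disallows root password logins, so you would also need to adjust PermitRootLogin or install a key):
# Inside the guest, as root:
apt-get update && apt-get install -y openssh-server
# Back on the host:
ssh -p 10222 root@127.0.0.1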
Update on what happened in WebKit in the week from November 23 to December 2.
The documentation on GTK/WPE port profiling with Sysprof landed upstream.
Support for anchor-center alignment landed upstream for all the WebKit ports. This is part of a cutting-edge CSS spec called CSS Anchor Positioning. To test this feature, the CSSAnchorPositioning runtime preference needs to be enabled.
WebKit has long offered a non-standard method, Document.caretRangeFromPoint(), to get the caret range at a certain coordinate, but now offers the same functionality in a standardised way.
We improved multi-touch support on WPE: the touch identifiers are now more reliable when using the Pointer Events Web API. This has been backported to the latest stable release, 2.46.4.
The built-in JavaScript/ECMAScript engine for WebKit, also known as JSC or SquirrelFish.
On the JSC front, Justin Michaud has fixed a tricky issue in the implementation of Air shuffles (i.e. smartly copying N arbitrary locations to N different arbitrary locations). He also fixed some lowering code that generated invalid B3, as well as the 32-bit version of addI31Ref (part of the GC wasm extension).
Angelos Oikonomopoulos fixed another corner case in the testing of single-precision floating point arguments on 32-bits.
Support for multi-threaded GPU rendering landed upstream for both the GTK and WPE ports. In the main branch, GPU-accelerated tile rendering was already activated by default—it is still the case, but now it utilizes one extra GPU rendering thread instead of performing the GPU rendering using (and blocking) the main thread.
The number of threads used for CPU multi-threaded rendering was controlled by the WEBKIT_SKIA_PAINTING_THREADS environment variable, which has been renamed to WEBKIT_SKIA_CPU_PAINTING_THREADS. Likewise, we now support the WEBKIT_SKIA_GPU_PAINTING_THREADS setting (where 0 implies using the main thread, and values in the 1 to 4 range enable threaded GPU rendering) to control the number of GPU rendering threads used.
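For instance, a local build could be launched with two GPU rendering threads like this (a sketch using the usual WebKit helper script; adjust for your own setup):
WEBKIT_SKIA_GPU_PAINTING_THREADS=2 Tools/Scripts/run-minibrowser --gtk https://example.org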
Negotiation of buffer formats with Wayland using DMA-BUF feedback was picking the first format that fit the requirements in the first tranche, even when the transparency did not match. Now we honor the transparency if there is a way to do it, even when tranches other than the first one need to be used. This allows the compositor to do direct scanout in more cases.
This has been a week filled with releases!
On the stable series, WebKitGTK 2.46.4 and WPE WebKit 2.46.4 include the usual stream of small fixes, a number of multimedia handling improvements focused around Media Stream, and two important security fixes covered in a new security advisory (WSA‑2024‑0007: GTK, WPE). The covered vulnerabilities are known to be exploited in the wild, and updating is strongly encouraged; fresh packages are already available (or will be soon) in popular Linux distributions.
Also, development releases WebKitGTK 2.47.2 and WPE WebKit 2.47.2 are now available. The main highlights are the multi-threaded GPU rendering, and the added system settings API in WPEPlatform. These development snapshots are often timed around important changes; we greatly appreciate when people put the effort to give them a try, because detecting (and reporting) any issues earlier is a great help that gives us developers more time to polish the code before it reaches a stable version.
Flatpak 1.15.11 was released with a handful of patches related to accessibility. These patches enable WebKit accessibility to work in sandboxed environments. With this release, all the pieces of this puzzle fell in place, and now sandboxed apps that use WebKit are properly accessible and introspectable by screen readers and Braille generators.
Of course, there are further improvements to be made, and lots of fine-tuning to how WebKit handles accessibility of web pages. But this is nonetheless an exciting step, both for accessibility on Linux and also for the platform.
A WPE MiniBrowser runner for the Web-Platform-Tests (WPT) cross-browser test suite was added recently. Please check the documentation on how to use it, and remember that a WebKitGTK MiniBrowser runner is also available there. Both runners allow you to automatically download and use the latest nightly universal bundle for running the tests if you pass the flag --install-browser to ./wpt run. Pass also --log-mach=- for increased verbosity. Please note that this only adds the runner for manual testing. We are still working on adding WPE to the automated testing dashboard at wpt.fyi.
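As an illustration, a manual run against the WPE MiniBrowser could look like the following (the wpewebkit_minibrowser product name and the test path are assumptions on my part; check the linked documentation for the exact invocation):
./wpt run --install-browser --log-mach=- wpewebkit_minibrowser css/css-grid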
Justin Michaud submitted a fix for flashing Yocto images to external SD cards.
The WPE WebKit web site now has a separate RSS feed for security advisories. It can be reached at https://wpewebkit.org/security.xml and may be useful for those interested in automated notifications about security fixes.
That’s all for this week!
Let's talk about priorities, technical debt and hard problems in the Web Platform...
In many ways, browser engine projects are not that different from most other software projects. The "stewards" still have teams with managers and specializations and budgets and people out on leave, and so on. They have to prioritize and plan. And just like every other project I've ever seen, they face the same kinds of pressures and problems: There are never enough resources for everything, there are always new asks, they always accumulate tech debt, and sometimes there are really hard problems.
What is perhaps special about them is that they are trying to be more than just an isolated program: they're trying to contribute to a standard, interoperable platform. The catch, for us, is that we only reap the benefit when something makes it all the way through all of the teams' independent priority gauntlets, gets shipped widely, and so on.
This can be kind of painful to experience.
For example, consider the <details>
element. It's literally the simplest possible interactive element. Here's how we got it:
If you're counting, that's nine years it took to reach shipping in all browsers. The newly defined "Baseline Widely Available", which indicates roughly when something should have reached as close to 100% market-share/deployment as possible, would take another three... So, more or less, that happened last year.
And that's just the initial and appealing part. Then there are, of course, bugs discovered, new tests added, and ultimately feature improvements and iterations and so on. Even currently there are newly failing tests that make support for <details> ragged as we've tried to improve things like find-in-page and add new concepts like invokers.
As time goes by, we're accumulating squares in the feature grid and new "gaps" in it faster than we're filling them. We're accumulating tech debt.
Interop was originally intended as a way to pay down that debt: let's pick some things to prioritize together and turn all of the little red failure squares green. But prioritizing is tricky.
We'll get very roughly around 100 submissions of what we should focus on every year. But Interop is merely allowing browser makers to agree on how to focus and prioritize on some of the same things. The resources themselves are still finite. That means that prioritizing some things inevitably means not prioritizing something else.
And, there are a lot of competing pressures about what to prioritize, and why.
For example: It is super effective for developers if we can focus initial developments together. Imagine if we could have delivered <details> across the board and at very high quality in 2011 or 2012.
Focusing together on a few new features has other added benefits too. People are more excited to work on it, for one. We also get everyone talking about the same things at the same time, which is helpful - nobody misses the big event. It means use will grow faster, etc. It gives us something like ECMA's annual editions. So, it's a little unsurprising that last year, Interop included areas like CSS nesting, popover, relative color syntax, and declarative shadow DOM.
However - at the other end of the spectrum - there are lots of things which are already very ragged. These things are damned hard to prioritize. They're all over the map. They are of obviously different, and debatable, kinds of value to sometimes very different communities. They can also incur different costs on different engines, and so on.
All of this conspires to create some perennially hard problems. They remain unmet needs, sometimes for exceedingly long times.
This year, I'm making the case that we need to find a way to prioritize those perennially hard problems which, for whatever reason, we can never seem to prioritize. Perhaps every 5, 7 or 10 years we should focus on these kinds of projects.
If you've been reading my blog or listening to our podcast, then you're already aware that MathML and SVG are probably the biggest examples of this kind of problem. Both are among the oldest web specifications, having their first versions published about the same time as HTML 4.0 and CSS 2. They were specially integrated with the HTML parser, and are integrated into the HTML Living Standard (MathML, SVG).
Yet both are historically dramatically under-funded, and much of the actual work on them has been funded by volunteers and non-steward organizations! 26 years later we're still struggling to find the will to cross some important last miles.
Thus, every year, we have submissions about both for Interop.
The 2024 State of HTML survey found that <svg> was the top content pain point cited by developers, with almost double the pain attributed to “browser support”. <svg> - that is, the literal <svg> element, not including the other ways SVGs can be used - is used on over 55% of HTML pages in the HTTP Archive data. Only 27 of HTML's roughly 130 elements are more popular. SVG is also used heavily in embedded applications powered by Web engines.
A lot of math content is in more specific sites like arXiv and Wikipedia, which each have millions and millions of equations, or in online education or books. The HTTP Archive crawl isn't the best way to measure that, since it is focused mainly on public home pages where there's not likely to be a lot of math. However, even in the crawl, we still see thousands of pages load two of the most popular JavaScript libraries that bridge the gaps instead of rendering native math. This hurts performance and is unique - we don't require JavaScript to render text. We also know that numerous document editing tools like Adobe InDesign and Microsoft Word support MathML. Those are complex applications which require a lot of script already, and lacking good support means that they have to load even more.
Igalia has contributed implementations and improvements with funding from others and ourselves. Every year we have invested a bit ourselves to keep things moving forward. But it moves slowly this way. What we really need are some concerted efforts to push us across those last miles. We'd go a lot farther, a lot faster, together.
If you support the idea of some focus and push on these, please let us know - let vendors know. It might help.
Of course, it might not, either. Historically, it's been difficult. What we know works is for someone outside of the vendors to do the work - or fund it. Igalia will keep plugging away, but without external funding our own investments only go so far. If your organization would benefit from these, consider financially sponsoring some work. Alternatively, you can also help fund work on MathML directly.
Update on what happened in WebKit in the week from November 15 to November 22.
The getImageData() canvas method has been optimized to avoid an intermediate memory copy. This made fetching pixel data about ten times faster on the embedded hardware and laptops with integrated GPUs used for testing. The improvement is slated for inclusion in the upcoming 2.46.4 stable release.
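To illustrate the path that benefits, here is a minimal, hypothetical snippet reading pixels back from a 2D canvas; the speedup applies to this read-back call, and the exact factor depends on hardware and canvas size:

// Minimal sketch: reading pixels back from a 2D canvas.
const canvas = document.createElement("canvas");
canvas.width = canvas.height = 1024;
const ctx = canvas.getContext("2d");
ctx.fillStyle = "rebeccapurple";
ctx.fillRect(0, 0, canvas.width, canvas.height);

// This read-back previously went through an intermediate buffer;
// the optimization removes that extra memory copy.
const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
console.log(pixels.data.length); // 1024 * 1024 * 4 bytes (RGBA)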
The WebKit#31458 PR landed today. This adds a mechanism that leverages the damage information to reduce the amount of painting during composition. The biggest gains are expected with WPE used on embedded devices.
Running WebKit layout tests using the multi-threaded Skia CPU mode (WEBKIT_SKIA_ENABLE_CPU_RENDERING=1, WEBKIT_SKIA_PAINTING_THREADS>0) fired assertions in debug/assert-enabled builds. Recording a DisplayList on the main thread and replaying it on a worker thread exposed a thread-safety issue: the Pattern class was not expecting to be dereferenced from a non-main thread. Pattern now inherits from ThreadSafeRefCounted to fix the problem.
Traditionally, we supported multi-threaded tile rendering using Cairo (which is CPU-only), and also using Skia in CPU rendering mode. Skia with GPU accelerated rendering is driven from the main thread and does not support multi-threading. However, there is a non-negligible amount of CPU work to be performed prior to using the GPU for rendering, where it can be beneficial to parallelize that work across multiple cores.
Preparation is ongoing for threaded GPU rendering, by adding GPU synchronization primitives to NativeImage and ImageBuffer for Skia, and making use of the new GPU synchronization primitives during DisplayList recording (on the main thread) and replays—which will happen in a GPU worker thread, once we have added support for that.
GStreamer-based multimedia support for WebKit, including (but not limited to) playback, capture, WebAudio, WebCodecs, and WebRTC.
Canvas to PeerConnection streaming was fixed: an issue with the handling of video orientation tags led to flipped frames on the receiving side.
Screen capture support using PipeWire and GStreamer was fixed: DMA-BUFs are now negotiated with PipeWire, enabling zero-copy rendering to video elements. Screen capture streaming to a PeerConnection is still an open issue, though.
New, modern platform API that supersedes usage of libwpe and WPE backends.
WPEPlatform now supports a Settings API allowing platforms and applications to set options such as fonts or dark mode. This can be tested by launching MiniBrowser and passing an INI-style configuration file with settings:
MiniBrowser --use-wpe-platform-api --config-file=config.ini
Adaptation of WPE WebKit targeting the Android operating system.
WPE-Android got updated to WebKit 2.46.3. Coming from 2.46.0, it includes a fix for DuckDuckGo result links, better text kerning, a better-performing Canvas putImageData() operation, improved selection of H.264 encoding parameters, and more.
As usual, the 0.1.2 release at GitHub contains the downloadable .apk
packages. The Maven repository has been updated as well.
Producing WPE-Android releases on GitHub has been automated, and version 0.1.2 has already been made this way, the only manual intervention being the approval of the draft created by the CI setup.
GNOME Web Canary builds are working again, now based on the GNOME Flatpak runtime instead of the soon-to-be-deprecated WebKit Flatpak SDK runtime. To install it run:
flatpak --user install \
https://nightly.gnome.org/repo/appstream/org.gnome.Epiphany.Canary.flatpakref
This version of GNOME Web leverages nightly WebKitGTK builds from the WebKit Git main
development branch.
The WebKitGTK Debian 11 bot has been retired. We officially stopped supporting WebKitGTK on Debian 11 on June 12th (one year after the release of Debian 12); however, we have been maintaining WebKitGTK on Debian 11 for longer than initially expected. Debian 11 security support reached end-of-life on August 14th, 2024.
That’s all for this week!
Last week, we talked about Chrome, Google’s browser.
We discussed open technologies, cooperativism, and Chromium's governance with a focus on a transparent future. We talked about the development of browser variants, their use on different platforms, and how Generative AI is being approached in Chrome.
XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers.
Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition.
Short on Time?
This meeting took three hours and tackled a variety of topics related to DRM/KMS (the Linux DRM Kernel Mode-Setting API).
While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants.
Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD)
Link: https://indico.freedesktop.org/event/6/contributions/383/
Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed talk follow-ups with new ideas and ongoing community efforts.
The final agenda covered five topics, discussed below in the scheduled order.
Similar to the hackfest, the meeting agenda evolved over the conference. During the three hours of the meeting, I coordinated the room and the discussion rounds, while Rodrigo Siqueira took notes and also contacted key developers to put together a detailed report of the many topics discussed.
From his notes, let’s dive into the key discussions!
Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.
We discussed real-time scheduling during this year's Linux Display Next hackfest and, during XDC 2024, Jonas Adahl brought up issues uncovered while making progress on this front.
This is a well-known topic with ongoing effort on all layers of the Linux Display stack and has been discussed online and in-person in conferences and meetings over the last years.
Here’s a breakdown of the key points raised at this meeting:
Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and “shipping” the DRM/KMS plane color management API!
During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.
In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.
To address this issue, we discussed several potential improvements:
By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.
Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Also, Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li.
This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack.
Stay tuned for future updates!
Some days ago I wrote about the new VK_EXT_device_generated_commands Vulkan extension that had just been made public. Soon after that, I presented a talk at XDC 2024 with a brief introduction to it. It’s a lightning talk that lasts just about 7 minutes and you can find the embedded video below, as well as the slides and the talk transcription if you prefer written formats.
Truth be told, the topic deserves a longer presentation, for sure. However, when I submitted my talk proposal for XDC I wasn't sure if the extension was going to be public by the time XDC took place. This meant I had two options: if I submitted a half-slot talk and the extension was not public, I would need to talk for 15 minutes about some general concepts and a couple of NVIDIA vendor-specific extensions: VK_NV_device_generated_commands and VK_NV_device_generated_commands_compute. That would be awkward, so I went with a lightning talk where I could cover those general concepts and, maybe, some VK_EXT_device_generated_commands specifics if the extension was public, which is exactly what happened.
Fortunately, I will talk again about the extension at Vulkanised 2025. It will be a longer talk and I will cover the topic in more depth. See you in Cambridge in February and, for those not attending, stay tuned because Vulkanised talks are recorded and later uploaded to YouTube. I'll post the link here and on social media once it's available.
Hello, I'm Ricardo from Igalia and I'm going to talk about Device-Generated Commands in Vulkan. This is a new extension that was released a couple of weeks ago. I wrote CTS tests for it, I helped with the spec, and I worked with some actual heroes, some of them present in this room, who managed to get this implemented in a driver.
Device-Generated Commands is an extension that allows apps to go one step further in GPU-driven rendering because it makes it possible to write commands to a storage buffer from the GPU and later execute the contents of the buffer without needing to go through the CPU to record those commands, like you typically do by calling vkCmd functions working with regular command buffers.
It’s one step ahead of indirect draws and dispatches, and one step behind work graphs.
Getting away from Vulkan momentarily, if you want to store commands in a storage buffer there are many possible ways to do it. A naïve approach we can think of is creating the buffer as you see in the slide. We assign a number to each Vulkan command and store it in the buffer. Then, depending on the command, more or less data follows. For example, let's take the sequence of commands in the slide: (1) push constants followed by (2) dispatch. We can store a token number, or command id, or whatever you want to call it, to indicate push constants, then we follow with metadata about the command (which is the section in green) containing the layout, stage flags, offset and size of the push constants. Finally, depending on the size, we store the push constant values, which is the first chunk of data in blue. For the dispatch it's similar, except that it doesn't need metadata because we only want the dispatch dimensions.
But this is not how GPUs work. A GPU would have a very hard time processing this. Also, Vulkan doesn’t work like this either. We want to make it possible to process things in parallel and provide as much information in advance as possible to the driver.
So in Vulkan things are different. The buffer will not contain an arbitrary sequence of commands where you don’t know which one comes next. What we do is to create an Indirect Commands Layout. This is the main concept. The layout is like a template for a short sequence of commands. We create this layout using the tokens and meta-data that we saw colored red and green in the previous slide.
We specify the layout we will use in advance and, in the buffer, we only store the actual data for each command. The result is that the buffer containing commands (let's call it the DGC buffer) is divided into small chunks, called sequences in the spec, and the buffer can contain many such sequences, but all of them follow the layout we specified in advance.
In the example, we have push constant values of a known size followed by the dispatch dimensions. Push constant values, dispatch. Push constant values, dispatch. Etc.
The second thing Vulkan does is to severely limit the selection of available commands. You can't just start render passes or bind descriptor sets or do anything you can do in a regular command buffer. You can only do a few things, and they're all in this slide. There's general stuff like push constants, stuff related to graphics like draw commands and binding vertex and index buffers, and stuff to dispatch compute or ray tracing work. That's it.
Moreover, each layout must have one token that dispatches work (draw, compute, trace rays) but you can only have one and it must be the last one in the layout.
Something that’s optional (not every implementation is going to support this) is being able to switch pipelines or shaders on the fly for each sequence.
Summing up, in implementations that allow you to do it, you have to create something new called Indirect Execution Sets, which are groups or arrays of pipelines that are more or less identical in state and, basically, only differ in the shaders they include.
Inside each set, each pipeline gets an index and you can change the pipeline used for each sequence by (1) specifying the Execution Set in advance (2) using an execution set token in the layout, and (3) storing a pipeline index in the DGC buffer as the token data.
The summary of how to use it would be:
First, create the commands layout and, optionally, create the indirect execution set if you’ll switch pipelines and the driver supports that.
Then, get a rough idea of the maximum number of sequences that you’ll run in a single batch.
With that, create the DGC buffer, query the required preprocess buffer size, which is an auxiliary buffer used by some implementations, and allocate both.
Then, you record the regular command buffer normally and specify the state you’ll use for DGC. This also includes some commands that dispatch work that fills the DGC buffer somehow.
Finally, you dispatch indirect work by calling vkCmdExecuteGeneratedCommandsEXT. Note you need a barrier to synchronize previous writes to the DGC buffer with reads from it.
You can also do explicit preprocessing but I won’t go into detail here.
That's it. Thanks for watching, thanks to Valve for funding a big chunk of the work involved in shipping this, and thanks to everyone who contributed!
Update on what happened in WebKit in the week from November 8 to November 15.
Sysprof received a round of improvements to the Marks Waterfall view: the hover tooltip now shows the duration of the mark. The Graphics view also received some visual improvements, such as taller graphs and line rendering without cutoffs. Finally, the Sysprof collector is now able to handle multiprocess scenarios better.
A new tool for Sysprof was added: sysprof-cat. It takes a capture file and dumps it in textual form.
This is all in preparation for further profiler integration in WebKit on Linux. A new set of integration points is being prepared for WebKit where it can, for example, report the page FPS and memory usage to Sysprof in the Graphics view.
The JSCOnly port may be built with support for the GLib main loop when configured with cmake -DPORT=JSCOnly -DEVENT_LOOP_TYPE=GLib. This is a seldom-used option and the build was broken for months, but it has now been fixed.
This week the team took some time to kickstart improvements to the documentation. One of the goals we have had in mind for a long time is adding pages to the manual on a number of topics, and in this vein Georges has added an overview page for WebKitGTK and Alex started a page listing some of the available environment variables.
In order to allow sharing selected content between the GTK and WPE ports, Adrian is adding support for setting up additional content directories for gi-docgen and for processing templates that pick fragments of the source files depending on the port.
Improving what we already have is important, and Lauro has clarified how WebKitWebView::is-controlled-by-automation works.
Lately we have been deploying nightly packaging bots, to provide binaries ready to use for different projects.
These bots run once per day and upload different built products that you can check below:
GNOME Web Canary (built products):
This one is meant to build GNOME Web with the GNOME SDK to produce the Canary builds of Web. Follow the progress at the corresponding Web merge request.
WebKitGTK and WPE WebKit MiniBrowser/WebDriver universal bundles. These universal bundles should work on any Linux distribution and are intended for running tests on third-party CI systems without having to build WebKit. They include inside the tarball all the system libraries and resources needed to run WebKit, from libc up to the Mesa graphics drivers, without requiring the usage of containers (a similar concept to AppImage). Currently these builds are used for the WPT tests at wpt.fyi, running on the Mozilla TaskCluster CI.
JSC universal bundle (built products). Same content as the other universal bundles, but only including the jsc command line program. This is currently used by jsvu to easily allow developers to test the latest version of JavaScriptCore.
That’s all for this week!
As many of you already know, Igalia took over the maintenance of the Servo project in January 2023. We’ve been working hard on bringing the project back to life again, and this blog post is a summary of our achievements so far.
You can skip this section if you already know about the Servo project.
Servo is an experimental browser engine created by Mozilla in 2012. From the very beginning, it was developed alongside the Rust language, as a showcase for the new language that Mozilla was developing. Servo has always aimed to be a performant and secure web rendering engine, as those are also main characteristics of the Rust language.
Mozilla was the main force behind Servo’s development for many years, with some other companies like Samsung collaborating too. Some of Servo’s components, like Stylo and WebRender, were adopted and used in Firefox releases by 2017, and continue to be used there today.
In 2020, Mozilla laid off the whole Servo team, and the project moved to the Linux Foundation. Despite some initial interest in the project, by the end of 2020 there was barely any active work on the project, and 2021 and 2022 weren’t any better. At that point, many considered the Servo project to be abandoned.
Things changed in 2023, when Igalia got external funding to work on Servo and get the project moving again. We have previous experience working on Servo during the Mozilla years, and we also have a wide experience working on other web rendering engines.
To explore new markets and grow a stable community, the Servo project joined Linux Foundation Europe in September 2023, and is now one of the most active projects there. LF Europe is an umbrella organization that hosts the Servo project, and provides us with opportunities to showcase Servo at several events.
Over the last two years, Igalia has had a team of around five engineers working full-time on the Servo project. We’re taking care of the project maintenance, communication, community governance, and a large portion of development. We have also been looking for new partners and organizations that may be interested in the Servo project, and can devote resources or money towards moving it forward.
Two years is not a lot of time for a project like Servo, and we’re very proud of what we’ve achieved in this period. Some highlights, including work from Igalia and the wider Servo community:
Of course, it’s impossible to summarize all of the work that happened in so much time, with so many people working together to achieve these great results. Big thanks to everyone that has collaborated on the project! 🙏
While it’s hard to list all of our achievements, we can take a look at some stats. Numbers are always tricky to draw conclusions from, but we’ve been taking a look at the number of PRs merged in Servo’s main repository since 2018, to understand how the project is faring.
|  | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 |
|---|---|---|---|---|---|---|---|
| PRs | 1,188 | 986 | 669 | 118 | 65 | 776 | 1,771 |
| Contributors | 27.33 | 27.17 | 14.75 | 4.92 | 2.83 | 11.33 | 26.33 |
| Contributors ≥ 10 | 2.58 | 1.67 | 1.17 | 0.08 | 0.00 | 1.58 | 4.67 |
As a clarification, these numbers don't include PRs from bots (dependabot and Servo WPT Sync).
Our participation in Outreachy has also made a marked difference. During each month-long contribution period, we get a huge influx of new people contributing to the project. This year, we participated in both the May and December cohorts, and the contribution periods are very visible in our March and October 2024 stats:
|  | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PRs | 103 | 97 | 255 | 111 | 71 | 96 | 103 | 188 | 167 | 249 | 170 | 161 |
| Contributors | 19 | 22 | 32 | 27 | 23 | 22 | 30 | 31 | 24 | 37 | 21 | 28 |
| Contributors ≥ 10 | 2 | 3 | 9 | 3 | 2 | 2 | 2 | 6 | 5 | 8 | 7 | 7 |
Overall, we think these numbers confirm that the project is back to life and that the community is growing and engaging with it. We're very happy about them.
Early this year we set up the Servo Open Collective and GitHub Sponsors, where many people and organizations have since been donating to the project. We decide how to spend this money transparently in the TSC, and so far we’ve used it to cover the project’s infrastructure costs, like self-hosted runners to speed up CI times.
We’re very grateful to everyone donating money to the project, so far adding up to over $24,500 from more than 500 donors. Thank you! 🙏
Servo has a lot of potential. Several things make it a very special project:
Of course, the project has some challenges too:
To finish on a positive note, we also have some key opportunities:
The future of Servo, as with many other open source projects, is still uncertain. We’ll see how things go in the coming years, but we would love to see that the Servo project can keep growing.
Servo is a huge project. To keep it alive and making progress, we need continuous funding on a bigger scale than crowdfunding can generally accomplish. If you’re interested in contributing to the project or sponsoring the development of specific functionality, please contact us at join@servo.org or igalia.com/contact.
Let’s hope we can walk this path together and keep working on Servo for many years ahead. 🤞
PS: Special thanks to my colleague Delan Azabani for proofreading and copyediting this blog post. 🙏
In October, many colleagues from Igalia participated in a TC39 meeting organized in Tokyo, Japan by Sony Interactive Entertainment to discuss proposed features for the JavaScript standard alongside delegates from various other organizations.
Let's delve together into some of the most exciting updates!
You can also read the full agenda and the meeting minutes on GitHub.
Import attributes (alongside JSON modules) reached Stage 4. Import attributes allow customizing how modules are imported. For example, in all JavaScript environments you'll be able to natively import JSON files using
import myData from "./data" with { type: "json" };
The proposals finally reached Stage 4, after a bumpy path - including being regressed from Stage 3 to Stage 2 in 2023 and needing to change their syntax.
The proposals are already supported in Chrome, Safari, and all the server-side runtimes, with Firefox soon to follow!
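The same attribute also works with dynamic imports; a small illustrative example (the file path is a placeholder):

// Dynamic import with an import attribute; the path is hypothetical.
const { default: config } = await import("./config.json", {
  with: { type: "json" },
});
console.log(config);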
Although we didn't work directly on the Iterator Helpers proposal, we've been eagerly anticipating its completion. It elevates JavaScript's standard library iterators to a level of developer convenience that's comparable to Python's itertools module or the iterators in Rust. Here's a code snippet:
const result = Iterator.from(myArray)
.filter(myPredicate)
.take(50)
.map(myTransformFunc)
.reduce(mySummationFunc);
It's often convenient to think of processing code such as the above in terms of map and filter operations. But you'd often have to iterate through the array multiple times if you wrote it that way using Array's map and filter methods, which is inefficient. Conversely, if you wrote it with a for-of loop, you'd be using break, continue, and possibly state-tracking variables, which is harder to reason about. Iterator helpers give you the best of both worlds.
ECMA-402 is the Internationalization API specification, the companion standard to JavaScript's ECMA-262. Our colleagues Ben Allen and Ujjwal Sharma are on the editorial board of ECMA-402 and their responsibilities often include proposing small changes. This round, we received consensus for PRs that:
Fix a bug that caused date and time values to sometimes be rendered in the wrong numbering system for some locales.
Correctly format currency values when rendered in scientific/engineering notations.
Give an explicit ordering for the plural categories returned by Intl.PluralRules.resolvedOptions(). This allows for easier testing of the correctness of this method, and makes it easier for developers to discover which plural categories exist in which languages.
Allow use of non-ISO 4217 data in the CurrencyDigits abstract operation. This is a small change, but one that makes the ECMA-402 specification more closely match both extant localization needs and web reality. Previously the specification mandated the use of a particular standard for determining the number of "minor units" displayed when formatting currency values - think of the number of digits used to display cents when formatting values such as 1.25 USD. The previously mandated data source is useful for some contexts, but not others. This PR allows implementors to use whichever source of data on currency minor units is best suited for that engine - something that implementers had already been doing. A small illustrative snippet of the affected APIs follows below.
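A minimal sketch of the APIs touched by the plural-categories and currency-notation changes; the printed outputs show the general shape only, not exact normative strings:

// Plural categories for a locale, now returned in a well-defined order.
const pr = new Intl.PluralRules("cy"); // Welsh uses many plural categories
console.log(pr.resolvedOptions().pluralCategories);
// e.g. ["zero", "one", "two", "few", "many", "other"]

// Currency values rendered in scientific notation, now formatted correctly.
const nf = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: "USD",
  notation: "scientific",
});
console.log(nf.format(123456.789)); // e.g. "$1.235E5"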
TG4, a task group created one year ago within TC39, has been diligently working to standardize and enhance the source maps functionality, with the goal of ensuring a good and consistent developer experience across multiple platforms.
Thanks also to the efforts by our colleagues Nicolò Ribaudo and Asumu Takikawa, TC39 approved the first draft of the new specification (https://tc39.es/source-map/2024/), which can now advance through the Ecma publishing process to become an official standard.
JavaScript keeps evolving and adding many new features over time: this can significantly help JavaScript developers, but it comes with its own problems. Every new feature is something that browsers need to implement and optimize, and which might cause bugs and slowdowns.
Some committee members (mostly representing browsers) initiated a discussion about whether there are possible alternative approaches to evolving the language. The primary example presented as a potential path forward is to leverage existing developer tools more extensively: many developers already transpile their code, so what if we made it "official"?
We could split the language into two parts that together would compose the ECMAScript standard: a core part implemented natively by engines, and a higher-level part that tools would compile down to that core.
The discussion is in its very early stages, and the direction it will take is uncertain. It could evolve in many ways and it's possible that TC39 could ultimately decide that actually, the way things work today is fine. There are many voices pushing in opposite directions, and everything is still on the table. Stay tuned for more developments!
Our colleague Jesse Alama presented Decimal, showing the latest iterations on the design and data model for decimal in response to feedback in and between plenaries. The most recent version proposed an "auto-canonicalize" variant of IEEE 754 Decimal128, in which the notion of precision (or "quantum", to use the official term) of Decimal128 values would not be exposed. We received some feedback there, so decimal stays at stage 1. But stay tuned! We're going back to the drawing board and will keep iterating.
Promise.try() is a new API that allows you to wrap a function - whether async or not - so that it is treated as though it were always asynchronous. It replaces cumbersome workarounds like new Promise((resolve) => resolve(myFunction())). We didn't work on this proposal, but are nonetheless looking forward to using it!
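A small illustrative example (the function and variable names are made up):

// A function that may throw synchronously or return a promise.
function mightThrowOrReturnPromise() {
  if (Math.random() < 0.5) throw new Error("sync failure");
  return Promise.resolve("async result");
}

// Old workaround: wrap the call so synchronous throws become rejections.
const legacy = new Promise((resolve) => resolve(mightThrowOrReturnPromise()));

// With Promise.try(): same behavior, without the wrapper boilerplate.
Promise.try(mightThrowOrReturnPromise)
  .then((value) => console.log("ok:", value))
  .catch((err) => console.error("failed:", err.message));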
Sometimes TC39 makes some mistakes, designing features one way only to later realise that it should have been done differently.
One example of this is the level of support that JavaScript has for defining subclasses of built-in classes, such as
class MyUint8Array extends Uint8Array {}
const myArr = new MyUint8Array([1, 2, 3]);
myArr instanceof MyUint8Array; // true!
// also true, even if we didn't redefine .map to return a MyUint8Array
myArr.map(x => x) instanceof Uint8Array;
It often leads to vulnerabilities in JavaScript engines, and leads to many hard-to-optimize patterns that make everybody pay the cost of this language feature, even for those who don't rely on it.
After features have been shipped for years it's usually too late for TC39 to change them: our highest priority is to "not break the web", meaning that once developers start relying on something it's going to stay there forever.
The discussion focused on whether or not the use cases appeared real; the conclusion was that they are for Array and Promise, but that we can move forward with the conservative step of making only the prototype methods of typed arrays, ArrayBuffer, and SharedArrayBuffer stop looking at their this to dynamically construct the corresponding class. The committee will further investigate the use cases for RegExp.
We're very excited to announce that our colleague Ben Allen presented the Measure proposal, and it has reached Stage 1. Measure proposes an API for handling general-purpose unit conversion between measurement scales and measurement systems. Measure was originally part of the localization-related Smart Units proposal, but was promoted into its own proposal in response to demand for this tool in a wide range of contexts.
Smart Units is a proposal to include an API for localizing measured quantities to locale-appropriate scales and measuring systems. This can be complicated by how the appropriate measuring scale for some usages varies based on the type of thing being measured; for example, many locales use a different measurement scale for the heights of people than they do for other length measurements.
Although much of the action involved in developing this proposal has shifted to the related Measure proposal, in this session we considered what units and usages should be supported.
Our colleague Philip Chimento presented a short update on the progress of getting Temporal into browsers. Here's the chart representing test262 conformance as of the last plenary!
Good news from the AsyncContext champions, including our colleague Andreu Botella: the proposal is almost ready for Stage 2.7! All the semantics relevant to ECMAScript have been finalized, and it's now just waiting on finalizing the integration semantics with the rest of the web APIs.
Update on what happened in WebKit in the week from November 1 to November 8.
The end-to-end latency slightly improved in the GstWebRTC backend, as the latency from capture devices is now properly taken into account.
Georges has proposed a new feature for Linux ports of WebKit: support for a new category of profiling information called “counters”. Counters are useful to track information over time, for example, the FPS of WebKit while showing a web page, or how much memory a web page is consuming during its display. The counters are integrated with Sysprof.
This is another tool that developers and enthusiasts can use to help profile and improve the performance of WebKit on Linux. The FPS counter is added as a proof of concept. This is still under review.
The prefer-hardware WebCodecs option for video decoders is no longer ignored. It is used as a hint to attempt decoding with hardware-accelerated components; if that fails, the decoder falls back to software.
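For context, this is the standard WebCodecs knob now being honored; a minimal, illustrative configuration where the codec string and callbacks are placeholders:

// Sketch of a WebCodecs video decoder asking for hardware acceleration.
const decoder = new VideoDecoder({
  output: (frame) => { frame.close(); },           // placeholder frame handling
  error: (e) => console.error("decode error:", e),
});

decoder.configure({
  codec: "avc1.42E01E",                    // hypothetical H.264 codec string
  hardwareAcceleration: "prefer-hardware", // the hint that is now honored
});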
On the JSC ARMv7 front, work on enabling OMG, the highest WebAssembly optimizing JIT tier, is ongoing. Max Rottenkolber has added support for atomics. Justin Michaud has synced up the tail call code with 64-bits and submitted PRs to further sync the 64/32-bit OMG generators. Most importantly, he's been working on an OSR fix (On Stack Replacement, the ability for the VM to tier up to an optimizing tier even in the middle of a loop, which is vital for taking advantage of the optimized code). Angelos Oikonomopoulos has been going over corner cases in the B3 (the intermediate representation used by OMG) tests and submitting numerous fixes.
The minimum required ICU version is now 70.1. This change updates the ICU version checked by CMake to reflect a change that had already been done in 284568@main, which rebaselined JavaScriptCore to ICU 70. By updating the version checks, the build will fail as early as possible in case the required ICU version is not installed. In addition to ICU, the minimum versions of HarfBuzz and libxml2 were updated too, as these two libraries depend on ICU.
Philip fixed the --enable-write-console-messages-to-stdout setting so that it works inside AudioWorklet environments; previously it would have been ignored.
The MediaRecorder backend gained WebM support (which requires GStreamer 1.24.9 or newer), and audio bitrate configuration support.
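As an illustration of what the new support enables; a sketch where the capture constraints and bitrate values are only examples:

// Sketch: recording a MediaStream to WebM with an explicit audio bitrate.
async function record() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const recorder = new MediaRecorder(stream, {
    mimeType: "video/webm",      // WebM output, now supported by the backend
    audioBitsPerSecond: 128000,  // audio bitrate configuration
  });
  recorder.ondataavailable = (event) => { /* collect event.data blobs */ };
  recorder.start();
}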
The GTK port of the MiniBrowser now uses the GtkGraphicsOffload widget when built with a modern GTK4 version. This allows GTK and the compositor to optimize the web view contents, potentially scanning them out directly, or even placing them in a monitor overlay plane. This should lead to lower power consumption. It is an “invisible” improvement: users won't notice any visual difference.
The WPE WebKit 2.47.1 development release is now available. This is the first preview release for the upcoming stable series, and it includes a few new features such as support for the Spiel speech synthesis library, improvements to DMA-BUF usage in WebGL and video decoding, and several new features and improvements in the WPEPlatform API.
As usual, feedback for development releases is welcome, including issue reports on Bugzilla.
New, modern platform API that supersedes usage of libwpe and WPE backends.
Carlos Garcia added basic touch input support to WPEPlatform DRM plug-in.
Mario published an article based on the talk delivered at the WebKit Contributors meeting on October 22nd, summarizing the work on WebKit done at Igalia in the past twelve months: Igalia and WebKit: status update and plans.
The original slides are also available.
That’s all for this week!
It’s been more than 2 years since the last time I wrote something here, and in that time a lot of things happened. Among those, one of the main highlights was me moving back to Igalia‘s WebKit team, but this time I moved as part of Igalia’s support infrastructure to help with other types of tasks such as general coordination, team facilitation and project management, among other things.
On top of those things, I’ve been also presenting our work around WebKit in different venues, such as in the Embedded Open Source Summit or in the Embedded Recipes conference, for instance. Of course, that included presenting our work in the WebKit community as part of the WebKit Contributors Meeting, a small and technically focused event that happens every year, normally around the Bay Area (California). That’s often a pretty dense presentation where, over the course of 30-40 minutes, we go through all the main areas that we at Igalia contribute to in WebKit, trying to summarize our main contributions in the previous 12 months. This includes work not just from the WebKit team, but also from other ones such as our Web Platform, Compilers or Multimedia teams.
So far I have done that only a couple of times, both last year on October 24th as well as this year, just a couple of weeks ago in the latest instance of the WebKit Contributors meeting. I believe the session was interesting and informative but, unfortunately, it does not get recorded, so this time I thought I'd write a blog post to make it more widely accessible to people not attending that event.
This is a long read, so maybe grab a cup of your favorite beverage first…
So first of all, what is the relationship between Igalia and the WebKit project?
In a nutshell, we are the lead developers and the maintainers of the two Linux-based WebKit ports, known as WebKitGTK and WPE. These ports share a common baseline (e.g. GLib, GStreamer, libsoup) and also some goals (e.g. performance, security), but other than that their purpose is different, with WebKitGTK being aimed at the Linux desktop, while WPE is mainly focused on embedded devices.
This means that, while WebKitGTK is the go-to solution to embed Web content in GTK applications (e.g. GNOME Web/Epiphany, Evolution), and therefore integrates well with that graphical toolkit, WPE does not even provide a graphical toolkit since its main goal is to be able to run well on embedded devices that often don’t even have a lot of memory or processing power, or not even the usual mechanisms for I/O that we are used to in desktop computers. This is why WPE’s architecture is designed with flexibility in mind with a backends-based architecture, why it aims for using as few resources as possible, and why it tries to depend on as few libraries as possible, so you can integrate it virtually in any kind of embedded Linux platform.
Besides that port-specific work, which is what our WebKit and Multimedia teams focus a lot of their effort on, we also contribute at a different level in the port-agnostic parts of WebKit, mostly around the area of Web standards (e.g. contributing to Web specifications and implementing them) and the JavaScript engine. This work is carried out by our Web Platform and Compilers teams, which tirelessly contribute to the different parts of WebCore and JavaScriptCore that affect not just the WebKitGTK and WPE ports, but also the rest of them to a bigger or smaller degree.
Last but not least, we also devote a considerable amount of our time to other topics such as accessibility, performance, bug fixing, QA... and also to make sure WebKit works well on 32-bit devices, which is an important thing for a lot of WPE users out there.
At Igalia we distinguish 4 main types of users of the WebKitGTK and WPE ports of WebKit:
Port users: this category would include anyone that writes a product directly against the port’s API, that is, apps such as a desktop Web browser or embedded systems that rely on a fullscreen Web view to render its Web-based content (e.g. digital signage systems).
Platform providers: in this category we would have developers that build frameworks with one of the Linux ports at its core, so that people relying on such frameworks can leverage the power of the Web without having to directly interface with the port’s API. RDK could be a good example of this use case, with WPE at the core of the so-called Thunder plugin (previously known as WPEFramework).
Web developers: of course, Web developers willing to develop and test their applications against our ports need to be considered here too, as they come with a different set of needs that need to be fulfilled, beyond rendering their Web content (e.g. using the Web Inspector).
End users: And finally, the end user is the last piece of the puzzle we need to pay attention to, as that's what makes all this effort a task worth undertaking, even if most of them most likely don't even know what WebKit is, which is perfectly fine :-)
We like to make this distinction of 4 possible types of users explicit because we think it's important to understand the variety of use cases and the diversity of potential users and customers we need to serve, since that is what drives our decisions and the way we prioritize our work.
Our main goal is that our product, the WebKit web engine, is useful for more and more people in different situations. Because of this, it is important that the platform is homogeneous and that it can be used reliably with all the engines available nowadays, and this is why compatibility and interoperability are a must, and why we work with the standards bodies to help with the design and implementation of several Web specifications.
With WPE, it is very important to be able to run the engine in small embedded devices, and that requires good performance and being efficient in multiple hardware architectures, as well as great flexibility for specific hardware, which is why we provided WPE with a backend-based architecture, and reduced dependencies to a minimum.
Then, it is also important that the QA Infrastructure is good enough to keep the releases working and with good quality, which is why I regularly maintain, evolve and keep an eye on the EWS and post-commit bots that keep WebKitGTK and WPE building, running and passing the tens of thousands of tests that we need to check continuously, to ensure we don’t regress (or that we catch issues soon enough, when there’s a problem). Then of course it’s also important to keep doing security releases, making sure that we release stable versions with fixes to the different CVEs reported as soon as possible.
Finally, we also make sure that we keep evolving our tooling as much as possible (see for instance the release of the new SDK earlier this year), as well as improving the documentation for both ports.
Last, all this effort would not be possible if not because we also consider a goal of us to maintain an efficient collaboration with the rest of the WebKit community in different ways, from making sure we re-use and contribute to other ports as much code as possible, to making sure we communicate well in all the forums available (e.g. Slack, mailing list, annual meeting).
Well, first of all the usual disclaimer: number of commits is for sure not the best possible metric, and therefore should be taken with a grain of salt. However, the point here is not to focus too much on the actual numbers but on the more general conclusions that can be extracted from them, and from that point of view I believe it’s interesting to take a look at this data at least once a year.
With that out of the way, it’s interesting to confirm that once again we are still the 2nd biggest contributor to WebKit after Apple, with ~13% of the commits landed in this past 12-month period. More specifically, we landed 2027 patches out of the 15617 ones that took place during the past year, only surpassed by Apple and their 12456 commits. The remaining 1134 patches were landed mostly by Sony, followed by RedHat and several other contributors.
Now, if we remove Apple from the picture, we can observe how this year our contributions represented ~64% of all the non-Apple commits, a figure that grew about ~11% compared to the past year. This confirms once again our commitment to WebKit, a project we started contributing about 14 years ago already, and where we have been systematically being the 2nd top contributor for a while now.
The 10 main areas we have contributed to in WebKit in the past 12 months are the following ones:
In the next sections I’ll talk a bit about what we’ve done and what we’re planning to do next for each of them.
content-visibility:auto
This feature allows skipping the painting and rendering of off-screen sections, which is particularly useful to avoid the browser spending time rendering parts of large pages, as content outside of the viewport doesn't get rendered until it becomes visible.
We completed the implementation and it’s now enabled by default.
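A small sketch of how a page can opt into this behavior from script and observe it; the element ID and size hint are hypothetical:

// Sketch: opt a long section into content-visibility: auto from script.
const section = document.querySelector("#comments");  // hypothetical element
section.style.contentVisibility = "auto";
section.style.containIntrinsicSize = "auto 600px";    // placeholder size hint

// Fired when the engine starts or stops skipping the section's rendering.
section.addEventListener("contentvisibilityautostatechange", (event) => {
  console.log("rendering skipped:", event.skipped);
});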
Navigation API
This is a new API to manage browser navigation actions and examine history, which we started working on in the past cycle. There’s been a lot of work happening here and, while it’s not finished yet, the current plan is that Apple will continue working on that in the next months.
hasUAVisualTransition
This is an attribute of the NavigateEvent interface, which is meant to be true if the User Agent has performed a visual transition before a navigation event. It is something that we have also finished implementing, and it is now also enabled by default.
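To give a feel for both of these, here is a minimal, illustrative sketch of the Navigation API using the new attribute; the handler bodies are placeholders:

// Sketch: intercept same-document navigations with the Navigation API.
navigation.addEventListener("navigate", (event) => {
  if (!event.canIntercept) return;

  // true when the UA already performed a visual transition for this navigation
  const uaTransitioned = event.hasUAVisualTransition;

  event.intercept({
    async handler() {
      // Load and render the new content here; skip a custom animation
      // if the UA already provided a transition.
      if (!uaTransitioned) {
        // e.g. run a page transition animation
      }
    },
  });
});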
Secure Curves in the Web Cryptography API
In this case, we worked on fixing several Web Interop related issues, as well as on increasing test coverage within the Web Platform Tests (WPT) test suites.
On top of that we also moved the X25519 feature to the “prepare to ship” stage.
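For reference, this is the kind of Web Crypto usage the X25519 work enables; a minimal sketch where the key usages and derived length are illustrative:

// Sketch: X25519 key agreement via the Web Cryptography API.
async function deriveSharedBits() {
  const alice = await crypto.subtle.generateKey("X25519", false, ["deriveBits"]);
  const bob = await crypto.subtle.generateKey("X25519", false, ["deriveBits"]);

  // Derive 256 shared bits from Alice's private key and Bob's public key.
  return crypto.subtle.deriveBits(
    { name: "X25519", public: bob.publicKey },
    alice.privateKey,
    256,
  );
}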
Trusted Types
This work is related to reducing DOM-based XSS attacks. Here we finished the implementation, which is now pending being enabled by default.
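A short, illustrative sketch of the API involved; the policy name and the naive sanitizer are placeholders:

// Sketch: a Trusted Types policy guarding HTML injection sinks.
const policy = trustedTypes.createPolicy("app-policy", {
  createHTML: (input) => input.replace(/</g, "&lt;"),  // naive example sanitizer
});

// With the `require-trusted-types-for 'script'` CSP directive in place,
// sinks like innerHTML only accept TrustedHTML created by a policy.
document.querySelector("#out").innerHTML = policy.createHTML("<b>hi</b>");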
MathML
We continued working on the MathML specification, adding support for padding, border and margin, as well as increasing the WPT score by ~5%.
The plan for next year is to continue working on core features and improve the interaction with CSS.
Cross-root ARIA
Web components have accessibility-related issues with native Shadow DOM as you cannot reference elements with ARIA attributes across boundaries. We haven’t worked on this in this period, but the plan is to work in the next months on implementing the Reference Target proposal to solve those issues.
Canvas Formatted Text
Canvas has no solution for adding formatted, multi-line text, so we would also like to explore and prototype the Canvas Place Element proposal in WebKit, which allows better text in canvas and more extended features.
Completed migration from Cairo to Skia for the Linux ports
If you have followed the latest developments, you probably already know that the Linux WebKit ports (i.e. WebKitGTK and WPE) have moved from Cairo to Skia for their 2D rendering library, which was a pretty big and important decision taken after a long time trying different approaches and experiments (including developing our own HW-accelerated 2D rendering library!), as well as running several tests and measuring results in different benchmarks.
The results in the end were pretty overwhelming and we decided to give Skia a go, and we are happy to say that, as of today, the migration has been completed: we covered all the use cases in Cairo, achieving feature parity, and we are now working on implementing new features and improvements built on top of Skia (e.g. GPU-based 2D rendering).
On top of that, Skia is now the default backend for WebKitGTK and WPE since 2.46.0, released on September 17th, so if you're building a recent version of those ports you'll already be using Skia as their 2D rendering backend. Note that Skia uses its GPU-based backend only on desktop environments; on embedded devices the situation is trickier and, for now, the default is the CPU-based Skia backend, but we are actively working to narrow the gap and to enable GPU-based rendering on embedded too.
Architecture changes with buffer sharing APIs (DMABuf)
We did a lot of work here, such as a big refactoring of the fencing system to control the access to the buffers, or the continued work towards integrating with Apple’s DisplayLink infrastructure.
On top of that, we also enabled more efficient composition using damage information, so that we don't need to pass as much information to the compositor, which would otherwise slow the CPU down.
Enablement of the GPUProcess
On this front, we enabled by default the compilation for WebGL rendering using the GPU process, and we are currently working on performance review and on enabling it for other types of rendering.
New SVG engine (LBSE: Layer-Based SVG Engine)
If you are not familiar with this, here the idea is to make sure that we reuse the graphics pipeline used for HTML and CSS rendering, and use it also for SVG, instead of having its own pipeline. This means, among other things, that SVG layers will be supported as a 1st-class citizen in the engine, enabling HW-accelerated animations, as well as support for 3D transformations for individual SVG elements.
On this front, on this cycle we added support for the missing features in the LBSE, namely:
Besides all this, we also improved the performance of the new layer-based engine by reducing repaints and re-layouts as much as possible (further optimizations still possible), narrowing the performance gap with the current engine for MotionMark. While we are still not at the same level of performance as the current SVG engine, we are confident that there are several key places where, with the right funding, we should be able to improve the performance to at least match the current engine, and therefore be able to push the new engine through the finish line.
General overhaul of the graphics pipeline, touching different areas (WIP):
On top of everything else commented above, we also worked on a general refactor and simplification of the graphics pipeline. For instance, we have been working on the removal of the Nicosia layer now that we are not planning to have multiple rendering implementations, among other things.
DMABuf-based sink for HW-accelerated video
We merged the DMABuf-based sink for HW-accelerated video in the GL-based GStreamer sink.
WebCodecs backend
We completed the implementation of audio/video encoding and decoding, and this is now enabled by default in 2.46. As for the next steps, we plan to keep working on the integration of WebCodecs with WebGL and WebAudio.
GStreamer-based WebRTC backends
We continued working on GstWebRTC, bringing it to a point where it can be used in production in some specific use cases, and we will still be working on this in the next months.
Other
Besides the points above, we also added an optional text-to-speech backend based on libspiel to the development branch, and worked on general maintenance around the support for Media Source Extensions (MSE) and Encrypted Media Extensions (EME), which are crucial for the use case of WPE running in set-top-boxes, and is a permanent task we will continue to work on in the next months.
ARMv7/32-bit support:
A lot of work happened around 32-bit support in JavaScriptCore, especially around WebAssembly (WASM): we ported the WASM BBQJIT and ported/enabled concurrent JIT support, and we also completed 80% of the implementation for the OMG optimization level of WASM, which we plan to finish in the next months. If you are unfamiliar with what the OMG and BBQ optimization tiers in WASM are, I'd recommend taking a look at this article on webkit.org: “Assembling WebAssembly“.
We also contributed to JIT-less WASM, which is very useful for embedded systems that can't support JIT due to security or memory-related constraints, and also did some work on the In-Place Interpreter (IPInt), which is a new version of the WASM low-level interpreter (LLInt) that uses less memory and executes WASM bytecode directly without translating it to LLInt bytecode (and should therefore be faster to execute).
Last, we also contributed most of the implementation for the WASM GC, with the exception of some Kotlin tests.
As for the next few months, we plan to investigate and optimize heap/JIT memory usage in 32-bit, as well as to finish several other improvements on ARMv7 (e.g. IPInt).
The new WPE API aims at making it easier to use WPE in embedded devices by removing the hassle of having to handle several libraries in tandem (i.e. WPEWebKit, libWPE and WPEBackend-FDO, for instance, all available from WPE's releases page), and by providing a more modern API in general, better aimed at the most common use cases of WPE.
A lot of effort happened this year along these lines, including the fact that we finally upstreamed and shipped its initial implementation with WPE 2.44, back in the first half of the year. Now, while we recommend users give it a try and report feedback as much as possible, this new API is still not set in stone, with regular development still ongoing, so if you have the chance to try it out and share your experience, comments are welcome!
Besides shipping its initial implementation, we also added support for external platforms, so that other ones can be loaded beyond the Wayland, DRM and “headless” ones, which are the default platforms already included with WPE itself. This means for instance that a GTK4 platform, or another one for RDK could be easily used with WPE.
Then of course a lot of API additions were included in the new API in the latest months:
Last, we also added support for testing automation, and the new API now supports WebDriver.
With all this done so far, the plan now is to complete the new WPE API, with a focus on the Settings API and accessibility support, write API tests and documentation, and then also add an external platform to support GTK4. This is done on a best-effort basis, so there’s no specific release date.
This was also a good year for WebKit on Android, also known as WPE Android, a project that sits on top of WPE and its public API (instead of developing a fully-fledged WebKit port).
In case you’re not familiar with it, the idea is to provide a WebKit-based alternative to the Chromium-based Web view on Android devices, in a way that leverages hardware acceleration when possible and integrates natively (and nicely) with the various Android subsystems, and of course with Android’s native main loop. Note that this is an experimental project for now, so don’t expect production-ready quality quite yet, but it can hopefully be used to start experimenting with selected use cases.
If you’re adventurous enough, you can already try the APKs yourself from the releases page in GitHub at https://github.com/Igalia/wpe-android/releases.
Anyway, as for the changes that happened in the past 12 months, here is a summary:
On top of that, we published three different blog posts covering different topics, from a general introduction to a deeper dive into the internals, and showing some demos. You can check them out on Jani’s blog at https://blogs.igalia.com/jani.
As for the future, we’ll focus on stabilization and regular maintenance for now, and then we’d like to work towards achieving production-ready quality for specific cases if possible.
On the QA front, we had a busy year but in general we could highlight the following topics.
In the coming months, our main focus will be a revamp of the QA infrastructure to make sure that we can get all the bots (including the debug ones) to a healthier state, finish the migration of all the bots to the new SDK and, ideally, be able to bring back the ready-to-use WPE images that we used to have available on wpewebkit.org.
The current release cadence has been working well, so we continue issuing major releases every 6 months (March, September), and then minor and unstable development releases happening on-demand when needed.
As usual, we kept aligning releases for WebKitGTK and WPE, with both of them happening at the same time (see https://webkitgtk.org/releases and https://wpewebkit.org/release), and then also publishing WebKit Security Advisories (WSA) when necessary, both for WebKitGTK and for WPE.
Last, we also shortened the time it takes for security fixes to reach stable releases this year, and we removed support for libsoup2 from WPE, as that library is no longer maintained.
On tooling, the main piece of news is that this year we released the initial version of the new SDK, which is developed on top of OCI-based containers. This new SDK fixes the issues with the existing approaches based on JHBuild and Flatpak: one of them was great for development but poor for testing and QA, while the other was great for testing and QA but not very convenient for development.
This new SDK is regularly maintained and currently runs on Ubuntu 24.04 LTS with GCC 14 & Clang 18. It has been made public on GitHub and announced to the public in May 2024 in Patrick’s blog, and is now the officially recommended way of building WebKitGTK and WPE.
As for documentation, we didn’t do as much as we would have liked here, but we still landed a few contributions to docs.webkit.org, mostly related to WebKitGTK (e.g. Releases and Versioning, Security Updates, Multimedia). We plan to do more in this regard in the coming months, mostly by writing and publishing more documentation and perhaps also some tutorials.
This has been a fairly long blog post but, as you can see, it’s been quite a year for WebKit here at Igalia, with many exciting changes happening on several fronts, so there was quite a lot to comment on here. That said, you can always check the slides of the presentation at the WebKit Contributors Meeting here if you prefer a more concise version of the same content.
In any case, what’s clear is that the coming months are probably going to be quite interesting as well, with all the work that’s already going on in WebKit and its Linux ports, so it’s quite possible that 12 months from now I’ll be writing an equally long essay. We’ll see.
Thanks for reading!
Form controls are notoriously difficult to style, something the web community has been talking about for years. In 2019, when I was still at Microsoft, I had been working with Greg Whitworth to start evangelizing the work that was being planned for <select>, as well as the Open UI community group that would help bring this plan to life.
There's a lot that has happened in those five years, and more still to come. Most recently I've seen work being done to improve the customizability of the <details> and <summary> elements. More stylable accordions. Exciting!
<details> is hard to work with
The <details> element is a disclosure widget, which is a piece of UI that has a brief summary or heading and a control to expand the UI to show more details.
When you use <details>, however, you don't have a lot of control over customizing it (like a lot of HTML controls). The little triangle that indicates whether it's open or closed is not easily replaced. Styling or customizing <details> just isn't easy, which in turn means developers end up building a custom component for their accordions.
This ends up creating a lot of unnecessary work. Using existing HTML elements means you get all the security, accessibility and performance benefits that have already been baked in. The browser takes care of all that for you. Rebuilding from scratch means you've got to worry about adding all that back in, especially the accessibility bits.
But you can't style it how you want to, so you end up building it from scratch anyway. Rinse. Repeat. A tale as old as time.
There's quite a bit being proposed to help make <details> more customizable and interoperable between browsers (because no one likes it when browsers make things behave/display differently!).
A few highlights:
1. display property restrictions lifted, so you can use other display types like flex & grid
2. ::marker styling
3. A new ::details-content pseudo-element

The exciting news is that items 1 & 3 in the list above should be shipping in Chrome 131 Stable next week (first week of November 2024). This will bring a new ::details-content pseudo-element to the web, allowing more access to parts of <details>.
<details> styling explainer by David Baron
Much of this work to improve form controls started within the Open UI community group. The community there has been working for years to make progress in this space, and getting all the browser vendors to agree and work together is often a difficult and time-consuming task. Cheers to all you do.
Unleashing the power of 3D graphics in the Raspberry Pi is a key commitment for Igalia through its collaboration with Raspberry Pi. The introduction of Super Pages for the Raspberry Pi 4 and 5 marks another step in this journey, offering some performance enhancements and more efficient memory usage. In this post, we’ll dive deep into the technical details of Super Pages, discuss the challenges we faced during implementation, and illustrate the benefits this feature brings to the Raspberry Pi ecosystem.
A Memory Management Unit (MMU) is a hardware component responsible for handling memory access at the system level. It translates virtual addresses used by programs into physical addresses in main memory, enabling efficient memory management and protection. The MMU allows the operating system to allocate memory dynamically, isolating processes from one another to prevent them from interfering with each other’s memory.
Recommendation: 📚 Structured Computer Organization by Andrew Tanenbaum
The V3D MMU, which is part of the Broadcom GPU found in the Raspberry Pi 4 and 5, is responsible for translating 32-bit virtual addresses (VA) used by V3D into 40-bit physical addresses used externally to V3D. The MMU relies on a page table, stored in physical memory, which maps virtual addresses to their corresponding physical addresses. The operating system manages this page table, and the MMU uses it to perform address translation during memory access.
A fundamental principle of modern operating systems is that memory is not stored contiguously. Instead, a contiguous block of memory is divided into smaller blocks, called “pages”, which are scattered across the entire address space. These pages are typically 4KB in size. This approach enables more efficient memory management and allows for features like virtual memory and memory protection.
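To make that split concrete, here is a tiny stand-alone sketch (plain user-space C, nothing V3D-specific, and the address is made up) of how a virtual address is divided into a page number, which indexes the page table, and an offset within the 4KB page.

```c
/* Minimal illustration of 4KB paging arithmetic (not driver code):
 * the low 12 bits address a byte inside the page, the remaining bits
 * select the page-table entry holding the physical page number. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u
#define PAGE_SIZE  (1u << PAGE_SHIFT)   /* 4096 bytes */
#define PAGE_MASK  (PAGE_SIZE - 1u)

int main(void)
{
    uint32_t va  = 0x12345678u;             /* hypothetical virtual address */
    uint32_t vpn = va >> PAGE_SHIFT;        /* virtual page number */
    uint32_t off = va & PAGE_MASK;          /* offset within the page */

    printf("va=0x%08x -> page=0x%05x offset=0x%03x\n", va, vpn, off);
    return 0;
}
```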
Over the years, the amount of available memory in computers has increased dramatically. An early IBM PC had up to 640 KiB of RAM, whereas the ThinkPad I’m typing on right now has 32 GB of RAM. Naturally, memory demands have grown alongside this increase. Today, it’s common for web browsers to consume several gigabytes of RAM, and a single shader can take up multiple megabytes.
As memory usage grows, a 4KB page size may become inefficient for managing large memory blocks. Handling a large number of small pages for a single block means the MMU must perform multiple address translations, which increases overhead. This can reduce the effectiveness of the Translation Lookaside Buffer (TLB), as it must store and handle more entries, potentially leading to more cache misses and reduced overall performance.
This is why many CPU manufacturers have introduced support for larger page sizes. For instance, x86 CPUs typically support 4KB and 2MB pages, with 1GB pages available if supported by the hardware. Similarly, ARM64 CPUs can support 4KB, 16KB, and 64KB page sizes. These larger page sizes help reduce the number of pages the MMU needs to manage, improving performance by reducing the overhead of address translation and making more efficient use of the TLB.
So, if CPUs are using bigger sizes, why shouldn’t GPUs do the same?
By default, V3D supports 4KB pages. However, by setting specific bits in the page table entry, it is possible to create 64KB “Big Pages” and 1MB “Super Pages.” The issue is that the current V3D driver available in Linux does not enable the use of Big or Super Pages, meaning this hardware feature is currently unused.
The advantage of enabling Big and Super Pages is that once an entry for any page within a Big or Super Page is cached in the MMU, it can be used to translate all virtual addresses within that page’s range without needing to fetch additional entries. In theory, this should result in improved performance, especially for applications with high memory demands, such as those using multiple large buffer objects (BOs).
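As a rough back-of-the-envelope illustration of why this matters, assume the MMU can keep some fixed number of translations cached. The snippet below (with a made-up cache size of 64 entries, not V3D's real figure) shows how much address space those cached entries can cover at each page size.

```c
/* Toy arithmetic only: the number of cached translations is assumed. */
#include <stdio.h>

int main(void)
{
    const unsigned entries = 64;  /* hypothetical number of cached translations */

    printf("4 KB pages:  %u KB of reach\n", entries * 4);          /* 256 KB */
    printf("64 KB pages: %u MB of reach\n", entries * 64 / 1024);  /*   4 MB */
    printf("1 MB pages:  %u MB of reach\n", entries * 1);          /*  64 MB */
    return 0;
}
```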
As Igalia continually strives to enhance the experience for Raspberry Pi users, we decided to implement this feature in the upstream kernel. But before diving into the implementation details, let’s take a look at the real-world results and see if the theoretical benefits of Super Pages have translated into measurable improvements for Raspberry Pi users.
With Super Pages implemented, let’s now explore the actual performance improvements observed on the Raspberry Pi and see how impactful this feature is for users.
To measure the impact of Super Pages, we tested a variety of game and demo traces on the Raspberry Pi 4 and 5, covering genres from action to racing. On average, we observed a +1.40% FPS improvement on the Raspberry Pi 4 and a +1.30% improvement on the Raspberry Pi 5.
For instance, on the Raspberry Pi 4, Warzone 2100 saw an 8.36% FPS increase, and on the Raspberry Pi 5, Quake II enjoyed a 3.62% boost. These examples demonstrate the benefits of Super Pages in resource-demanding applications, where optimized memory handling becomes critical.
Raspberry Pi 4 results:
Trace | Before Super Pages (FPS) | After Super Pages (FPS) | Improvement |
---|---|---|---|
warzone2100.30secs.1024x768.trace | 56.39 | 61.10 | +8.36% |
ue4_shooter_game_shooting_low_quality_640x480.gfxr | 20.71 | 21.47 | +3.65% |
quake3e_capture_frames_1800_through_2400_1920x1080.gfxr | 60.88 | 62.50 | +2.67% |
supertuxkart-menus_1024x768.trace | 112.62 | 115.61 | +2.65% |
ue4_shooter_game_shooting_high_quality_640x480.gfxr | 20.45 | 20.88 | +2.10% |
quake2-gles3-1280x720.trace | 59.76 | 60.84 | +1.82% |
ue4_sun_temple_640x480.gfxr | 27.60 | 28.03 | +1.54% |
vkQuake_capture_frames_1_through_1200_1280x720.gfxr | 54.59 | 55.30 | +1.29% |
ue4_shooter_game_low_quality_640x480.gfxr | 32.75 | 33.08 | +1.00% |
sponza_demo02_800x600.gfxr | 20.90 | 21.03 | +0.61% |
supertuxkart-racing_1024x768.trace | 8.58 | 8.63 | +0.60% |
ue4_shooter_game_high_quality_640x480.gfxr | 19.62 | 19.74 | +0.59% |
serious_sam_trace02_1280x720.gfxr | 44.00 | 44.21 | +0.50% |
ue4_vehicle_game-2_640x480.gfxr | 12.59 | 12.65 | +0.49% |
sponza_demo01_800x600.gfxr | 21.42 | 21.46 | +0.19% |
quake3e-1280x720.trace | 84.45 | 84.52 | +0.09% |
Raspberry Pi 5 results:
Trace | Before Super Pages (FPS) | After Super Pages (FPS) | Improvement |
---|---|---|---|
quake2-gles3-1280x720.trace | 151.77 | 157.26 | +3.62% |
supertuxkart-menus_1024x768.trace | 306.79 | 313.88 | +2.31% |
warzone2100.30secs.1024x768.trace | 140.92 | 144.03 | +2.21% |
vkQuake_capture_frames_1_through_1200_1280x720.gfxr | 131.45 | 134.20 | +2.10% |
ue4_vehicle_game-2_640x480.gfxr | 24.42 | 24.88 | +1.89% |
ue4_shooter_game_high_quality_640x480.gfxr | 32.12 | 32.53 | +1.29% |
ue4_sun_temple_640x480.gfxr | 42.05 | 42.55 | +1.20% |
ue4_shooter_game_shooting_high_quality_640x480.gfxr | 52.77 | 53.31 | +1.04% |
quake3e-1280x720.trace | 238.31 | 240.53 | +0.93% |
warzone2100.70secs.1024x768.trace | 151.09 | 151.81 | +0.48% |
sponza_demo02_800x600.gfxr | 50.81 | 51.05 | +0.46% |
supertuxkart-racing_1024x768.trace | 20.91 | 20.98 | +0.33% |
ue4_shooter_game_low_quality_640x480.gfxr | 59.68 | 59.86 | +0.29% |
quake3e_capture_frames_1_through_1800_1920x1080.gfxr | 167.70 | 168.17 | +0.29% |
ue4_shooter_game_shooting_low_quality_640x480.gfxr | 53.40 | 53.51 | +0.22% |
quake3e_capture_frames_1800_through_2400_1920x1080.gfxr | 163.37 | 163.64 | +0.17% |
serious_sam_trace02_1280x720.gfxr | 60.00 | 60.03 | +0.06% |
sponza_demo01_800x600.gfxr | 45.04 | 45.04 | <.01% |
While an average +1% FPS improvement might seem modest, Super Pages can deliver more noticeable gains in memory-intensive 3D applications and when the GPU is under heavy usage. Let’s see how the Super Pages perform on Mesa CI.
To avoid introducing regressions in user-space, I usually test my custom kernels with Mesa CI, focusing on the “broadcom-postmerge” stage to verify that all Piglit and CTS tests run smoothly. For Super Pages, I was pleasantly surprised by the job duration results, as some jobs were shortened by several minutes.
Job | Before Super Pages | After Super Pages |
---|---|---|
v3d-rpi4-traces:arm64 | ~4m30s | ~3m40s |
v3d-rpi5-traces:arm64 | ~3m30s | ~2m45s |
v3d-rpi4-gl-full:arm64 */6 | ~24-25 minutes | ~22-23 minutes |
v3d-rpi5-gl-full:arm64 | ~48 minutes | ~48 minutes |
v3dv-rpi4-vk-full:arm64 */6 | ~44 minutes | ~41 minutes |
v3dv-rpi5-vk-full:arm64 | ~102 minutes | ~92 minutes |
Seeing these reductions is especially rewarding. For example, the “v3dv-rpi5-vk-full:arm64” job duration decreased by 10 minutes, meaning more FPS for users and shorter wait times for Mesa developers.
After sharing a couple of tables, I’ll admit that showcasing performance improvements solely through numbers doesn’t always convey the real impact. Personally, I find it more satisfying to see performance gains in action with real-world applications.
This led me to explore PlayStation 2 (PS2) emulation on the RPi 5. From watching YouTube videos, I noticed that PS2 is a popular console for the RPi 5. While the PlayStation (PS1) emulates well even on the RPi 4, and Nintendo 64 and Sega Saturn struggle across most hardware, PS2 hits a sweet spot for testing the RPi 5’s limits.
Fortunately, I still have my childhood PS2 — my second console after the Nintendo GameCube, and one of the most successful consoles worldwide, including in Brazil. With a library packed with titles like Metal Gear Solid, Resident Evil, Tomb Raider, and Shadow of the Colossus, the PS2 remains a great system for collectors and retro gamers alike.
I selected a few games from my collection to benchmark on the RPi 5 using a PS2 emulator. My emulator of choice was AetherSX2, with Vulkan support. Although AetherSX2 is no longer in development, it still performs well on the RPi.
Initially, many games were barely playable, especially those with large buffer objects, like Shadow of the Colossus and Gran Turismo 4. However, after enabling Super Pages support, I noticed immediate improvements. For example, Shadow of the Colossus wouldn’t even open before Super Pages, and while it’s not fully playable yet, it does load now. This isn’t a silver bullet, but it’s a step forward in improving the driver one piece at a time.
I ended up selecting four games for a video comparison: Burnout 3: Takedown, Metal Gear Solid 3: Snake Eater, Resident Evil 4, and Tekken 4.
Disclaimer: The BIOS used in the emulator was extracted from my own PS2, and I played only games I own, with ROMs I personally extracted. Neither I nor Igalia encourage using downloaded BIOS or ROM files from the internet.
From the video, we can see noticeable improvements in all four games. Although they aren’t perfectly playable yet, the performance gains are evident, particularly in Resident Evil 4, where the gameplay saw a solid 5 FPS boost. I realize 18 FPS might not satisfy most players, but I still had a lot of fun playing Resident Evil 4 on the RPi 5.
When tracking the FPS for these games, it’s clear that the performance gains go well beyond the average 1% seen in other benchmarks. Super Pages show their true potential in high-memory applications like PS2 emulation.
Having seen the performance gains Super Pages can bring to the Raspberry Pi, let’s now dive into the technical aspects of the feature.
The first challenge was figuring out how to allocate a contiguous block of memory using shmem. The Shared Memory Virtual Filesystem (shmem) is a flexible memory mechanism that allows the GPU and CPU to share access to BOs through the system’s temporary filesystem, tmpfs. tmpfs is a volatile filesystem that stores files in RAM, making it ideal for temporary or high-speed data that doesn’t need to persist across reboots.
For example, to allocate a 256KB BO across four 64KB pages, we need four contiguous 64KB memory blocks. However, by default, tmpfs only allocates memory in PAGE_SIZE chunks (as seen in shmem_file_setup()), and PAGE_SIZE is 4KB on the Raspberry Pi 4 and 16KB on the Raspberry Pi 5. Since the function drm_gem_object_init(), which initializes an allocated shmem-backed GEM object, relies on shmem_file_setup() to back these objects in memory, we had to consider alternatives, as the default PAGE_SIZE would divide memory into increments that are too small to ensure the large, contiguous blocks needed by the GPU.
The solution we proposed was to create drm_gem_object_init_with_mnt(), which allows us to specify the tmpfs mountpoint where the GEM object will be created. This enables us to allocate our BOs in a mountpoint that supports larger page sizes. Additionally, to ensure that our BOs are allocated in the correct mountpoint, we introduced drm_gem_shmem_create_with_mnt(), which allows the mountpoint to be specified when creating a new DRM GEM shmem object.
[PATCH v6 04/11] drm/gem: Create a drm_gem_object_init_with_mnt() function
[PATCH v6 06/11] drm/gem: Create shmem GEM object in a given mountpoint
The next challenge was figuring out how to create a new mountpoint that would allow for different page sizes based on the allocation. Simply creating a new tmpfs mountpoint with a fixed bigger page size wouldn’t suffice, as we needed flexibility for various allocations. Inspired by the i915 driver, we decided to use a tmpfs mountpoint with the “huge=within_size” flag. This flag, which requires the kernel to be configured with CONFIG_TRANSPARENT_HUGEPAGE, enables the allocation of huge pages.
Transparent Huge Pages (THP) is a kernel feature that automatically manages large memory pages to improve performance without needing changes from applications. THP dynamically combines smaller pages into larger ones, typically 2MB, reducing memory management overhead and improving cache efficiency.
To support our new allocation strategy, we created a dedicated tmpfs mountpoint for V3D, called gemfs, which provides us an ideal space for managing these larger allocations.
[PATCH v6 05/11] drm/v3d: Introduce gemfs
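For the curious, the sketch below shows roughly what mounting such a dedicated tmpfs instance can look like. It is modeled on the approach described above (and on what i915 does), not copied from the actual patch: error handling is simplified and the helper name is made up.

```c
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mount.h>

/* Sketch only: mount a kernel-internal tmpfs instance that may use THP.
 * v3d_gemfs_mount_sketch() is a hypothetical helper, not the upstream code. */
static struct vfsmount *v3d_gemfs_mount_sketch(void)
{
	struct file_system_type *type;
	struct vfsmount *gemfs;
	char opts[] = "huge=within_size"; /* only honored with CONFIG_TRANSPARENT_HUGEPAGE */

	type = get_fs_type("tmpfs");
	if (!type)
		return NULL;

	/* SB_KERNMOUNT: an internal mount, not visible in user-space mount tables. */
	gemfs = vfs_kern_mount(type, SB_KERNMOUNT, type->name, opts);
	if (IS_ERR(gemfs))
		return NULL;

	return gemfs;
}
```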
With everything in place for contiguous allocations, the next step was configuring V3D to enable Big/Super Page support.
We began by addressing a major source of memory pressure on the Raspberry Pi: the current 128KB alignment for allocations in the virtual memory space. This alignment wastes space when handling small BO allocations, especially since the userspace driver performs a large number of these small allocations.
As a result, we can’t fully utilize the 4GB address space available for the GPU on the Raspberry Pi 4 or 5. For example, we can currently allocate up to 32,000 BOs of 4KB (~140MB) and 3,000 BOs of 400KB (~1.3GB). This becomes a limitation for memory-intensive applications. By reducing the page alignment to 4KB, we can significantly increase the number of BOs, allowing up to 1,000,000 BOs of 4KB (~4GB) and 10,000 BOs of 400KB (~4GB).
Therefore, the first change I made was reducing the VA alignment of all allocations to 4KB.
[PATCH v6 07/11] drm/v3d: Reduce the alignment of the node allocation
With the alignment issue resolved, we could implement the code to properly set the flags on the Page Table Entries (PTEs) for Big/Super Pages. Setting these flags is straightforward: a simple bitwise operation. The challenge lies in determining which BOs can be allocated in Super Pages. For a BO to be eligible for a Big Page, its virtual address must be aligned to 64KB, and the same applies to its physical address. The same goes for Super Pages, except that the addresses must be aligned to 1MB.
If the BO qualifies for a Big/Super Page, we need to iterate over 16 4KB pages (for Big Pages) or 256 4KB pages (for Super Pages) and insert the appropriate PTE.
Additionally, we modified the way we iterate through the BO’s memory. This was necessary because THP may not always allocate the entire BO contiguously. For example, it might allocate only 1MB of a 2MB block contiguously. To handle this, we now iterate over the blocks of contiguous memory scattered across the scatterlist, ensuring that each segment is properly handled during the allocation process.
What is a scatterlist? It is a Linux Kernel data structure that manages non-contiguous memory as if it were contiguous. It organizes separate memory blocks into a single logical buffer, allowing efficient data handling, especially in Direct Memory Access (DMA) operations, without needing a physically contiguous memory allocation.
[PATCH v6 08/11] drm/v3d: Support Big/Super Pages when writing out PTEs
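To make the PTE part a bit more tangible, here is a toy user-space model of the idea. The flag name, bit position and the flat page-table array are all invented for illustration, and the real V3D entries encode more than what is shown here; the point is simply that every 4KB entry covered by a 1MB region carries the Super Page flag, so any single cached entry is enough to translate the whole range.

```c
/* Toy model, not the driver code: fill the 4 KB PTEs that cover a 1 MB
 * Super Page and tag each of them. PTE_SUPERPAGE and the flat pagetable[]
 * array are placeholders for illustration only. */
#include <stdint.h>
#include <stdio.h>

#define PAGES_PER_SUPERPAGE 256u            /* 1 MB / 4 KB */
#define PTE_SUPERPAGE       (1u << 31)      /* made-up flag bit */

static uint32_t pagetable[4096];            /* toy page table: VA page -> PTE */

static void map_superpage(uint32_t first_page, uint32_t first_pfn)
{
    /* Both the virtual and the physical side must be 1 MB aligned,
     * i.e. the first page/pfn index is a multiple of 256. */
    for (uint32_t i = 0; i < PAGES_PER_SUPERPAGE; i++)
        pagetable[first_page + i] = (first_pfn + i) | PTE_SUPERPAGE;
}

int main(void)
{
    map_superpage(0, 0x100);                /* hypothetical mapping */
    printf("pte[0]   = 0x%08x\n", pagetable[0]);
    printf("pte[255] = 0x%08x\n", pagetable[255]);
    return 0;
}
```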
However, the last few patches alone don’t fully enable the use of Super Pages. While PATCH 08/11 technically allows for Super Pages, we’re still relying on DRM GEM shmem objects, meaning allocations are still happening in PAGE_SIZE chunks. Although Big/Super Pages could potentially be used if the system naturally allocated 1MB or 64KB contiguously, this is quite rare and not our intended outcome. Our goal is to actively use Big/Super Pages as much as possible.
To achieve this, we’ll use the V3D-specific mountpoint we created earlier for BO allocation whenever possible. By creating BOs through drm_gem_shmem_create_with_mnt(), we can ensure that large pages are allocated contiguously when possible, enabling the consistent use of Big/Super Pages.
[PATCH v6 09/11] drm/v3d: Use gemfs/THP in BO creation if available
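As a sketch of what this looks like from the driver's side, BO creation can simply pick the mountpoint when one is available. The helper name below is made up, and the signature of drm_gem_shmem_create_with_mnt() is assumed from the patch series discussed above rather than verified here; when no huge-page-capable mount exists, everything falls back to the regular shmem path.

```c
#include <drm/drm_gem_shmem_helper.h>

/* Sketch only: choose the THP-capable mountpoint when creating a BO.
 * drm_gem_shmem_create_with_mnt() comes from the patch series discussed
 * above; its exact signature here is an assumption for illustration. */
static struct drm_gem_shmem_object *
v3d_bo_create_shmem_sketch(struct drm_device *dev, struct vfsmount *gemfs,
			   size_t size)
{
	if (gemfs)
		return drm_gem_shmem_create_with_mnt(dev, size, gemfs);

	/* Fallback: regular shmem-backed BO, allocated in PAGE_SIZE chunks. */
	return drm_gem_shmem_create(dev, size);
}
```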
And there you have it: Big/Super Pages are now fully enabled in V3D. The only requirement to activate this feature in any given kernel is ensuring that CONFIG_TRANSPARENT_HUGEPAGE is enabled.
You can learn more about ongoing enhancements to the Raspberry Pi driver stack in this XDC 2024 talk by José María “Chema” Casanova Crespo. In the talk, Chema discusses the Super Pages work I developed, along with other advancements in the driver stack.
Of course, there are still plenty of improvements on the horizon at Igalia. I’m currently experimenting with 64KB CLE allocations in user-space, and I hope to share more good news soon.
Finally, I’d like to express my gratitude to Iago Toral and Tvrtko Ursulin for their invaluable support in developing Super Pages for the V3D kernel driver. Thank you both for sharing your experience with me!
A caret is the symbol used to show where text will appear in text input applications, such as a word processor, code editor or input form. It might be a vertical bar, an underscore, or some other shape. It may flash, or pulse.
On the web, sites currently have some control over the editing caret through CSS, along with entirely custom solutions. Here I discuss the current state of caret customization on the web and look at proposals to allow more customization through CSS.
The majority of web sites use the default editing caret. In all browsers it is a vertical bar in the text color that blinks. The blink rate and duration varies across browsers: Chrome’s cursor blinks continuously at a fixed rate; Firefox blinks the caret continuously at a rate from user settings; Safari blinks with keyboard input but switches to a strobe effect with dictation input.
All browsers support the CSS caret-color property, allowing sites to color the caret independently of the text and to animate that color. You can also make the caret disappear with a transparent color.
Changing the caret shape is currently not possible through CSS in any browser. There are some ways to work around this, such as those documented in this Stack Overflow post. The general idea is to hide the caret and replace it with an element controlled by script. The script relies on the selection range and other APIs to know where the element should be positioned. Or you can completely replace the default browser editing behavior with a custom JavaScript editor (as is done by, for example, Google Docs).
Browser support for caret customization currently leaves web developers in an awkward situation: accept the basic default cursor with color control, or implement a custom editor, but nothing in-between.
There are at least two CSS properties not yet implemented in browsers that improve caret customization. The first concerns the interaction between color animation and the default blinking behavior, and the second adds support for different caret shapes.
When the caret-color is animated, there is no way to reliably synchronize the animation with the browser’s blinking rate. Different browsers blink at different rates, and it may be under user control. The CSS caret-animation property, when set to the value manual, suppresses the blinking, giving the color animation complete control over the color of the caret at all times. Using caret-animation: auto (the initial value) leaves the blinking behavior under browser control.
Site control of the blinking is both an accessibility benefit and potentially harmful to users. For users sensitive to motion, disabling the blinking may be a benefit. At the same time, a cursor that does not blink is much harder for users to recognize. Please use caution when employing this property once it is available in browsers.
There is an implementation of caret-animation in Chrome 132 behind the CSSCaretAnimation flag, accessible through --enable-blink-features="CSSCaretAnimation" on the command line. Feel free to comment on the bug if you have thoughts on this feature.
The shape of the caret in native applications is most commonly a vertical bar, an underscore or a rectangular block. In addition, the shape often varies depending on the input mode, such as insert or replace. The CSS caret-shape property allows sites to choose one of these shapes for the caret, or leave the choice up to the browser. The recognized property values are auto, bar, block and underscore.
No browser currently supports the caret-shape property, and the specification needs a little work to confirm the exact location and shape of the underscore and block. Please leave feedback on the Chromium bug if you would like this feature to be implemented.
The caret-shape property does not allow any control over the size of the caret, such as the thickness of the bar or block. There was, for example, a proposal to add caret-width as a means of working around browser bugs with zooming and transforming the caret. Please create a new CSS Working Group issue if you would like greater customization and can provide use cases.
Early this month I spent a couple of weeks in Montreal, visiting the city, but mostly to attend the GStreamer Conference and the hackfest that followed, which happened to be co-located with the XDC Conference. It was my first time in Canada and I thoroughly enjoyed it. Thanks to everyone from the GStreamer community who organized and attended the event.
For now you can replay the full streams of both conference days; the individual videos for each talk will be published soon:
GStreamer Conference 2024 - Day 1, Room 1 - October 7, 2024
https://www.youtube.com/watch?v=KLUL1D53VQI
GStreamer Conference 2024 - Day 1, Room 2 - October 7, 2024
https://www.youtube.com/watch?v=DH64D_6gc80
GStreamer Conference 2024 - Day 2, Room 2 - October 8, 2024
https://www.youtube.com/watch?v=jt6KyV757Dk
GStreamer Conference 2024 - Day 2, Room 1 - October 8, 2024
https://www.youtube.com/watch?v=W4Pjtg0DfIo
And a couple of pictures of Igalians :)